Avram Piltch is the editor in chief of Tom’s Hardware, and he’s written a thoroughly researched article breaking down the promises and failures of LLM AIs.
Is there a meaningful difference between reproducing the work and giving a summary? Because I’ll absolutely be using AI to filter all the editorial garbage out of the news, set up and trained by me to surface what’s meaningful to me, stripped of all advertising, sponsorships, and detectable bias.
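Concretely, something like this minimal sketch, where `call_llm` is a stand-in for whatever model you run (local or hosted) and the interest list and prompt are purely illustrative:

```python
from dataclasses import dataclass

# Purely illustrative interest profile; swap in your own.
INTERESTS = ["open-source hardware", "LLM evaluation", "privacy law"]

@dataclass
class Article:
    title: str
    body: str

def call_llm(prompt: str) -> str:
    """Stand-in: route this to whatever model/API you actually run."""
    raise NotImplementedError

def summarize_if_relevant(article: Article) -> str | None:
    """Ask the model to strip ads/sponsorships and summarize,
    returning None when the piece doesn't match the profile."""
    prompt = (
        "Strip advertising, sponsorship mentions, and editorializing "
        "from this article, then summarize it in three sentences. "
        "If it is unrelated to these topics, reply exactly NOT_RELEVANT: "
        f"{', '.join(INTERESTS)}\n\n"
        f"Title: {article.title}\n\n{article.body}"
    )
    reply = call_llm(prompt).strip()
    return None if reply == "NOT_RELEVANT" else reply
```

The NOT_RELEVANT sentinel keeps the filtering and the summarizing in a single round trip per article.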
When you figure out how to train an AI without bias, let us know.
You’re confusing AI with ChatGPT, but to answer your question: if it’s my own bias, why would I care that it’s in my personal AI? That’s kind of the point: using my personal lens (bias) to determine what info I’d be interested in being alerted to.
The bias is in the AI’s design and its training dataset.
Oooh, I dunno, man. Having an AI feed you shit based on what fits your personal biases is basically what social media already does, and I don’t think that’s something we need more of.
???
I have yet to find an LLM that can summarize a text without errors. I already mentioned this in another post a few days back, but Google’s new search preview is driving me mad with all the hidden factual errors. They make me click only to realize that the LLM told me what I wanted to find, not what is there (wrong names, wrong dates, etc.). Even a crude string comparison like the sketch below catches some of them.
I greatly prefer the old excerpt summaries over the new imaginary ones (they’re currently A/B testing).
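A minimal sketch of that crude check, assuming nothing beyond the Python standard library: flag capitalized words and four-digit years in the summary that never occur in the source.

```python
import re

def suspect_facts(source: str, summary: str) -> list[str]:
    """Flag capitalized words and four-digit years that appear in the
    summary but nowhere in the source text, a rough net for the
    wrong-name/wrong-date errors described above."""
    candidates = set(re.findall(r"\b(?:[A-Z][a-z]+|\d{4})\b", summary))
    return sorted(c for c in candidates if c not in source)
```

For example, `suspect_facts("Ada Lovelace wrote the notes in 1843.", "Ada Byron wrote the notes in 1834.")` returns `['1834', 'Byron']`. It’s a plain substring check, so it misses paraphrases, but it costs nothing to run over a preview.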