Artificial intelligence is being used more and more. So, inevitably, are its wrong results.
Photo: Matthias Bein/dpa
The hype around artificial intelligence has now reached a kind of saddle point. Were the warnings of an AI world domination that would render humans superfluous perhaps a little exaggerated? Are the supposedly superior language models not better poets and thinkers after all, but simply better search engines? That a pragmatic normality seems to be returning can be observed in more and more fields where work is now done "AI-supported". In some scientific disciplines, the view has already taken hold that AI is not merely an option in research but an obligation; otherwise the results and the processed data would simply no longer be state of the art. And in journalism, too, the new compression of work and data volumes is becoming the standard.
The trouble is that the subdued expectation of AI as a mere search engine does not clear up its fundamental problems. At the beginning of March, the journal "Columbia Journalism Review" published an investigation into the error-proneness of generative AI models and came to sobering results. Eight models were given excerpts from newspaper articles and asked to match them to the original pieces, with headline and source. On average, the models answered more than 60 percent of these queries incorrectly. Out of 200 queries each, ChatGPT was wrong 134 times, Grok 3 could cite only ten correctly, and even the much-praised Perplexity model gave 90 false answers in its paid version.
What is remarkable is not only the enormously high error rate, but that the AI models prioritize decisiveness over correctness. The models answer "confidently wrong": they deliver incorrect information without the slightest hesitation. As a result, the false statements are checked less often, which can make disinformation spread further and faster.
Has the machine copied all-too-human misconduct here? Perhaps it suffers from the so-called impostor syndrome, in which the slightest suspicion of fallibility makes one feel like an exposed fraud. Or it acts like a salesperson who, in order to keep the expert's licence and mollify customers, would rather fantasize anything about the product than appear clueless.
The fact is that these machines reflect the social average from which they emerged. We trained these AIs. Perhaps it is slowly time for users to learn a lesson from the AI in turn: do not take the output of machine-optimized probability calculations at face value; concede and admit mistakes; and reject the work-compressing optimization that leaves no room for any of this.
The "nd" remains. Thanks to you.
The nd cooperative belongs to our readers and authors. With the cooperative, we guarantee the independence of our editorial team and try to make all of our texts accessible, even to those who do not have the money to pay for our work.
Out of conviction, we have no hard paywall on our website. But that also means we must continually ask everyone who can contribute to help finance our journalism from the left. That costs nerves, and not only ours; our readers and authors, too, sometimes get more than enough of it.
Nevertheless: only together can we defend left-wing points of view!
With your support we can still:
→ offer independent and critical reporting.
→ cover topics that are overlooked elsewhere.
→ create a platform for diverse and marginalized voices.
→ counter disinformation and hate speech.
→ accompany and deepen social debates from the left.
Be part of our solidarity financing and support "nd" with a contribution of your choice. Together we can create a media landscape that is independent, critical, and accessible to everyone.