Hey chatbot, is this true? AI ‘factchecks’ sow misinformation

Business Recorder Coverage
AI chatbots like Grok, ChatGPT, and Gemini were widely used for fact-checking during the India-Pakistan conflict but often spread misinformation.

Grok wrongly identified old footage from Sudan’s Khartoum airport as a missile strike on Pakistan’s Nur Khan airbase.

A video of a fire in Nepal was misidentified as showing Pakistan’s military response to Indian strikes.

Experts warn that AI chatbots are unreliable for breaking news verification.

The Hindu Coverage
AI chatbots failed to verify viral content accurately, often fabricating details.

NewsGuard’s research found 10 leading chatbots prone to repeating falsehoods, including Russian disinformation.

The Tow Center for Digital Journalism reported that AI chatbots rarely decline to answer questions they cannot verify.

Grok wrongly validated an AI-generated image, even fabricating details about the person in the image.

Samaa TV Coverage
AI chatbots misled users during the India-Pakistan conflict, raising concerns over their reliability.

Grok incorrectly labeled a viral video of a giant anaconda as genuine, citing non-existent scientific expeditions.

Meta’s decision to end third-party fact-checking in the U.S. has increased reliance on AI chatbots.

Experts warn that AI-generated misinformation is becoming harder to detect.

Arab News Coverage
AI chatbots spread misinformation during the India-Pakistan conflict, misidentifying old footage.

Grok inserted the far-right “white genocide” conspiracy theory into unrelated queries.

Researchers warn that AI chatbots may be subject to political influence.

Human fact-checking resources have been reduced, increasing reliance on AI.

Each source highlights different aspects: Business Recorder focuses on specific misinformation cases, The Hindu emphasizes AI’s inability to verify content, Samaa TV discusses the broader impact of AI-generated misinformation, and Arab News raises concerns about political bias in AI responses.

