As tensions escalated during the recent four-day conflict between India and Pakistan, many social media users turned to artificial intelligence (AI) chatbots for instant fact-checks. Platforms such as Elon Musk’s Grok on X, OpenAI’s ChatGPT, and Google’s Gemini were widely used for real-time verification. But experts warn that these tools often spread more misinformation than they prevent.
AI Fact-Checkers Under Fire
Phrases like “Hey @Grok, is this true?” are becoming common on X, reflecting a growing reliance on AI tools for verifying viral claims. However, many of the answers these bots provide are inaccurate, misleading, or entirely fabricated.
For example, during the India-Pakistan conflict:
- Grok misidentified old footage from Sudan’s Khartoum airport as a missile strike on Pakistan’s Nur Khan airbase.
- A video of a burning building in Nepal was falsely labeled as showing Pakistan’s retaliation for Indian airstrikes.
These incidents highlight the critical flaws in using generative AI as a substitute for professional fact-checking.
Human Fact-Checkers on the Decline
Tech companies such as Meta, the parent of Facebook and Instagram, have recently scaled back third-party fact-checking in the U.S. Meanwhile, X has promoted a “Community Notes” model in which users write the fact-checks themselves. Researchers say this model is slow and often ineffective, especially during fast-moving or politically charged events.
This shift leaves a dangerous gap — one that AI isn’t ready to fill.
AI Tools Fabricating ‘Facts’
According to NewsGuard and Columbia University’s Tow Center, AI chatbots frequently produce fabricated content, particularly when asked about sensitive or emerging issues. Examples include:
- Gemini confirming an AI-generated image as authentic and even inventing background details about it.
- Grok labeling a fictional video of a giant anaconda in the Amazon as “genuine” — citing imaginary scientific expeditions.
These AI “hallucinations” mislead users, often adding a false sense of credibility to viral hoaxes.
Bias, Manipulation, and Political Influence
Experts have also raised concerns about bias in AI chatbot responses, which can vary depending on how the systems are programmed or later modified.
In one recent incident, Grok inserted references to “white genocide,” a known far-right conspiracy theory, into unrelated answers, which xAI later blamed on an “unauthorized modification.” When questioned, Grok even speculated that Elon Musk himself was responsible for the change, a telling example of how unpredictable and opaque these tools can be.
Experts Sound the Alarm
Angie Holan, Director of the International Fact-Checking Network, warns:
“AI assistants can fabricate results or give biased answers after human coders specifically change their instructions. That’s especially dangerous when dealing with disinformation during conflicts or elections.”
Meanwhile, researchers emphasize that AI should not replace human fact-checkers. While useful for summarizing or sourcing, AI tools lack the judgment, accountability, and transparency that professional fact-checking demands.
Final Thoughts: Use AI With Caution
AI chatbots are becoming popular sources of information. But when it comes to fact-checking viral content, especially during wars or elections, these tools are still deeply flawed. From misidentifying videos to inventing facts, their mistakes can fuel disinformation instead of stopping it.
In a world flooded with fake news, the human touch still matters.