AI News Misinformation: Why Nearly Half of AI Assistants Get the Facts Wrong

In a major new study, the European Broadcasting Union (EBU) and the BBC investigated how well artificial intelligence assistants handle news-related questions. Their findings have sparked serious debate about the reliability of AI tools that millions of people rely on every day for news and information.

🔍 Study Overview and Key Findings

The research examined around 3,000 responses from four leading AI assistants — ChatGPT (OpenAI), Gemini (Google), Copilot (Microsoft), and Perplexity. It covered 14 different languages and involved 22 public-service media organizations across 18 countries, making it one of the most comprehensive analyses of its kind.

Here are the main findings:

  • About 45% of all AI-generated news responses contained at least one major error, such as false facts or sourcing failures.
  • Roughly 81% of all answers had some form of problem, including minor issues like missing context or subtle misattributions.
  • Nearly one-third of responses displayed sourcing errors — missing, misleading, or incorrect attributions.
  • In 20% of cases, assistants delivered outdated or clearly incorrect information.

⚠️ Platform-by-Platform Performance

The study found significant differences between platforms, but none were completely accurate. Among them, Google’s Gemini performed the worst, with nearly 72% of its responses showing major sourcing issues. The other assistants — ChatGPT, Copilot, and Perplexity — performed better but still displayed widespread factual and attribution errors.

These results show that while AI assistants can be helpful tools, their reliability as news sources remains deeply flawed.

📚 Examples of Errors and Why They Matter

The study highlighted several striking examples of misinformation:

  • One AI model falsely claimed that new legislation on disposable vapes had been passed when it had not.
  • Another instance showed an assistant stating that Pope Francis was still the current pope, several months after his death in April 2025.

Such errors may appear trivial, but they undermine trust in AI and, more importantly, trust in verified journalism. When people depend on these tools for breaking news, even small inaccuracies can spread rapidly and distort public understanding.

📈 Rising Use of AI for News — and the Risks

According to the Reuters Institute Digital News Report 2025, around 7% of online news consumers already use AI assistants to get their news each week. That figure rises to 15% among users under the age of 25.

As younger generations shift away from traditional news outlets toward conversational AI platforms, the danger of misinformation increases. When AI cannot distinguish between opinion, rumor, and verified fact, the line between truth and fiction becomes alarmingly blurred.

🧠 Why Do AI Assistants Struggle with News Accuracy?

Researchers outlined several reasons why AI assistants frequently fail to deliver accurate news responses:

  1. Outdated Training Data: AI models are trained on large datasets that may not reflect real-time events. This lag often results in outdated or missing information (see the sketch after this list).
  2. Sourcing Weakness: Many AI tools fail to properly cite or verify their sources, leading to unreliable or incomplete references.
  3. Confusion Between Opinion and Fact: AI sometimes blends editorial commentary with factual reporting, misrepresenting opinion pieces as verified news.
  4. Complexity of Global News: Legal, political, and cultural nuances make it hard for AI systems to fully grasp evolving stories without context.
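
To make the first reason concrete, here is a minimal Python sketch. It is purely illustrative: the cutoff date, function names, and canned replies are assumptions, not anything taken from the study. It shows how an assistant that answers only from its training data can silently serve stale news unless it compares the question's time frame against its knowledge cutoff.

```python
from datetime import date

# Hypothetical knowledge cutoff for an illustrative model; real cutoffs vary by vendor and release.
KNOWLEDGE_CUTOFF = date(2024, 6, 1)

def needs_fresh_sources(question_date: date, cutoff: date = KNOWLEDGE_CUTOFF) -> bool:
    """Return True when a news question concerns events after the model's training data ends."""
    return question_date > cutoff

def answer_news_question(question: str, question_date: date) -> str:
    # Without a retrieval step, anything after the cutoff can only be guessed,
    # which is how stale claims about legislation or officeholders slip through.
    if needs_fresh_sources(question_date):
        return "My training data may predate this; please check a current, cited news source."
    return f"Answering from training data: {question}"

print(answer_news_question("Who is the current pope?", date(2025, 10, 1)))
```

Real assistants try to close this gap with live web retrieval, but the study's findings suggest that the retrieval and sourcing step is exactly where many answers still go wrong.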

🏛️ Impact on Trust, Democracy, and Media Literacy

According to Jean Philip De Tender, EBU’s Media Director, “When people don’t know what to trust, they end up trusting nothing at all — and that can deter democratic participation.”

In short, if AI continues to misinform, it could weaken faith not just in technology but also in journalism and democratic institutions. Media experts emphasize the growing need for media literacy — helping users verify facts and understand how AI produces its answers.

🔧 What Needs to Happen Next

The EBU and BBC report offered several key recommendations for AI developers, regulators, and media organizations:

  • Accountability and Transparency: AI companies must release public reports showing accuracy rates and language-specific error data.
  • Better Source Linking: Every AI-generated answer should include direct references and timestamps for its information (one possible data shape is sketched after this list).
  • Independent Monitoring: Regular external audits should be conducted to track misinformation trends across multiple languages and markets.
  • Collaboration with Media Organizations: Public-service media outlets should work alongside AI companies to set new ethical and technical standards for news integrity.
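
As a purely illustrative sketch of what the Better Source Linking recommendation could look like in practice (the structure and field names below are assumptions, not a format specified in the report), an AI-generated answer could carry its citations and retrieval timestamps as explicit, machine-checkable data:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SourceCitation:
    # One attributed source: which outlet the claim comes from, where, and when it was retrieved.
    outlet: str
    url: str
    retrieved_at: datetime

@dataclass
class NewsAnswer:
    text: str
    citations: list[SourceCitation] = field(default_factory=list)

    def is_traceable(self) -> bool:
        # A minimal transparency check: an answer without at least one dated citation
        # cannot be audited by readers or independent monitors.
        return len(self.citations) > 0

answer = NewsAnswer(
    text="Example summary of a news event.",
    citations=[
        SourceCitation(
            outlet="Example Public Broadcaster",
            url="https://example.org/article",
            retrieved_at=datetime.now(timezone.utc),
        )
    ],
)
print(answer.is_traceable())  # True
```

Carrying sources as structured data rather than free text would also make the independent, multi-language auditing the report calls for much easier to automate.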

✅ What You Can Do as a News Consumer

Here are some practical steps to avoid being misled by AI-generated misinformation:

  • Always check if the AI assistant provides a clear, traceable source.
  • Verify major claims with established, reputable news websites.
  • Be skeptical of vague or sensational answers that lack citations.
  • Encourage friends and family — especially younger users — to question and double-check AI responses.

🔮 Looking Ahead: AI’s Evolving Role in News

This study is a stark reminder that while AI assistants are improving, they are not yet reliable as primary sources of news. With tools like ChatGPT, Gemini, and Copilot becoming deeply integrated into search engines, smart devices, and voice platforms, developers face an urgent need to strengthen accuracy, credibility, and transparency.

The future of AI in journalism depends on one critical factor — trust. To maintain it, AI companies must take responsibility for their outputs and ensure that innovation never comes at the expense of truth.

For those interested in deeper insights, the complete study titled “News Integrity in AI Assistants” is available from the European Broadcasting Union’s official website.
