Conspiracy Theorists Can Be Deprogrammed

New research finds that artificial intelligence can significantly reduce belief in conspiracy theories, challenging the notion that conspiratorial thinking is impervious to intervention. In two landmark studies from MIT and Cornell University, AI-powered conversations that paired thoughtful presentation of evidence with Socratic questioning decreased conspiracy belief by an average of 20 percent, highlighting AI's potential as an effective tool against misinformation in an era when traditional debunking methods have shown limited success.
The big picture: Scientists have discovered that AI can successfully reduce conspiracy belief where other interventions have failed, with ChatGPT conversations leading a quarter of participants to substantially change their minds.
- Researchers from MIT and Cornell University conducted studies with over 1,000 participants, finding that after AI-facilitated discussions, believers reduced their conspiracy belief by an average of 20 percent.
- The effect was durable: follow-up measurements taken two months after the initial conversations showed sustained reductions in conspiracy belief.
How it works: Rather than imposing pre-selected conspiracy theories, researchers asked participants to describe scenarios where they believed powerful groups were acting secretly with malicious intent.
- Participants rated their belief in their chosen conspiracy theory before and after conversing with GPT-4 Turbo, which presented counterevidence and posed Socratic questions while building rapport.
- The AI system was programmed to present compelling evidence against the conspiracy while maintaining a respectful dialogue with believers.
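The protocol described above can be sketched roughly in code. This is an illustrative reconstruction, not the researchers' actual implementation: the system-prompt wording, function names, and the example belief are assumptions, while the model name matches the GPT-4 Turbo family the article cites.

```python
# Hypothetical sketch of the study's setup (not the authors' code).
# System-prompt wording and helper names are illustrative assumptions.

SYSTEM_PROMPT = (
    "The user believes the conspiracy theory summarized below. "
    "Engage respectfully, build rapport, ask Socratic questions, "
    "and present compelling counterevidence. Do not mock or lecture.\n\n"
    "User's stated belief: {belief}"
)

def build_messages(belief_summary: str, user_turns: list[str]) -> list[dict]:
    """Assemble the chat history sent to the model on each turn."""
    messages = [{"role": "system",
                 "content": SYSTEM_PROMPT.format(belief=belief_summary)}]
    for turn in user_turns:
        messages.append({"role": "user", "content": turn})
    return messages

def belief_change(pre: float, post: float) -> float:
    """Percent reduction in a belief rating, as in the study's 20% average."""
    return (pre - post) / pre * 100

if __name__ == "__main__":
    msgs = build_messages(
        "The moon landing was staged.",          # example belief (assumption)
        ["Why has no insider ever confessed?"],
    )
    # With an API key, the actual call would look something like
    # (requires the `openai` package):
    #   from openai import OpenAI
    #   reply = OpenAI().chat.completions.create(
    #       model="gpt-4-turbo", messages=msgs)
    print(len(msgs))                              # system prompt + one user turn
    print(round(belief_change(8, 6.4), 1))        # a 20.0 percent drop
```

The before/after belief ratings feed `belief_change`, which is how a drop from, say, 8 to 6.4 on the scale corresponds to the 20 percent average reduction reported.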
By the numbers: The impact of AI intervention varied significantly across participants, with some experiencing profound belief changes.
- One quarter of participants experienced transformative shifts, with their belief ratings dropping below 5 on a 10-point scale—essentially moving from belief to doubt.
- Another quarter became more tentative about their conspiracy beliefs without fully abandoning them.
Between the lines: The studies challenge the conventional wisdom that conspiracy believers are immune to evidence-based persuasion.
- Lead author Thomas Costello emphasized that “believers can revise their views if presented with sufficiently compelling evidence,” suggesting that the approach to countering misinformation matters as much as the content.
- Princeton cognitive scientist Kerem Oktar cautions that beliefs often serve functional purposes in people’s lives, and breaking these allegiances may come with significant personal or social costs.
Why this matters: In an era of rampant misinformation, these findings offer a promising approach to deprogramming conspiratorial thinking using artificial intelligence as an impartial, patient mediator.
- Traditional methods of conspiracy belief intervention have shown limited effectiveness, with a 2023 review of 25 studies finding most existing approaches don’t work.
- AI systems could potentially scale these interventions to reach millions of people holding various conspiracy beliefs across different domains.