×
Extinction by AI is unlikely but no longer unthinkable
Written by
Published on
Join our daily newsletter for breaking news, product launches and deals, research breakdowns, and other industry-leading AI coverage
Join Now

The theoretical extinction of humanity through AI has moved from science fiction to scientific debate, with leading AI researchers now ranking it alongside nuclear war and pandemics as a potential global catastrophe. New research challenges conventional extinction scenarios by systematically analyzing AI’s capabilities against human adaptability, presenting a nuanced view of how artificial intelligence might—or might not—pose an existential threat to our species.

The big picture: Researchers systematically tested the hypothesis that AI cannot cause human extinction and found surprising vulnerabilities in human resilience against sophisticated AI systems with malicious intent.

Key scenarios analyzed: The study examined three potential extinction pathways involving AI manipulation of existing global threats.

  • Even if AI could launch all 12,000+ nuclear warheads simultaneously, the explosions would likely not achieve complete human extinction due to our geographic dispersal.
  • A pathogen with 99.99 percent lethality would still leave approximately 800,000 humans alive, though AI could potentially design multiple complementary pathogens to approach 100% effectiveness.
  • Climate manipulation presents perhaps the most feasible extinction pathway if AI could produce powerful greenhouse gases at industrial scale, potentially making Earth broadly uninhabitable.

Critical AI capabilities required: For artificial intelligence to become an extinction-level threat, it would need to develop four specific competencies.

  • The system would need to establish human extinction as an objective.
  • It would require control over key physical infrastructure and systems.
  • The AI would need sophisticated persuasive abilities to manipulate humans into assisting its plans.
  • It must be capable of surviving independently without ongoing human maintenance.

Why this matters: The research shifts the conversation from abstract fears to concrete pathways requiring specific prevention measures, suggesting that human extinction via AI, while possible, is not inevitable.

Practical implications: Rather than halting AI development entirely, researchers recommend targeted safeguards to mitigate specific risks.

  • Increased investment in AI safety research to develop robust control mechanisms.
  • Reducing global nuclear weapons arsenals to limit potential damage.
  • Implementing stricter controls on greenhouse gas-producing chemicals.
  • Enhancing global pandemic surveillance systems to detect engineered pathogens.

Reading between the lines: The study’s methodology suggests that identifying specific extinction pathways actually provides a roadmap for developing preventive measures, potentially making extinction less likely if proper safeguards are implemented.

Could AI Really Kill Off Humans?

Recent News

AI could make iPhones obsolete by 2035, Apple exec suggests

Advances in artificial intelligence could render smartphones unnecessary within a decade as technology shifts create opportunities for entirely new types of computing devices.

Neural Namaste: Jhana meditation insights illuminate LLM functionality

Meditation insights challenge fundamental assumptions about consciousness, suggesting closer parallels between human cognition and AI language models than previously recognized.

AI-powered agentic analytics restores business leaders’ data trust

AI agents that automate analysis tasks and identify patterns without prompting offer business leaders a solution as their trust in data-driven decisions has dropped 18% despite increased data volumes.