
The rise of AI hype and unsupported predictions: Technology leaders and investors are making increasingly bold and precise claims about the future of artificial intelligence, often without substantial evidence or scientific backing.

  • Elon Musk, speaking at the Future Investment Initiative in Riyadh, Saudi Arabia, predicted that by 2040 there would be more humanoid robots than people.
  • Musk also claimed that AI is improving tenfold each year, extrapolating that it would be 10,000 times better in four years and potentially able to do anything a human can within one to two years.
  • Masayoshi Son, a prominent investor, asserted that “Artificial Super Intelligence” would arrive in 2035 and would be 10,000 times smarter than humans.
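The compounding arithmetic behind Musk’s extrapolation is simple to check. A minimal sketch (the “tenfold each year” rate is Musk’s own figure, quoted above; the calculation itself adds nothing beyond exponentiation):

```python
# Musk's claim: AI improves 10x per year. Compounded over four years,
# that yields the "10,000 times better" figure cited in the article.
rate_per_year = 10
years = 4
improvement = rate_per_year ** years
print(improvement)  # 10000
```

The number is internally consistent, which is exactly the critics’ point: the extrapolation is trivial, and the contested part is whether a uniform “10x per year” rate of improvement is measurable or real at all.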

Critiquing the claims: These predictions have been met with skepticism from experts in the field, who point out the lack of supporting evidence and the improbability of such rapid advancements.

  • Gary Marcus, an AI researcher, challenged Musk’s prediction about humanoid robots, noting that there are only about 1.5 billion cars on the road and that it is implausible we will see roughly six humanoid robots for every car in the near future.
  • Marcus also questioned the meaning and validity of Son’s claim about AI being “10,000 times smarter than humans,” describing it as a “made up round number” with no real significance.
  • The lack of specificity in measures of improvement and the failure to consider potential bottlenecks in data, compute power, and inherent limitations of large language models (LLMs) were highlighted as major flaws in these predictions.
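Marcus’s robots-per-car comparison can be reproduced with one division. A rough sketch (the ~9 billion world-population projection for 2040 is an assumption used for illustration; only the 1.5 billion car count comes from the article):

```python
# Rough check of Marcus's comparison. Musk's prediction implies at least
# one humanoid robot per person by 2040.
cars_on_road = 1.5e9      # ~1.5 billion cars, per Marcus
population_2040 = 9e9     # assumed population projection, not from the article
robots = population_2040  # "more robots than people" => robots >= population
robots_per_car = robots / cars_on_road
print(round(robots_per_car))  # 6
```

That is where the “six humanoid robots for every car” figure comes from: matching today’s car fleet took over a century of manufacturing scale-up, so sextupling it in humanoid robots within fifteen years is the implausible step.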

The dangers of precise predictions: The trend of making highly specific predictions about future AI capabilities and requirements is problematic due to the numerous unknowns in the field.

  • Son provided precise figures for the power requirements (400 gigawatts), chip needs (200 million), and capital investment ($9 trillion) for future AI development.
  • Experts argue that such precise predictions are impossible given the uncertainties in material costs, battery technology advancements, software improvements, and overall efficiency of future AI models.
  • The track record of similar predictions, such as Musk’s unfulfilled promise of a million robotaxis by 2020, serves as a cautionary tale against taking these claims at face value.
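Dividing Son’s headline figures against each other gives a sense of the precision being claimed. A back-of-envelope sketch (the per-chip numbers below are derived here for scale, not quoted in the article):

```python
# Implied per-unit figures from Son's projections: $9 trillion of capital,
# 200 million chips, 400 gigawatts of power.
capital_usd = 9e12
chips = 200e6
power_watts = 400e9  # 400 GW expressed in watts

print(capital_usd / chips)   # 45000.0 -> $45,000 of capital per chip
print(power_watts / chips)   # 2000.0  -> 2 kW of power per chip
```

Each derived number depends on assumptions about future chip prices, efficiency, and cooling overhead that nobody can know today, which is why experts treat the three headline figures as false precision rather than forecasting.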

Motivations behind the hype: The inflated predictions and precise figures may serve specific interests within the AI and investment communities.

  • Investors working with other people’s money may benefit from promoting plausible stories with big numbers, as their fees are typically a percentage of the total investment.
  • The hype around AI can drive up investments and valuations, at least in the short term, regardless of whether the predictions come true.
  • There may be additional motivations for oil-rich countries to promote AI development, as current generative AI systems require substantial energy resources, potentially keeping fossil fuels relevant.

The need for critical thinking: As AI continues to advance, it’s crucial for the public, investors, and policymakers to approach bold claims and precise predictions with skepticism.

  • Experts emphasize the importance of demanding evidence and “showing the work” behind such predictions.
  • Understanding the limitations and current state of AI technology is essential for making informed decisions about investments and policy.
  • The field of AI is still in its infancy, and while progress is being made, it’s important to recognize the challenges and uncertainties that lie ahead.

Broader implications: The proliferation of unsupported AI predictions raises concerns about the potential impact on public perception, investment decisions, and policy-making.

  • Overhyped claims may lead to unrealistic expectations and potential disappointment when AI fails to deliver on inflated promises.
  • There’s a risk of misallocating resources based on overly optimistic projections, potentially diverting funding from more promising or necessary areas of research and development.
  • The disconnect between bold predictions and the current state of AI technology underscores the need for increased scientific literacy and critical evaluation of claims made by high-profile figures in the tech industry.
