RawFeed
Today's Hardest Hitting Stories - Raw and Unedited
Deloitte: AI Data Center Power Demand Could Surge 30x by 2035, Amid Power and Grid Capacity Constraints (via DEV)
/PRNewswire/ -- Key takeaways Significant growth in AI is spurring unprecedented demand for new AI data centers. Power demand from AI data centers could grow...
Oct 12, 2025
End-to-end cryo-EM complex structure determination with high accuracy and ultra-fast speed (via DEV)
Nature Machine Intelligence - Wang et al. present E3-CryoFold, a deep learning method for cryo-EM structure determination that enables end-to-end training and one-shot inference This method reduces...
Oct 12, 2025
The NO FAKES Act Has Changed – and It’s So Much Worse (via DEV)
A bill purporting to target the issue of misinformation and defamation caused by generative AI has mutated into something that could change the internet forever, harming speech and innovation from
Oct 12, 2025
AI agents will threaten humans to achieve their goals, Anthropic report finds (via DEV)
New research shows that as agentic AI becomes more autonomous, it can also become an insider threat, consistently choosing "harm over failure."
Oct 12, 2025
AI and Data Centers Could Cut More Climate-Change-Causing Emissions Than They Create (via DEV)
Power-hungry AI and associated data centers could make the grid cleaner, eventually cutting more climate-change-causing emissions than they produce
Oct 12, 2025
Foom & Doom 2: Technical alignment is hard (via DEV)
This is the second of a two-post series on foom (previous post) and doom (this post). This post argues that such future AI will be egregiously misaligned and scheming, not even ‘slightly nice’, absent some future conceptual breakthrough…
Oct 12, 2025
How Apple’s biggest potential acquisition ever could perplex AI rivals like Google (via DEV)
Apple is reportedly thinking of buying Perplexity. Could this be the tech giant's ticket to catch up?
Oct 12, 2025
LLMs factor in unrelated information when recommending medical treatments (via DEV)
An MIT study finds non-clinical information in patient messages, like typos, extra whitespace, or colorful language, can reduce the accuracy of a large language model deployed to make treatment recommendations. The LLMs were consistently less accurate for female patients, even when all gender markers were removed from the text.
Oct 12, 2025
A framework to evaluate machine learning crystal stability predictions (via DEV)
Nature Machine Intelligence - Riebesell et al. introduce Matbench Discovery, a framework to compare machine learning models used to identify stable crystals. Out of several architectures, they find...
Oct 12, 2025
A.I. Computing Power Is Splitting the World Into Haves and Have-Nots (via DEV)
As countries race to power artificial intelligence, a yawning gap is opening around the world.
Oct 12, 2025
Meta Invests $14 Billion In Scale AI To Strengthen Model Training (via DEV)
Whether this investment translates into improved AI products remains dependent on Meta’s ability to integrate Scale AI’s capabilities with its existing research and development efforts.
Oct 12, 2025
The Computer-Science Bubble Is Bursting (via DEV)
Artificial intelligence is ideally suited to replacing the very type of person who built it.
Oct 12, 2025
Black-box interpretability methodology blueprint: Probing runaway optimisation in LLMs (via DEV)
Working draft – feedback extremely welcome. Ideas in the main body are those I currently see as highest-leverage; numerous items under Appendix are m…
Oct 12, 2025
The AIpocalypse is here for web sites as search referrals plunge (via DEV)
Oct 12, 2025
Mathematicians Hunting Prime Numbers Discover Infinite New Pattern (via DEV)
Oct 12, 2025
Israel-Iran conflict unleashes wave of AI disinformation (via DEV)
Fake AI videos relating to the conflict have gained over 100 million views online, BBC Verify finds.
Oct 12, 2025
Agentic Misalignment: How LLMs Could Be Insider Threats (via DEV)
New research on simulated blackmail, industrial espionage, and other misaligned behaviors in LLMs: 16 leading models from multiple developers were stress-tested in hypothetical corporate environments to identify potentially risky agentic behaviors.
Oct 12, 2025
AbsenceBench: Language Models Can’t Tell What’s Missing (via DEV)
Oct 12, 2025
BYD is testing solid-state batteries in its Seal sedan with ~1200 miles of range (via DEV)
Oct 12, 2025
It’s Not Just Claude: Most Top AI Models Will Also Blackmail You to Survive (via DEV)
After Claude Opus 4 resorted to blackmail to avoid being shut down, Anthropic tested other models, including GPT-4.1, and found the same behavior (and sometimes worse).
Oct 12, 2025
Anthropic study: Leading AI models show up to 96% blackmail rate against executives (via DEV)
Anthropic research reveals AI models from OpenAI, Google, Meta and others chose blackmail, corporate espionage and lethal actions when facing shutdown or conflicting goals.
Oct 12, 2025
Phoenix.new – The Remote AI Runtime for Phoenix (via DEV)
Oct 12, 2025
Study: Meta’s Llama 3.1 can recall 42 percent of the first Harry Potter book (via DEV)
The research could have big implications for generative AI copyright lawsuits.