Language models like the ones behind ChatGPT have complex, sometimes surprising internal structures, and we don’t yet fully understand how they work.
This approach is an early step toward closing that gap, and part of a broader effort across OpenAI to make our systems more interpretable—developing methods that help us understand why a model produced a given output. In some cases that means examining the model’s step-by-step reasoning; in others it means trying to reverse-engineer the small circuits inside the network.
There’s still a long path to fully understanding the complex behaviors of our most capable models.