From Mixture of Experts to Mixture of Agents with Super Fast Inference – Daniel Kim & Daria Soboleva

AI's next frontier evolves beyond MoE

In the race to build more efficient AI systems, researchers are confronting a pivotal challenge: balancing computational demands against performance. A recent talk by Daniel Kim and Daria Soboleva explores a groundbreaking approach that could fundamentally reshape how we deploy large language models. Their work on transitioning from Mixture of Experts (MoE) to Mixture of Agents represents one of the most promising architectural innovations for making AI systems simultaneously more powerful and more practical.

Key insights from their research:

  • From MoE to Mixture of Agents: The researchers have evolved beyond traditional MoE architectures (which route inputs to specialized neural network "experts") to create a system where specialized language models function as agents with distinct capabilities that can be dynamically composed.

  • Dramatic inference speed improvements: Their approach achieves up to 25x faster inference compared to conventional MoE models while maintaining comparable performance, addressing one of the most significant barriers to real-world AI deployment.

  • Dynamic resource allocation: The system intelligently determines which specialized agents to invoke for each specific task, efficiently managing computational resources by only activating what's needed rather than running the entire model.

  • Hierarchical reasoning capability: By structuring agents in multiple tiers—from small, task-specific models to more sophisticated reasoning models—the architecture enables complex problem-solving through collaboration between different AI components.
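
The routing and sparse-activation ideas in the bullets above can be sketched in a few lines. This is an illustrative toy, not the speakers' implementation: the agent names, tags, and aggregator are hypothetical stand-ins for real specialized models.

```python
# Illustrative Mixture-of-Agents sketch (hypothetical names, not the
# speakers' system). Each "agent" is a stub for a small specialized model;
# the router activates only agents whose capabilities match the task,
# and a second tier aggregates their drafts.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Agent:
    name: str
    capabilities: set          # task tags this agent can handle
    run: Callable[[str], str]  # stand-in for an actual model call

def route(task_tags: set, agents: List[Agent]) -> List[Agent]:
    """Select only the agents needed for this task (sparse activation)."""
    return [a for a in agents if a.capabilities & task_tags]

def mixture_of_agents(prompt: str, task_tags: set, agents: List[Agent],
                      aggregator: Callable[[List[str]], str]) -> str:
    selected = route(task_tags, agents)          # invoke what's needed
    drafts = [a.run(prompt) for a in selected]   # tier 1: specialists
    return aggregator(drafts)                    # tier 2: synthesis

# Toy usage with stub "models": only the math agent is activated.
agents = [
    Agent("coder", {"code"}, lambda p: f"[code draft for: {p}]"),
    Agent("mathematician", {"math"}, lambda p: f"[math draft for: {p}]"),
    Agent("summarizer", {"summarize"}, lambda p: f"[summary of: {p}]"),
]
answer = mixture_of_agents("integrate x^2", {"math"}, agents,
                           aggregator=lambda drafts: " | ".join(drafts))
```

The key contrast with classic MoE is visible in `route`: selection happens at the level of whole models (agents) rather than per-token expert layers inside one network, so unselected agents consume no compute at all.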

Why this architectural shift matters

The most profound insight from this research isn't just about technical performance—it's about reimagining how AI systems should be structured. Traditional approaches to scaling language models have focused primarily on making them bigger, which has yielded impressive capabilities but at unsustainable computational costs. The Mixture of Agents paradigm represents a fundamental pivot toward modular, composable AI that can achieve similar capabilities with dramatically lower resource requirements.

This matters tremendously in the broader AI landscape. We're witnessing a growing tension between what's theoretically possible with AI and what's practically deployable. Companies investing in AI capabilities often find themselves constrained not by what models can do, but by the economics of running them at scale. A 25x improvement in inference speed doesn't just mean faster responses—it potentially transforms AI from a luxury resource to a widely accessible utility.

Beyond the presentation: Implications and applications
