AI researchers test LLM capabilities using dinner plate-sized chips

The Cerebras WSE processor delivers a major step up in AI computing power and speed. This dinner plate-sized chip departs sharply from traditional processor design, packing hundreds of thousands of cores and an unusually large context capability that is changing how industries run large language models and complex data-processing tasks. Understanding these hardware advances matters as organizations seek competitive advantages through faster, more powerful AI implementations.

The big picture: The Cerebras Wafer Scale Engine (WSE) represents a breakthrough in AI computing hardware, with its massive size and processing power enabling previously impossible AI capabilities.

  • At 8.5 x 8.5 inches—roughly the size of a dinner plate—the WSE is dramatically larger than traditional microprocessors.
  • The processor contains hundreds of thousands of cores and offers extraordinary context capability for language models.

Key technical context: LLMs operate as neural networks that process information through tokenization and contextual understanding to generate responses.

  • Tokens are small units of text — words or word fragments — that get incorporated into an overall context for machine processing.
  • Context refers to how many previous tokens the model can look back at to inform its responses; this limit is known as the context window.
  • Inference is the process of running a trained model on a prompt to generate a response in real time.
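A minimal Python sketch shows how these three terms fit together. This is a toy illustration, not how production LLMs work: real systems use subword tokenizers and neural networks, while here tokenization is a whitespace split and inference is a stand-in function.

```python
# Toy illustration of tokens, context windows, and inference.
# Real LLMs use subword tokenizers (e.g., BPE) and neural networks;
# this sketch only shows how the three concepts relate.

def tokenize(text):
    """Naive whitespace tokenizer; real tokenizers split into subwords."""
    return text.split()

def apply_context_window(tokens, window_size):
    """A model can only 'see' the most recent window_size tokens."""
    return tokens[-window_size:]

def infer(context_tokens):
    """Stand-in for inference: a real model predicts the next token
    from its context; here we just report how much context was used."""
    return f"<response conditioned on {len(context_tokens)} tokens>"

prompt = "summarize the quarterly earnings report for the board"
tokens = tokenize(prompt)
context = apply_context_window(tokens, window_size=5)
print(infer(context))  # the model only "saw" the last 5 tokens
```

A larger context window lets the model condition on more of the preceding text, which is why the WSE's expansive context capability matters for long documents.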

Performance advantages: The WSE offers significant speed improvements over competing AI hardware systems, including Nvidia GPUs, Google TPUs, and Groq LPUs.

  • According to the article, the Cerebras system can process 2,500 tokens per second, delivering nearly instantaneous responses that are “too fast to read.”
  • The system’s extraordinary speed and context capabilities make it unique in the market, offering both expansive context windows and rapid inference.
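A quick back-of-the-envelope calculation illustrates why 2,500 tokens per second reads as “too fast to read.” The words-per-token ratio and human reading speed below are common rough figures, not numbers from the article.

```python
# Back-of-the-envelope check on the "too fast to read" claim.
# Assumptions (not from the article): ~0.75 English words per token,
# and a fast human reading speed of ~300 words per minute.

TOKENS_PER_SECOND = 2500   # throughput cited in the article
WORDS_PER_TOKEN = 0.75     # rough English average (assumption)
READING_WPM = 300          # fast reader (assumption)

words_per_second = TOKENS_PER_SECOND * WORDS_PER_TOKEN
reading_words_per_second = READING_WPM / 60
speedup = words_per_second / reading_words_per_second

print(f"Generation: {words_per_second:.0f} words/s")   # 1875 words/s
print(f"Reading:    {reading_words_per_second:.0f} words/s")  # 5 words/s
print(f"Output arrives ~{speedup:.0f}x faster than a fast reader")  # ~375x
```

Even with generous assumptions about reading speed, the output stream outpaces a human reader by two orders of magnitude.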

Data management innovations: The article outlines three approaches for handling extremely large data files that would otherwise overwhelm LLMs.

  • These approaches—Log2, square root, and double square root—involve sampling chunks of data in a “funnel” design to produce cohesive results without overloading the model.
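The article does not publish code for these approaches, but the idea can be sketched: split an oversized input into chunks, then keep only a number of chunks that grows as log2, the square root, or the double square root of the chunk count, so the sample fits within the model's context window. The function below and its even-spacing strategy are illustrative assumptions, not the article's implementation.

```python
import math

# Hedged sketch of the "funnel" sampling idea: keep a sub-linearly
# growing sample of chunks so huge inputs shrink to a manageable
# context. Strategy names map to the article's three approaches;
# the even-spacing choice is an assumption.

def funnel_sample(chunks, strategy="sqrt"):
    """Return an evenly spaced sample whose size is log2(n),
    sqrt(n), or sqrt(sqrt(n)) for n input chunks."""
    n = len(chunks)
    if n == 0:
        return []
    if strategy == "log2":
        k = max(1, int(math.log2(n)))
    elif strategy == "sqrt":
        k = max(1, int(math.sqrt(n)))
    elif strategy == "double_sqrt":
        k = max(1, int(math.sqrt(math.sqrt(n))))
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    step = n / k
    return [chunks[int(i * step)] for i in range(k)]

chunks = [f"chunk-{i}" for i in range(10_000)]
print(len(funnel_sample(chunks, "log2")))         # 13 chunks
print(len(funnel_sample(chunks, "sqrt")))         # 100 chunks
print(len(funnel_sample(chunks, "double_sqrt")))  # 10 chunks
```

The slower-growing the sample size, the more aggressively the funnel narrows: 10,000 chunks collapse to 100, 13, or 10 depending on the strategy chosen.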

Real-world applications: Several major organizations across different sectors are already implementing the Cerebras WSE technology.

  • Adopters include G42 (an AI and cloud company in the UAE), the Mayo Clinic, various pharmaceutical companies, and the Lawrence Livermore National Laboratory.
  • The technology is particularly valuable in legal, government, and trading environments where processing speed is paramount.
