New research suggests language models aren’t merely memorizing information

New research explores how large language models (LLMs) develop and apply reasoning capabilities from their pretraining data, offering evidence that these systems learn to solve problems rather than simply retrieving memorized information.

Research overview: Scientists investigated two LLMs of different sizes (7B and 35B parameters) to understand how they utilize pretraining data when solving mathematical reasoning tasks versus answering factual questions.

  • The study analyzed 2.5 billion training tokens to identify which documents most influenced model outputs (a toy sketch of this kind of influence ranking follows this list)
  • Researchers compared the models’ approach to mathematical reasoning tasks against their handling of factual questions
  • The investigation focused on understanding whether LLMs truly reason or simply retrieve memorized information
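
To make the methodology concrete, here is a minimal, hypothetical sketch of ranking training documents by how strongly they influence a model output. The `influence_score` gradient-similarity proxy and the stand-in gradient vectors are illustrative assumptions, not the study's method; the paper relies on far more sophisticated influence-function machinery applied over the full pretraining set.

```python
# Toy sketch: rank training documents by how much they "influence" a model output.
# Assumption: a simple gradient dot-product stands in for real influence functions,
# and per-document / per-query gradients are already available as vectors.
import numpy as np

def influence_score(doc_grad: np.ndarray, query_grad: np.ndarray) -> float:
    """Illustrative influence proxy: dot product of per-example gradients."""
    return float(np.dot(doc_grad, query_grad))

def top_influential_docs(doc_grads: dict[str, np.ndarray],
                         query_grad: np.ndarray,
                         k: int = 5) -> list[tuple[str, float]]:
    """Return the k training documents with the highest influence scores."""
    scored = [(doc_id, influence_score(g, query_grad)) for doc_id, g in doc_grads.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

# Example with random stand-in gradients for three documents and one query.
rng = np.random.default_rng(0)
doc_grads = {f"doc_{i}": rng.normal(size=8) for i in range(3)}
query_grad = rng.normal(size=8)
print(top_influential_docs(doc_grads, query_grad, k=2))
```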

Key findings about factual knowledge: The models typically relied on distinct sets of documents when answering different factual questions, showing a direct link between specific training documents and individual responses.

  • For factual questions, answers were commonly found within the most influential training documents
  • The models’ approach to factual queries appeared more retrieval-based, drawing directly on specific training examples

Mathematical reasoning insights: The research revealed that LLMs employ a more sophisticated approach to solving mathematical problems than simple fact retrieval.

  • Documents showing similar problem-solving methods often influenced multiple reasoning questions within the same task category
  • The actual answers to reasoning questions rarely appeared in the most influential training documents (a simplified check of this kind is sketched after this list)
  • Intermediate reasoning steps were also typically absent from the highly influential training data
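
To illustrate how such a finding could be probed, the sketch below runs a simplified, hypothetical test: does the ground-truth answer string appear anywhere in a query's top-ranked influential documents? The example documents and answers are invented for illustration and are not drawn from the study.

```python
# Toy check: does the ground-truth answer appear in any of a query's
# top-ranked influential documents? (Hypothetical illustration only.)
def answer_in_top_docs(answer: str, top_docs: list[str]) -> bool:
    """Case-insensitive substring check over the top influential documents."""
    return any(answer.lower() in doc.lower() for doc in top_docs)

# Factual query: the answer ("Paris") is often present in the top documents.
factual_docs = ["Paris is the capital of France.", "France borders Spain."]
print(answer_in_top_docs("Paris", factual_docs))   # True

# Reasoning query: the numeric answer is typically absent, even though the
# top documents demonstrate the relevant procedure (e.g., computing a slope).
reasoning_docs = ["Slope is computed as (y2 - y1) / (x2 - x1)."]
print(answer_in_top_docs("0.75", reasoning_docs))  # False
```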

Evidence of procedural learning: The study demonstrated that LLMs develop generalized problem-solving strategies through exposure to procedural examples in their training data.

  • Influential documents often contained demonstrations of solution methods, including formulae and code examples (an illustrative example follows this list)
  • The models appeared to synthesize procedural knowledge from similar reasoning patterns across multiple training examples
  • This finding suggests LLMs can develop genuine problem-solving capabilities rather than relying solely on memorization
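
As a rough illustration of what a procedural training document might look like, the hypothetical snippet below demonstrates a general solution method (the slope formula) without containing the answer to any particular question a model might later be asked, mirroring the pattern the researchers describe.

```python
# Hypothetical example of a "procedural" training document: it demonstrates a
# general solution method (the slope formula) rather than stating any answer.
def slope(x1: float, y1: float, x2: float, y2: float) -> float:
    """Slope of the line through (x1, y1) and (x2, y2): (y2 - y1) / (x2 - x1)."""
    return (y2 - y1) / (x2 - x1)

# Applying the procedure to a new problem the document never mentions.
print(slope(1.0, 2.0, 5.0, 5.0))  # 0.75
```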

Future implications: This research challenges assumptions that LLMs merely retrieve memorized information and suggests these systems may be capable of more genuine reasoning than previously thought, though further research is needed to fully understand the extent and limitations of these capabilities.

Source paper: Procedural knowledge in pretraining drives reasoning in large language models
