Conditional Memory

A mechanism in large language models (LLMs) that selectively activates memory modules based on task requirements, improving computational efficiency by distinguishing deep reasoning from simple recall.

Key Insights from DeepSeek’s Engram Paper

  • Core Problem: Current Transformer architectures waste computation by treating all tasks uniformly, failing to distinguish deep thought (complex reasoning) from simple recall (factual knowledge).
  • Solution: Conditional Memory via Scalable Lookup introduces a new axis of sparsity: a scalable memory lookup system.
  • Mechanism: Models conditionally access pre-stored knowledge only when required, cutting unnecessary computation on recall tasks (see the sketch after this list).
  • Impact: Enables efficient resource allocation without compromising performance on complex reasoning tasks.
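
The bullets above stay at a high level, so here is a minimal PyTorch sketch of one way conditional memory could work. Everything in it (the `ConditionalMemory` module, the sigmoid gate, the top-k key/value lookup) is an illustrative assumption, not the Engram paper's actual architecture: a learned gate scores each token's need for recall, and pre-stored knowledge is blended in only where that score is high.

```python
# Illustrative sketch of conditional memory lookup. Module and parameter
# names are assumptions for this note, not from the Engram paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalMemory(nn.Module):
    def __init__(self, d_model: int, n_slots: int, k: int = 4):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(n_slots, d_model) * 0.02)    # lookup keys
        self.values = nn.Parameter(torch.randn(n_slots, d_model) * 0.02)  # stored knowledge
        self.gate = nn.Linear(d_model, 1)  # scores how much each token needs recall
        self.k = k

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq, d_model) hidden states
        g = torch.sigmoid(self.gate(h))            # (batch, seq, 1): recall probability
        scores = h @ self.keys.t()                 # (batch, seq, n_slots)
        topk = scores.topk(self.k, dim=-1)         # sparse lookup: only k slots consulted
        weights = F.softmax(topk.values, dim=-1)   # (batch, seq, k)
        recalled = (weights.unsqueeze(-1)
                    * self.values[topk.indices]).sum(-2)  # (batch, seq, d_model)
        # Blend recalled knowledge in only where the gate says the token needs it.
        return h + g * recalled

if __name__ == "__main__":
    mem = ConditionalMemory(d_model=64, n_slots=1024)
    x = torch.randn(2, 8, 64)
    print(mem(x).shape)  # torch.Size([2, 8, 64])
```

At inference, the gate could be hard-thresholded so low-scoring tokens skip the lookup entirely, which is where the compute saving on recall-light tokens would come from.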

Source Notes

  • 2026-04-14: DeepSeek Engram paper, Prompt Engineering channel