Artificial General Intelligence

Overview

Artificial General Intelligence (AGI) refers to hypothetical autonomous systems capable of performing any intellectual task a human can, characterized by cross-domain reasoning, transfer learning, and autonomous problem-solving.

Current Paradigms

  • Large Language Models (LLMs): The dominant current approach, using generative AI to predict sequences of tokens learned from massive text corpora.
  • World Models: Architectures designed to understand and simulate the causal and physical properties of reality.
  • Self-Supervised Learning: A training methodology where models learn representations from unlabeled data by predicting missing parts of the input.
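The self-supervised recipe above can be illustrated with a toy example: training pairs and their labels are manufactured from raw text by hiding part of the input, with no human annotation. The function below is a hypothetical sketch for illustration, not any particular library's API.

```python
import random

def make_masked_pairs(sentences, mask_token="[MASK]", seed=0):
    """Turn unlabeled sentences into (input, label) pairs by hiding one word.

    The 'label' is recovered from the data itself -- the essence of
    self-supervised learning: no human annotation is required.
    """
    rng = random.Random(seed)
    pairs = []
    for sentence in sentences:
        words = sentence.split()
        if len(words) < 2:
            continue
        i = rng.randrange(len(words))          # pick a position to hide
        label = words[i]                       # the model's prediction target
        masked = words[:i] + [mask_token] + words[i + 1:]
        pairs.append((" ".join(masked), label))
    return pairs

corpus = ["the cat sat on the mat", "models learn from raw data"]
for inp, label in make_masked_pairs(corpus):
    print(inp, "->", label)
```

A real model would then be trained to recover the hidden word from the masked input; the same idea scales to masked image patches or video frames.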

Emerging Architectures & Research

  • VL-JEPA (Meta FAIR):
    • Lead Researcher: Yann LeCun.
    • Shift in Approach: Marks a strategic shift away from purely linguistic generative AI toward vision-centric intelligence.
    • Core Philosophy: Operates on the thesis that “Language is not Intelligence”; argues that linguistic prediction alone is insufficient for true cognitive competence.
    • Mechanism: Focuses on predictive modeling within a latent space (Joint-Embedding Predictive Architecture) rather than generative token production.
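The latent-space mechanism can be sketched in a few lines of NumPy. This is a toy illustration under stated assumptions (fixed random projections standing in for learned encoder and predictor networks, mean squared error as the latent loss), not Meta's actual implementation: the point is only that the prediction target and the loss live in embedding space, never in raw pixel or token space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for learned networks: a real JEPA trains these
# jointly; fixed random projections suffice to show *where* the loss lives.
D_IN, D_LATENT = 64, 8
encoder = rng.normal(size=(D_IN, D_LATENT))      # shared context/target encoder
predictor = rng.normal(size=(D_LATENT, D_LATENT))

def jepa_loss(context_view, target_view):
    """Error is measured between *embeddings*, not raw inputs.

    A generative model would reconstruct target_view pixel by pixel
    (or token by token); JEPA only predicts its latent code.
    """
    s_context = context_view @ encoder           # embed the visible context
    s_target = target_view @ encoder             # embed the hidden target
    s_pred = s_context @ predictor               # predict the target embedding
    return float(np.mean((s_pred - s_target) ** 2))

x = rng.normal(size=D_IN)                        # one input split into two views
loss = jepa_loss(x, x + 0.01 * rng.normal(size=D_IN))
print(f"latent-space prediction error: {loss:.4f}")
```

Because the loss ignores unpredictable low-level detail (exact pixel noise), the encoder is pushed toward abstract, semantically relevant representations.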

Backlink: 2026 04 14 New paper for a vision approach to AGI not LLM