Yann LeCun
Profile
Chief AI Scientist at Meta and Professor at New York University. A foundational figure in deep learning and convolutional neural networks.
Research Philosophy
- World Models: Advocates for the development of architectures that can predict and model the physical world.
- Anti-LLM Paradigm: Maintains that “Language is not Intelligence”; argues that large language models (LLMs) lack the reasoning capabilities required for AGI.
- Self-Supervised Learning: Focuses on enabling models to learn directly from sensory input without massive human-labeled datasets.
Key Research & Architectures
- JEPA (Joint-Embedding Predictive Architecture): A non-generative approach to predictive modeling.
- VL-JEPA: A recent advancement in vision-language modeling, representing a strategic shift away from generative AI architectures toward a vision-centric approach to AGI.
- Meta FAIR Lab: Primary laboratory for research into non-generative, predictive reasoning architectures.
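The core idea behind JEPA-style architectures can be sketched in a few lines: instead of reconstructing raw inputs (as generative models do), a predictor maps the embedding of a visible context to the embedding of a masked target, and the loss is computed in representation space. The sketch below is a toy illustration with linear maps standing in for deep networks; all dimensions, weight names, and the single-sample setup are illustrative assumptions, not Meta's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: a flattened input patch -> a small embedding.
D_IN, D_EMB = 64, 16

W_ctx = rng.normal(size=(D_IN, D_EMB)) * 0.1  # context encoder (stand-in for a deep net)
W_tgt = W_ctx.copy()                          # target encoder (in practice an EMA copy)
W_pred = np.eye(D_EMB)                        # predictor (identity here for simplicity)

def jepa_loss(context_patch, target_patch):
    """Predict the target patch's embedding from the context patch's embedding.
    The loss lives in embedding space, not pixel space: no pixels are generated."""
    z_ctx = context_patch @ W_ctx   # embed the visible context
    z_tgt = target_patch @ W_tgt    # embed the masked target (no gradient in real JEPA)
    z_hat = z_ctx @ W_pred          # predict the target embedding from the context one
    return float(np.mean((z_hat - z_tgt) ** 2))

x = rng.normal(size=D_IN)
loss_same = jepa_loss(x, x)                      # identical patches: prediction matches
loss_diff = jepa_loss(x, rng.normal(size=D_IN))  # unrelated patch: nonzero loss
print(loss_same, loss_diff)
```

The non-generative property is visible in the loss: the model is never asked to reproduce the 64-dimensional input, only its 16-dimensional embedding, which is what lets the architecture ignore unpredictable pixel-level detail.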
Related Notes
Source Notes
- 2026-04-23: 🧠 VL-JEPA: Meta’s Shift Away from Generative AI. https://www.youtube.com/watch?v=Cis57hC3KcM (Channel: the AIGRID). Detailed breakdown of the transcript, with a comparison to other emerging non-LLM reasoning architectures.
- 2026-04-14: New paper for a vision approach to AGI, not LLM. https://www.youtube.com/watch?v=Cis57hC3KcM (Channel: the AIGRID). Detailed breakdown of the transcript, with a comparison to other emerging non-LLM reasoning architectures.