Open-source language models
Large language models characterized by publicly accessible weights, training methodologies, and architectures, typically distributed under permissive licenses such as Apache 2.0.
Key Characteristics
- Accessibility: Enables local deployment, fine-tuning, and inspection of model weights.
- Efficiency: Increasing focus on Edge AI and parameter-efficient fine-tuning to reduce computational overhead.
- Capabilities: Evolution from text-only to multimodal processing (text, image, etc.).
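The parameter-efficient tuning mentioned above can be sketched with a LoRA-style low-rank update: the base weight matrix stays frozen, and only two small matrices are trained. The function names, dimensions, and scaling below are illustrative assumptions, not any specific library's API.

```python
# Minimal sketch of parameter-efficient (LoRA-style) tuning for one
# frozen weight matrix W. Matrices are plain lists of lists; all names
# and dimensions here are hypothetical, chosen for illustration.

def matmul(A, B):
    """Multiply two matrices given as lists of lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_effective_weight(W, A, B, alpha=4, r=2):
    """Return W + (alpha / r) * (B @ A): the frozen base weight plus a
    low-rank update. Only A (r x d_in) and B (d_out x r) are trained,
    so the trainable parameter count is r * (d_in + d_out) instead of
    d_in * d_out."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

# Tiny example: a 2x2 frozen weight with a rank-1 update.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]             # r=1, d_in=2
B = [[0.5], [0.5]]           # d_out=2, r=1
W_eff = lora_effective_weight(W, A, B, alpha=1, r=1)
# W_eff == [[1.5, 0.5], [0.5, 1.5]]
```

The efficiency win is in the parameter count: for a d_in x d_out layer, the update trains r * (d_in + d_out) values rather than d_in * d_out, which is what makes on-device (Edge AI) fine-tuning feasible.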
Notable Developments
- Google Gemma 4 (2026-04-22):
  - A new family of models optimized for Edge AI via "edge versions" (E2B and E4B).
  - Features a 2.3B-parameter multimodal architecture designed to approximate the reasoning capabilities of 70B-parameter models.
  - Released under the Apache 2.0 license.
Backlinks
- 2026 04 22 Google Gemma 4 Efficient 2.3B Parameter Multimodal Edge AI