Gemini Ultra

2026-04-14 How does 4-bit quantisation work

Source Notes

  • 2026-04-23: https://www.youtube.com/watch?v=-cRedoYETzQ (Julia Turc). The video discusses the evolution and challenges of training large language models (LLMs) at reduced precision, focusing on the shift towards 4-bit floating-point (FP4) training. Cost of Training LLMs: Tr (How does 4bit quantisation work)