
Apr 24, 2026

  • concept
  • floating-point
  • llm-training
  • reduced-precision
  • fp4
  • quantization

Floating Point Numbers

Source Notes

  • 2026-04-23: https://www.youtube.com/watch?v=-cRedoYETzQ (Julia Turc). The video discusses the evolution and challenges of training large language models (LLMs) at reduced precision, focusing on the shift toward 4-bit floating-point (FP4) training. Cost of Training LLMs: Tr (How does 4bit quantisation work)
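
The note only points at the video, but FP4 itself is concrete: the E2M1 layout (1 sign bit, 2 exponent bits, 1 mantissa bit) represents just sixteen values, so quantisation amounts to scaling a tensor and snapping each element onto that tiny grid. Below is a minimal Python sketch of an absmax FP4 quantiser. The grid is the standard E2M1 value set; the function names and the per-tensor rounding scheme are illustrative assumptions, not the method from the video.

```python
import numpy as np

# The eight non-negative magnitudes representable in FP4 (E2M1):
# 1 sign bit, 2 exponent bits, 1 mantissa bit -> 16 values in total.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_fp4(x: np.ndarray):
    """Absmax-quantise a 1-D array: scale so the largest magnitude
    lands on 6.0 (FP4's maximum), then round each element to the
    nearest representable FP4 value. (Illustrative, not the video's recipe.)"""
    scale = np.abs(x).max() / 6.0
    scaled = x / scale
    # Nearest grid point for each magnitude; the sign is handled separately.
    idx = np.abs(np.abs(scaled)[:, None] - FP4_GRID[None, :]).argmin(axis=1)
    return np.sign(scaled) * FP4_GRID[idx], scale

def dequantize_fp4(q: np.ndarray, scale: float) -> np.ndarray:
    return q * scale

w = np.array([0.03, -0.11, 0.27, -0.48, 0.90])
q, scale = quantize_fp4(w)
print(q)                         # [ 0.  -0.5  2.  -3.   6. ]
print(dequantize_fp4(q, scale))  # [ 0.    -0.075  0.3   -0.45   0.9 ]
```

Note the rounding error on the small entries: with one scale for the whole tensor, values near zero collapse onto the same grid point. Real FP4 training recipes typically keep a scale per small block of elements rather than per tensor, which bounds that error locally; the per-tensor scale here is only for brevity.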


