Unlock: Floating-Point Arithmetic

How computers represent real numbers, why the results are inexact, and why ML uses float32, float16, bfloat16, and int8. Covers IEEE 754, machine epsilon, overflow, underflow, and catastrophic cancellation.
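The core ideas above can be seen in a few lines. This is a minimal sketch using Python's built-in float64: it shows that decimal fractions like 0.1 are not exactly representable, what machine epsilon means, and how subtracting nearly equal numbers loses significant digits (catastrophic cancellation).

```python
import sys

# 0.1 and 0.2 are not exactly representable in binary floating point,
# so their rounded sum does not equal the rounded value of 0.3.
print(0.1 + 0.2 == 0.3)  # False

# Machine epsilon: the gap between 1.0 and the next representable float64.
eps = sys.float_info.epsilon
print(eps)  # ~2.22e-16

# Anything smaller than half that gap vanishes when added to 1.0.
print(1.0 + eps / 2 == 1.0)  # True

# Catastrophic cancellation: subtracting nearly equal numbers.
# 1e-15 cannot be stored exactly relative to 1.0, so the difference
# is a nearby multiple of eps rather than exactly 1e-15.
a = 1.0 + 1e-15
b = 1.0
print(a - b)  # close to, but not exactly, 1e-15
```

The same experiment with float32 or bfloat16 (e.g. via NumPy dtypes) shows much larger epsilons, which is why mixed-precision ML training has to manage rounding error carefully.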

This topic has no prerequisites. You can start studying it directly.