Question
Using this calculator, I can see how the decimal ".8" and the expression ".7 + .1" have different representations:
.8      = 1.1001100110011001100110011001100110011001100110011010 × 2^-1
.7 + .1 = 1.1001100110011001100110011001100110011001100110011001 × 2^-1

(the two encodings differ only in the final bits)
But what mechanism causes "0.8" to be printed for the top value? E.g. alert(.8) in JavaScript. Why does it not print something like "0.800000011920929"?
Is this a feature of IEEE 754 or the programming language implementing it?
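The observation above is easy to reproduce directly. A minimal sketch, runnable in Node or any browser console:

```javascript
// 0.8 and 0.7 + 0.1 round to two different doubles, one ULP apart.
const a = 0.8;
const b = 0.7 + 0.1;

console.log(a === b);        // false: the bit patterns differ in the last bits
console.log(String(a));      // "0.8"
console.log(String(b));      // "0.7999999999999999"
console.log(b.toFixed(20));  // exposes that b is not exactly 0.8
```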
Explanation / Answer
The paper How to Print Floating-Point Numbers Accurately by Guy L. Steele Jr. and Jon L. White describes one approach to the problem of printing numbers.
Quoting from that paper:
What is the correct number of digits to produce if the user doesn't …
The short answer to your question: this is a property of the language's number-to-string conversion, not of IEEE 754 itself. IEEE 754 specifies the binary encodings and correctly rounded conversions, but ECMAScript additionally specifies that converting a Number to a string must produce the shortest decimal digit string that converts back to exactly the same double. "0.8" is the shortest string that round-trips to the first value, while the neighbouring double produced by .7 + .1 needs sixteen digits, "0.7999999999999999".
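One way to see the "shortest round-tripping decimal" idea from the paper is to find it by brute force. This is only a sketch (the helper name shortestDecimal is mine, not from the paper or any standard library): it tries increasing precisions until the decimal string parses back to the identical double.

```javascript
// Find the shortest decimal string (up to 17 significant digits) that
// converts back to exactly the same IEEE 754 double.
function shortestDecimal(x) {
  for (let digits = 1; digits <= 17; digits++) {
    const s = x.toPrecision(digits);
    if (Number(s) === x) return s;  // round-trips: this many digits is enough
  }
  return String(x);  // 17 significant digits always suffice for a double
}

console.log(shortestDecimal(0.8));       // "0.8"
console.log(shortestDecimal(0.7 + 0.1)); // "0.7999999999999999"
```

Real implementations (Steele and White's algorithm, and later ones such as Grisu and Ryū) compute this result directly rather than by trial, but the output contract is the same.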