Why is such a “weird” register size used? Is there any documentation on why it is not preferable to use 64 or 128 bits for those registers?
-
What is the practical programming problem you're trying to solve? – Raymond Chen Sep 27 '12 at 16:41
-
Single precision is 32 bits, double is 64, and extended is 80 bits. It has nothing to do with Intel's processor. – old_timer Sep 27 '12 at 17:17
-
http://en.wikipedia.org/wiki/Extended_precision#Need_for_the_80-bit_format – Hans Passant Sep 27 '12 at 17:27
-
I'm sure W. Kahan has a rationale for it somewhere. – ninjalj Sep 27 '12 at 17:57
-
The 80-bit format was and remains the perfect size for its intended purpose. It is large enough to accommodate a lossless conversion from 64-bit signed or unsigned integer types, its mantissa is small enough to fit in four 16-bit words or two 32-bit words, the exponent is small enough to fit in a 16-bit word, and it allows the mantissa and exponent to be easily extracted without shifts, using a single bit-masking operation for the exponent. It's important to be able to load and store temp variables of the extended-precision type, but it doesn't usually need to be held in data structures. – supercat Oct 19 '14 at 22:46
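A minimal C sketch of that layout (assuming `long double` is the x87 80-bit extended type on a little-endian x86 target, as with GCC/Clang; the C standard does not guarantee this):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Any 64-bit integer converts to extended precision without rounding,
       because the significand itself is 64 bits wide. */
    uint64_t big = 0xFFFFFFFFFFFFFFFFull;   /* would be rounded to fit a double */
    long double x = (long double)big;
    printf("round-trip exact: %d\n", (uint64_t)x == big);

    /* Field extraction: the 64-bit significand occupies the low 8 bytes and
       the sign + 15-bit exponent the following 16-bit word, so nothing has
       to be shifted across a word boundary. */
    unsigned char raw[10];
    memcpy(raw, &x, sizeof raw);            /* the value lives in the low 10 bytes */
    uint64_t mantissa;
    uint16_t se;
    memcpy(&mantissa, raw, 8);
    memcpy(&se, raw + 8, 2);
    printf("sign = %u, exponent = %u (bias 16383)\n",
           (unsigned)(se >> 15), (unsigned)(se & 0x7FFF));
    printf("mantissa = 0x%016llx\n", (unsigned long long)mantissa);
    return 0;
}
```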
-
Even if a processor could only do a full-precision load/store from a 16-byte data type [ignoring 48 bits of padding] the type would still be very useful *were it not for languages' failure to let programmers actually use it*. – supercat Oct 19 '14 at 22:48
-
In fact, Intel Itanium's floating-point registers are **82 bits** wide and still conform to IEEE-754 extended precision – phuclv Feb 18 '22 at 02:04
1 Answer
On the Wikipedia page on the IEEE 754-1985 standard there is a pretty good explanation regarding the 80-bit extended format:
"The standard also recommends extended format(s) to be used to perform internal computations at a higher precision than that required for the final result, to minimise round-off errors"
A double-precision floating-point number is represented in 64 bits. You would want a few more bits to get higher precision for intermediate results, but it would be overkill to use a 128-bit type when you only want 64 bits in the final result.
80 bits is a convenient size larger than 64: it is five 16-bit words, enough for a full 64-bit mantissa plus a 15-bit exponent and a sign bit.
Consider that the data bus at the time those standards were established was 8 or 16 bits wide, not 32 or 64 bits as it is today. If the standard were written today, 96 bits might be a more reasonable number, or perhaps the data would be transferred as 128 bits even if not all of those bits were used in the calculations.
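As a small illustration of the point about intermediate precision, here is a hedged C example (it assumes `long double` is the 80-bit x87 type, as with GCC/Clang on x86; results can vary with FLT_EVAL_METHOD and compiler flags):

```c
#include <stdio.h>

int main(void)
{
    double a = 1e16, b = 1.0;

    double      d = (a + b) - a;              /* intermediate rounded to 53 bits: the +1 is lost */
    long double e = ((long double)a + b) - a; /* intermediate kept with a 64-bit mantissa */

    printf("double intermediate:   %g\n", d); /* prints 0 */
    printf("extended intermediate: %Lg\n", e);/* prints 1 */
    return 0;
}
```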
-
Another advantage of the 80-bit format is that many machines without an FPU can process a 64-bit mantissa and 16-bit exponent/sign word more efficiently than they can process a 53-bit mantissa and a 12-bit exponent/sign field. It's too bad so many compiler vendors neglected to properly support the 80-bit type, since it would allow many operations to be completed using many fewer steps than are needed in its absence. – supercat Oct 19 '14 at 22:08
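A rough C sketch of why that matters for soft-float code (illustrative only; field layouts follow IEEE-754 binary64 and the x87 extended format):

```c
#include <stdint.h>

/* Unpacking an IEEE-754 double: mantissa and exponent share one 64-bit word,
   so extraction needs shifts, a mask, and an implicit-bit fix-up
   (normal numbers only here). */
static void unpack_double(uint64_t bits, int *sign, int *exp, uint64_t *mant)
{
    *sign = (int)(bits >> 63);
    *exp  = (int)((bits >> 52) & 0x7FF);
    *mant = (bits & 0x000FFFFFFFFFFFFFull) | 0x0010000000000000ull;
}

/* Unpacking the 80-bit extended format from its two naturally aligned pieces:
   one shift, one mask, and the integer bit is already explicit. */
static void unpack_extended(uint16_t se, uint64_t mant64,
                            int *sign, int *exp, uint64_t *mant)
{
    *sign = se >> 15;
    *exp  = se & 0x7FFF;
    *mant = mant64;
}
```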
-
80 bits with a 64-bit mantissa was specifically chosen such that logarithms can be taken without loss of precision (as the logarithm maps exponent+mantissa to just mantissa). – fuz Mar 15 '20 at 09:43
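A back-of-the-envelope C sketch of that bit-budget argument (illustrative; `frexp` splits a double into mantissa and exponent, and `log2l` is the C99 long double log2):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* x = m * 2^e with 0.5 <= m < 1, so log2(x) = e + log2(m).
       The integer part e can span ~11 bits and the fraction needs ~53 bits,
       which is why a 64-bit significand can hold the result to full double
       accuracy. */
    double x = 0x1.fffffffffffffp+1000;
    int e;
    double m = frexp(x, &e);

    long double lg = (long double)e + log2l((long double)m);
    printf("e = %d, log2(m) = %.20Lf\n", e, log2l((long double)m));
    printf("log2(x) = %.20Lf\n", lg);
    return 0;
}
```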
-
@fuz: njuffa's answer on [Why do higher-precision floating point formats have so many exponent bits?](https://stackoverflow.com/q/40775949) mentions that having a wide enough mantissa in the extended-precision format allows fast *exponentiation* of `double` with a naive algorithm, despite the error magnification of exponentiation. – Peter Cordes Feb 15 '22 at 18:44
-
Worth explicitly noting that the 80-bit width is not only applicable to floating-point operations. I discovered it by chance today while benchmarking different struct widths; at the 10-byte mark things remained fast, and at 11 bytes I saw a 50% drop in throughput. The struct can be made of whatever primitives and custom value types you like, but 10 bytes is the magic number. – Engineer Apr 30 '22 at 03:38