For specifics I am talking about x87 PC architecture and the C compiler.
I am writing my own interpreter, and the reasoning behind the `double` datatype confuses me, especially where efficiency is concerned. Could someone explain WHY C has decided on a 64-bit `double` and not the hardware-native 80-bit format? And why has the hardware settled on an 80-bit format, since that is not naturally aligned? What are the performance implications of each? I would like to use the 80-bit format as my default numeric type, but the choices of the compiler developers make me worried that this is not the best choice.
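For reference, here is a rough sketch of what I have been looking at regarding size and alignment. This assumes GCC or Clang on x86/x64, where `long double` maps to the x87 80-bit extended format (I understand MSVC instead treats `long double` as the same 64-bit format as `double`):

```c
/* Sketch (my own test, assuming GCC or Clang on x86/x64, where long double
 * is the x87 80-bit extended format; MSVC maps long double to 64 bits). */
#include <stdio.h>

int main(void)
{
    /* The 80-bit format only needs 10 bytes, but the compiler pads it to
     * 12 bytes (32-bit targets) or 16 bytes (64-bit targets) so that arrays
     * of long double stay aligned. */
    printf("sizeof(double)      = %zu\n", sizeof(double));
    printf("sizeof(long double) = %zu\n", sizeof(long double));
    printf("alignof double      = %zu\n", _Alignof(double));
    printf("alignof long double = %zu\n", _Alignof(long double));
    return 0;
}
```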
- `double` on x86 is only 2 bytes shorter, so why doesn't the compiler use the 10-byte `long double` by default?
- Can I get an example of the extra precision gained by the 80-bit `long double` vs `double`? (See the sketch after this list for the kind of comparison I mean.)
- Why does Microsoft disable `long double` by default?
- In terms of magnitude, how much worse / slower is `long double` on typical x86/x64 PC hardware?
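Regarding the second bullet, this is the kind of comparison I have in mind, again assuming a compiler where `long double` is the 80-bit x87 format (e.g. GCC on x86/x64; results would differ under MSVC):

```c
/* Sketch of the precision difference (assumes long double is the 80-bit
 * x87 format, as with GCC/Clang on x86). */
#include <stdio.h>
#include <float.h>

int main(void)
{
    /* 53 significand bits for IEEE-754 double, 64 for the x87 extended format. */
    printf("DBL_MANT_DIG  = %d\n", DBL_MANT_DIG);
    printf("LDBL_MANT_DIG = %d\n", LDBL_MANT_DIG);

    /* 1/3 rounded into each type: the long double holds roughly three more
     * correct decimal digits before the two printed values diverge. */
    double d       = 1.0 / 3.0;
    long double ld = 1.0L / 3.0L;
    printf("double      : %.21f\n", d);
    printf("long double : %.21Lf\n", ld);
    return 0;
}
```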