I just discovered that dividing float infinity by itself results in a different NaN on Clang depending on whether an optimization option is used.
`std::numeric_limits<float>::infinity() / std::numeric_limits<float>::infinity()` results in the following bit patterns:

`01111111110000000000000000000000` when compiling with `-O1`, and

`11111111110000000000000000000000` when compiling without it.
The difference is only in the sign bit.
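For reference, a minimal way to reproduce and inspect the bits (this sketch assumes C++20 for `std::bit_cast`; on earlier standards, `std::memcpy` into a `std::uint32_t` gives the same result):

```cpp
#include <bit>
#include <bitset>
#include <cstdint>
#include <iostream>
#include <limits>

int main() {
    float inf = std::numeric_limits<float>::infinity();
    float nan = inf / inf;  // inf/inf is an IEEE-754 invalid operation, producing a NaN
    // Reinterpret the float's object representation as a 32-bit unsigned integer.
    std::uint32_t bits = std::bit_cast<std::uint32_t>(nan);
    std::cout << std::bitset<32>(bits) << '\n';
}
```

Compiling this with `clang++ -O1` versus plain `clang++` is enough to see the two patterns above.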
I assume the decision of how to represent NaNs is compiler-dependent, but I am surprised to see different representations within the same compiler. GCC and MSVC appear to be consistent (`11111111110000000000000000000000`) regardless of optimization.
My questions are:
- Should we expect a consistent result across compilers regardless of optimization?
- Should we expect a consistent result within a single compiler regardless of optimization?
> Should we expect a consistent result across compilers regardless of optimization?
>
> Should we expect a consistent result within a single compiler regardless of optimization?
The behaviors you report for Clang and the other compilers conform to both C++ and IEEE-754, even though the sign bit varies from circumstance to circumstance, as explained below.
A scan of the C++ standard (2020 draft N4849) does not show anything that specifies what the sign of a NaN result should be.
C++ does not require implementations to conform to IEEE-754. Nonetheless, supposing an implementation does, then IEEE-754 2019 (draft D2.47, March 2019) 6.3 “The sign bit” says:
> … For all other operations, this standard does not specify the sign bit of a NaN result, even when there is only one input NaN, or when the NaN is produced from an invalid operation…
“All other operations” includes division, as the operations discussed prior to this point are `copy`, `negate`, `abs`, `copySign`, `totalOrder`, and `isSignMinus`.
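A practical consequence: since `copySign` (along with `copy`, `negate`, and `abs`) is one of the operations whose sign-bit behavior *is* specified, you can force a deterministic sign on a NaN when bit-exact output matters. A minimal sketch (forcing a positive sign here is just an illustrative choice):

```cpp
#include <cmath>
#include <iostream>
#include <limits>

int main() {
    float inf = std::numeric_limits<float>::infinity();
    float nan = inf / inf;                   // the sign bit of this NaN is unspecified
    std::cout << std::signbit(nan) << '\n';  // may print 0 or 1, depending on compiler/flags
    // copysign's effect on the sign bit is specified by IEEE-754 6.3, so this
    // canonicalizes the NaN to a cleared sign bit on any conforming implementation.
    float canonical = std::copysign(nan, +1.0f);
    std::cout << std::signbit(canonical) << '\n';  // prints 0
}
```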