  Sort numbers before adding them up!

    Posted on February 19th, 2023

    A few years ago a teammate of mine came up with this statement. At first we misunderstood each other: I thought of integer numbers while he meant floating point. I know how the processor represents floats and I’m aware of how and why floating point numbers are normalized and rounded. But to be honest, I had never thought the order in which numbers are added could matter. OK, let’s see whether it’s important or not. Before starting I have to say that this is all about floating point numbers; integers are not affected.

    First, let’s quickly go through how FP numbers are stored in the CPU and in software.

    The Intel and AMD processors implement IEEE-754 floating point arithmetic in two different ways:

    • The good old x87 FPU extension, which uses eight 80-bit wide registers organized as a stack.
    • The newer SSE SIMD extension, which uses eight (later sixteen) 128-bit wide registers.

    In software, there are three different FP number formats, differing in precision:

    • single: 32 bits wide, 24 bits for the fraction (1 bit implied), 8 bits for the exponent and a sign bit.
    • double: 64 bits wide, 53 bits for the fraction (1 bit implied), 11 bits for the exponent and a sign bit.
    • extended: 80 bits wide, 64 bits for the fraction, 15 bits for the exponent and a sign bit. Not supported on SSE.

    The x87 FPU uses the extended precision internally while the SSE instructions can operate on four 32-bit or two 64-bit numbers at once. New compilers tend to use the SSE extensions by default for floating point operations.
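
    For example, a single packed SSE instruction adds four single-precision numbers at once. Here is a minimal sketch using the SSE intrinsics (not from the original code, just an illustration):

    #include <xmmintrin.h>
    #include <cstdio>

    int main() {
        // _mm_set_ps takes the elements in reverse order (highest first)
        __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
        __m128 b = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);
        __m128 c = _mm_add_ps(a, b);   // one addps: four additions in parallel

        float out[4];
        _mm_storeu_ps(out, c);
        printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]); // 11 22 33 44
        return 0;
    }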

    No matter how many bits are used for the fractional part of a number, their count is limited. The number of significand (fraction) bits defines the precision: how many digits can be stored for a number. In the FPU, the numbers are kept in a so-called register stack where every number is represented by 80 bits. These bits can represent numbers from roughly +/-3.37*10^-4932 to +/-1.18*10^4932. This is a huge range, but not every number in this range is representable: because there is a fixed number of bits for the significand, there is always a fixed number of digits of accuracy. The FPU always uses 80-bit precision for calculations; there’s no other option. The loss of precision occurs when the result is written to memory.

    So theoretically it makes no sense to use lower precision variables in a program, but of course other aspects must be taken into account, such as memory footprint and speed of execution. For new software I recommend using the double format; the single format can be very inaccurate, as we will see later. SSE instructions work in single or double precision internally, resulting in lower computational precision compared to the FPU. Therefore the FPU is more suitable for scientific calculations while SSE is better at fast 3D math. (This isn’t quite true by default in Visual Studio, as we’ll see later.) It’s worth noting that GPUs use FP32 (single precision) or even FP16 (half precision) numbers to speed up computation and conserve space.
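
    If you want to check these limits on your own machine, the constants in <cfloat> report them; a quick sketch:

    #include <cfloat>
    #include <cstdio>

    int main() {
        // significand widths include the implied bit: 24 and 53 on IEEE-754 systems
        printf("float : %d significand bits, ~%d decimal digits\n", FLT_MANT_DIG, FLT_DIG);
        printf("double: %d significand bits, ~%d decimal digits\n", DBL_MANT_DIG, DBL_DIG);
        // on Visual Studio long double equals double, so this prints 53 as well
        printf("long double: %d significand bits\n", LDBL_MANT_DIG);
        return 0;
    }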

    So there are two sources of accuracy loss:

    • The CPU’s internal floating-point format.
    • The variable type.

    For example, let’s add these two numbers together: 8.0078125 + 9999992. The result should be 10000000.0078125, but that cannot be represented in single precision, so the result will be just 10000000. If I use double precision, the additional accuracy preserves the expected result. Obviously, double precision has its own limits too.
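
    You can reproduce this with a few lines of C; a quick sketch:

    #include <cstdio>

    int main() {
        float  fs = 8.0078125f + 9999992.0f; // the fraction is lost in single precision
        double ds = 8.0078125  + 9999992.0;  // double still holds the .0078125

        printf("%.7f\n", fs); // 10000000.0000000
        printf("%.7f\n", ds); // 10000000.0078125
        return 0;
    }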

    FP numbers are stored normalized (except when the number is very, very small and stored denormalized, but for the sake of simplicity I skip that case). Normalization means there is exactly one nonzero digit before the point, so in binary the significand falls in the range [1, 2). Here are some examples in single precision:

    1 = 3F800000 = 0 01111111 00000000000000000000000 = 1 * 2^0
    2 = 40000000 = 0 10000000 00000000000000000000000 = 1 * 2^1
    4 = 40800000 = 0 10000001 00000000000000000000000 = 1 * 2^2
    5 = 40A00000 = 0 10000001 01000000000000000000000 = 1.25 * 2^2
    6 = 40C00000 = 0 10000001 10000000000000000000000 = 1.5 * 2^2
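
    If you want to produce such a table yourself, a small helper that splits a single-precision number into its three fields might look like this (a sketch; memcpy is the well-defined way to get at the raw bits):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    static void dump(float f) {
        uint32_t u;
        memcpy(&u, &f, sizeof u); // reinterpret the 32 bits of the float
        printf("%g = %08X = sign %u, exponent %3u, fraction 0x%06X\n",
               f, u, u >> 31, (u >> 23) & 0xFFu, u & 0x7FFFFFu);
    }

    int main() {
        dump(1); dump(2); dump(4); dump(5); dump(6);
        return 0;
    }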

    So the numbers from the earlier example look like this in single precision:

    0 10000010 00000000010000000000000 = 1.00097656250000000000*2^03 = 8.0078125
    0 10010110 00110001001011001111000 = 1.19209194183349609375*2^23 = 9999992

    When two numbers are added together, their exponents must be the same, so one number must be scaled to match the other. This scaling eventually results in a loss of accuracy: in my example, the fractional part 0.0078125 was lost when the result was rounded back to the lower precision. The addition works as follows. I only show the mantissas with the implied bit (which actually exists in the internal 80-bit representation); all of the remaining bits up to the end are zero.

    100000000010000000000000
    100110001001011001111000
    ------------------------
    0000000000000000000010000000001 <- scaling
    1001100010010110011110000000000
    -------------------------------
    1001100010010110100000000000001
    100110001001011010000000 <- rounding to single

    The sum of the two numbers becomes:

    10000000 = 0 10010110 00110001001011010000000 = 1.1920928955078125 * 2^23

    But why do this scaling and rounding cause a problem when the FPU stores numbers with maximum precision (the 80-bit extended format with its 64-bit mantissa)? As I mentioned earlier, the loss of accuracy occurs when the result is stored in a variable. Here is the compiler-generated code from Delphi:

    mov [ebp-$4c], 0x41002000  ; number 8.0078125
    mov [ebp-$50], 0x4B189678  ; number 9999992
    fld dword ptr [ebp-$4c]    ; ST0 = 8.0078125
    fadd dword ptr [ebp-$50]   ; ST0 = 10000000.0078125
    fstp dword ptr [ebp-$4c]   ; stored in a variable, fraction was lost

    However, when using SSE the loss occurs immediately in the registers:

    movss xmm0, dword ptr [num1]
    addss xmm0, dword ptr [num2] ; fraction was lost here
    movss dword ptr [num1], xmm0

    Why is this interesting? We know that floating point numbers have precision limits; nothing new here. Things get interesting when adding many arbitrary numbers together. As long as these numbers are roughly of the same order of magnitude, nothing bad happens. However, if the numbers span a large range, the scaling can cause unexpected results. Imagine we add numbers from an exponential scale: there are many small numbers and many large numbers. What if I add these numbers in random order? If I add large numbers first, small numbers may be partially or completely lost. On the contrary, if I add small numbers first, they may accumulate to a larger amount and can contribute to the final sum.

    Here are two examples which add 20000 numbers together from an exponential scale. The first example adds numbers in ascending order, the second example in descending order.

    float n = 0;
    for (int i = 0; i < 20000; i++) n += exp(i / 1000.0);  // ascending order
    n = 0;
    for (int i = 19999; i >= 0; i--) n += exp(i / 1000.0); // descending order

    The following table shows the results for single and double precisions using SSE (compiled by Visual Studio):

                   Single precision   Double precision               Difference
    Ascending      484923080704       484922652243.017395019531250   ~428461
    Descending     484911316992       484922652243.018127441406250   ~11335251
    Difference     11763712           0.000732421875

    As you can see, if I add the numbers in ascending order, the final sum is much closer to the expected result. The error is much greater when using single precision, but even in the double format it may be unacceptable.

    Why can this be so important? For instance, when calculating the average of an array of numbers, the error can be reduced if the array is sorted in ascending order. It’s also interesting to note that you can achieve full accuracy with the FPU as long as you keep your calculations in the stack registers. This is not true in Visual Studio, because the compiler sets the FPU Control Word to 64-bit (double) rounding precision, which means the FPU rounds results to double after each arithmetic instruction; I think this is for compatibility with SSE. Of course, you can override the CW register to 80-bit precision to keep full accuracy.
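
    A minimal sketch of that averaging idea, reusing the exponential data from the example above (sort ascending, then accumulate):

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<float> v(20000);
        for (int i = 0; i < 20000; i++) v[i] = (float)std::exp(i / 1000.0);

        std::sort(v.begin(), v.end()); // ascending: small values accumulate first
        float sum = 0;
        for (float x : v) sum += x;

        printf("mean = %f\n", sum / v.size());
        return 0;
    }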

    Even though the SSE registers limit floating-point numbers to 32 or 64 bits, it looks like SSE uses higher internal precision during calculation. To prove this, I added two numbers in SSE registers that differ by an enormous order of magnitude: 31525197391593472 + 3.25 = 31525197391593476. The result is rounded up; if the CPU hadn’t calculated more bits than the result can hold, it would simply have truncated the number. I assume that, for simplicity, SSE uses 80-bit internal registers for calculations in the same way as the FPU. If I do the calculation with the FPU and store the number in a double, I get exactly the same result.

    111000000000000000000000000000000000000000000000000000000 = 1.75 * 2^54
    110100000000000000000000000000000000000000000000000000000 = 1.625 * 2^1
    ---------------------------------------------------------------------------
    111000000000000000000000000000000000000000000000000000000
    000000000000000000000000000000000000000000000000000001101 <- scaling
    ---------------------------------------------------------------------------
    111000000000000000000000000000000000000000000000000001101 <- result
    ---------------------------------------------------------------------------
    11100000000000000000000000000000000000000000000000001 <- rounded to a 53-bit mantissa (double)
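
    The same experiment in plain C is a one-liner; with default compiler settings this becomes a single SSE addsd (a quick sketch):

    #include <cstdio>

    int main() {
        double a = 31525197391593472.0;
        double r = a + 3.25;  // rounded up, not truncated
        printf("%.0f\n", r);  // 31525197391593476
        return 0;
    }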

    As I wrote above, by default Visual Studio produces the same accuracy with the FPU as with SSE. If you add these numbers together more than once, the rounding accumulates even more error. We would expect the magnitude of the error to be smaller for the FPU thanks to its 80-bit registers, but if you compile the following code, the two results are exactly the same. This is because Visual Studio sets the FPU rounding precision to 64 bits (53-bit mantissa).

    double op1 = 31525197391593472.0;
    double op2 = 3.25;
    double op3;

    __asm {
    movsd xmm0, qword ptr[op1]
    addsd xmm0, qword ptr[op2]
    addsd xmm0, qword ptr[op2]
    addsd xmm0, qword ptr[op2]
    addsd xmm0, qword ptr[op2]
    movsd qword ptr[op3], xmm0 ; 31525197391593488

    fld qword ptr op1
    fld qword ptr op2
    fadd st(1), st(0)
    fadd st(1), st(0)
    fadd st(1), st(0)
    faddp st(1), st(0)
    fstp qword ptr op3 ; 31525197391593488
    }
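
    Step by step: at this magnitude the distance between representable doubles is 4, so each addsd rounds its result to the nearest multiple of 4:

    31525197391593472 + 3.25 = 31525197391593475.25 -> rounds to 31525197391593476
    31525197391593476 + 3.25 = 31525197391593479.25 -> rounds to 31525197391593480
    31525197391593480 + 3.25 = 31525197391593483.25 -> rounds to 31525197391593484
    31525197391593484 + 3.25 = 31525197391593487.25 -> rounds to 31525197391593488

    The exact sum would be 31525197391593485, so the four roundings drifted the result by 3.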

    But if you change the FPU rounding precision back to its default 80-bit mode (64-bit mantissa), the result becomes more accurate. Note that Visual Studio doesn’t support 80-bit extended floating point numbers; the “long double” type is identical to “double” (easy to verify, as shown a bit further below). Moreover, Visual Studio emits SSE instructions for floating point code and you cannot force it to compile FPU code; you must embed it manually in assembly if you insist, but there is no 80-bit datatype in which you could store the result. On top of that, VS doesn’t support “tword ptr”, so you cannot assemble 80-bit FPU memory operands.

    short fpucw, fpucworig;
    __asm {
    fnclex                       ; clear pending FPU exceptions
    fstcw fpucworig              ; save the original control word
    mov ax, word ptr [fpucworig]
    or ax, 0x0300                ; rounding precision is set to 80 bits
    mov word ptr [fpucw], ax
    fldcw fpucw                  ; load the modified control word
    fld qword ptr op1
    fld qword ptr op2
    fadd st(1), st(0)
    fadd st(1), st(0)
    fadd st(1), st(0)
    faddp st(1), st(0)
    ;fstp tword ptr op3 ; doesn't work, would be 31525197391593485
    fstp qword ptr op3 ; rounding to double -> 31525197391593484
    fldcw fpucworig              ; restore the original control word
    }
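
    By the way, the “long double is identical to double” statement is easy to verify (on Visual Studio both sizes print 8; GCC on x86 typically gives a larger long double that holds the 80-bit format):

    #include <cstdio>

    int main() {
        printf("double: %zu bytes, long double: %zu bytes\n",
               sizeof(double), sizeof(long double));
        return 0;
    }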

    Furthermore, the SSE rounding control can also be changed, just as with the FPU (an intrinsics-based version follows below):

    unsigned long oldmxcsr, mxcsr;

    __asm {
    stmxcsr dword ptr[oldmxcsr]  ; save the original MXCSR
    mov eax, dword ptr[oldmxcsr]
    or eax, 0x6000               ; rounding control is set to truncate
    mov dword ptr[mxcsr], eax
    ldmxcsr dword ptr[mxcsr]
    movsd xmm0, qword ptr[op1]
    addsd xmm0, qword ptr[op2]   ; 31525197391593472 + 3.25 now truncates to ...472
    movsd qword ptr[op3], xmm0
    ldmxcsr dword ptr[oldmxcsr]  ; restore rounding to nearest
    }

    Available rounding controls:

    0x0000  Round to nearest (default)
    0x2000  Round toward negative infinity
    0x4000  Round toward positive infinity
    0x6000  Round toward zero (truncate)

    Obviously, you cannot alter the rounding precision for SSE the way you can for the FPU: it is fixed by the arithmetic instruction itself (single or double precision), since the rounding happens immediately in the register.
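
    If inline assembly feels too heavy, the same MXCSR manipulation can be written with the _mm_getcsr/_mm_setcsr intrinsics. A minimal sketch of the truncation experiment (the volatile variables keep the compiler from folding the addition at compile time under the default rounding mode):

    #include <xmmintrin.h>
    #include <cstdio>

    int main() {
        volatile double a = 31525197391593472.0, b = 3.25;

        unsigned int oldcsr = _mm_getcsr();
        _mm_setcsr(oldcsr | 0x6000);  // RC = 11b: round toward zero
        double truncated = a + b;     // 31525197391593472 - fraction thrown away
        _mm_setcsr(oldcsr);           // restore round to nearest
        double nearest = a + b;       // 31525197391593476 - rounded up

        printf("%.0f\n%.0f\n", truncated, nearest);
        return 0;
    }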

    If you want to achieve scientific-level accuracy, you have to be prepared for several pitfalls in floating-point calculations. It’s also possible that you’ll need your own, higher-precision floating-point library.
