What does it mean to calculate the sine?
Posted: Wed Mar 19, 2025 2:41 pm
Hi all,
in my quest to write a maths library to call my own, I am studying fdlibm. Of course I am. Seems like every libc that doesn't do the topic a gross injustice uses it or something based on it.
Now, I don't want to just straight up copy it. For one thing, the goal of sirius is my own edification, and copy-and-paste doesn't teach me anything. For two, vanilla fdlibm only contains the C89 maths library, so that means double precision only, and adapting the code for the other three precisions is going to be hard. As is developing all the new maths functions that the C standards since then have dumped into math.h.
And in the course of doing this, I looked at the trigonometric functions. Now, they work mostly by reducing the argument modulo π/2 symmetrically (so the argument ends up in [-π/4; π/4]), then calculating a kernel function on the reduced argument, possibly transforming that for the final answer.
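To make sure I'm describing that correctly, here is roughly the shape I mean (just a sketch, not fdlibm's actual code; reduce_pio2, kernel_sin and kernel_cos are my own placeholder names, and I'm ignoring the second "extra word" argument the real kernels take):

Code:

/* Sketch of the overall structure: reduce x to r in [-pi/4, pi/4]
   with x = n*pi/2 + r, then pick the kernel from the quadrant. */
int    reduce_pio2(double x, double *r);  /* placeholder: returns n, writes r */
double kernel_sin(double r);              /* placeholder kernel, |r| <= pi/4 */
double kernel_cos(double r);              /* placeholder kernel, |r| <= pi/4 */

double my_sin(double x)
{
    double r;
    int n = reduce_pio2(x, &r);

    switch (n & 3) {                  /* the quadrant decides kernel and sign */
    case 0:  return  kernel_sin(r);   /* sin(n*pi/2 + r) =  sin(r) */
    case 1:  return  kernel_cos(r);   /*                 =  cos(r) */
    case 2:  return -kernel_sin(r);   /*                 = -sin(r) */
    default: return -kernel_cos(r);   /*                 = -cos(r) */
    }
}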
The kernel functions are very simple indeed, because in the end they mostly just calculate the Taylor series of their respective function up to some degree, possibly perturbing the result a little according to the second input. But the argument reduction functions (for there are two) are incredibly complicated, and I'm wondering if all this effort isn't spurious rigor.
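A sine kernel in that style would look something like this (plain Taylor coefficients just to show the shape; the real kernel's coefficients are tuned, and it also takes the correction word I mentioned):

Code:

/* Sketch of a sine kernel for |r| <= pi/4:
   sin(r) ~= r - r^3/3! + r^5/5! - r^7/7! + r^9/9! */
double kernel_sin(double r)
{
    const double S1 = -1.0 /      6.0;   /* -1/3! */
    const double S2 =  1.0 /    120.0;   /*  1/5! */
    const double S3 = -1.0 /   5040.0;   /* -1/7! */
    const double S4 =  1.0 / 362880.0;   /*  1/9! */
    double z = r * r;

    /* Horner evaluation of the odd polynomial */
    return r + r * z * (S1 + z * (S2 + z * (S3 + z * S4)));
}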
The goal of the __ieee754_rem_pio2 function in fdlibm is to first compute n = x * 2/π rounded to the nearest integer (the code effectively adds copysign(0.5, x) and truncates), then calculate y = x - n * π/2 to two words of precision. The main concern of the function seems to be the possibility of cancellation: x can be so close to n * π/2 that the leading bits get chopped off. But this got me thinking: surely x only has 53 mantissa bits, and therefore cannot match n * π/2 beyond that. So the difference should start there, and to get my desired two words of difference I need to calculate this to at most three words of precision, right?
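The kind of short-cut I was imagining is the classic split-constant (Cody-Waite) reduction, something like this for moderate |x| (a sketch only; the constants are fdlibm's 2/π, pio2_1 and pio2_1t quoted from memory, so please double-check them):

Code:

#include <math.h>

static const double INV_PIO2 = 6.36619772367581382433e-01; /* 2/pi rounded to double */
static const double PIO2_HI  = 1.57079632673412561417e+00; /* high 33 bits of pi/2 */
static const double PIO2_LO  = 6.07710050650619224932e-11; /* pi/2 - PIO2_HI */

int reduce_pio2(double x, double *r)
{
    /* n = x*2/pi rounded to the nearest integer (add half, truncate) */
    int n = (int)(x * INV_PIO2 + copysign(0.5, x));
    double fn = (double)n;

    /* Subtract n*pi/2 in two pieces, high bits first: fn*PIO2_HI is exact
       for moderate n, so the big cancellation in the first subtraction
       loses nothing, and the tail is folded in afterwards. */
    *r = (x - fn * PIO2_HI) - fn * PIO2_LO;
    return n;
}

This gives one extra word's worth of π/2 rather than the two or three words I was arguing for above, but the splitting idea is the same.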
Because that finally brings me to the title question: Given that each floating-point number is a stand-in for all real numbers that are closer to it than to any other representable number, what does it mean to calculate the sine (a transcendental function) of a floating-point number? I'm thinking the result is fine so long as the result interval contains the exact sine of some value in the source interval. And that does mean things get less precise as the number gets larger, but that is normal for FP. In particular, as soon as the unit in the last place becomes 8, isn't the return value entirely arbitrary? At that point the ULP is larger than the period of the function it's emulating, so any value from the codomain is OK to return, right?
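To put a number on that: the spacing between adjacent doubles reaches 8, and thereby exceeds 2π, from 2^55 on. A throwaway check (ulp here is just the distance to the next double up):

Code:

#include <math.h>
#include <stdio.h>

/* distance from x to the next representable double above it */
static double ulp(double x)
{
    return nextafter(x, INFINITY) - x;
}

int main(void)
{
    printf("ulp(2^52) = %g\n", ulp(0x1p52));  /* 1 */
    printf("ulp(2^54) = %g\n", ulp(0x1p54));  /* 4, still below 2*pi */
    printf("ulp(2^55) = %g\n", ulp(0x1p55));  /* 8, above 2*pi */
    return 0;
}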