Hi all,
In my quest to write a maths library to call my own, I am studying fdlibm. Of course I am: seems like every libc that doesn't do the topic a gross injustice uses it, or something based on it.
Now, I don't want to just straight-up copy it. For one thing, the goal of sirius is my own edification, and copy-and-paste doesn't teach me anything. For another, vanilla fdlibm only contains the C89 maths library, which means double precision only, and adapting the code for the other three precisions is going to be hard. As is developing all the new maths functions that the C standards since then have dumped into math.h.
And in the course of doing this, I looked at the trigonometric functions. They work mostly by reducing the argument modulo π/2 symmetrically (so the reduced argument ends up in [-π/4, π/4]), then calculating a kernel function on the reduced argument, and possibly transforming that result into the final answer.
The kernel functions are very simple indeed: in the end, they mostly just evaluate the Taylor series of their respective function up to some degree, possibly perturbing the result a little according to the second input. But the argument reduction functions (for there are two) are incredibly complicated, and I'm wondering if all this effort isn't spurious rigor.
The goal of the __ieee754_rem_pio2 function in fdlibm is to first calculate n = trunc(x * 2/π + copysign(0.5, x)), i.e. the integer nearest to x * 2/π, then calculate y = x - n * π/2 to two words of precision. The main concern of the function seems to be the possibility of cancellation: x can be so close to n * π/2 that the leading bits get chopped off. But this got me thinking: surely x only has 53 mantissa bits, and therefore cannot match n * π/2 beyond that. So the difference should start there, and to get my desired two words of difference, I need to calculate this to at most three words of precision, right?
Which brings me, finally, to the title question: given that each floating-point number is a stand-in for all real numbers that are closer to it than to any other representable number, what does it mean to calculate the sine (a transcendental function) of a floating-point number? I'm thinking the result is fine so long as the result interval contains the exact sine of some value in the source interval. And that does mean that things get less precise as the number gets larger, but that is normal for FP. In particular, as soon as the unit in the last place becomes 8, isn't the return value entirely arbitrary? At that point, the ULP is larger than the period of the function it's emulating, so any value from the codomain is OK to return, right?
What does it mean to calculate the sine?
Carpe diem!
- Member
- Posts: 5721
- Joined: Mon Mar 25, 2013 7:01 pm
Re: What does it mean to calculate the sine?
Depends on what you consider to be matching. If you extend x to greater precision, the bits following the first 53 are zero, which can indeed match nπ/2. One paper claims up to 61 additional matching bits across all possible double values. (I'd like to know how to figure out the required precision for other floating-point types, but I still haven't found the source for the 61-bit claim...)
You're assuming that large arguments always have large rounding errors. Can you prove that your assumption is true for every program that will ever call your math library?
Re: What does it mean to calculate the sine?
Octocontrabass wrote: ↑Wed Mar 19, 2025 10:42 pm
Depends on what you consider to be matching. If you extend x to greater precision, the bits following the first 53 are zero, which can indeed match nπ/2.

Actually, I would go as far as to say that if x has 53 bits matching nπ/2, then returning 0 from the remainder operation is the right thing to do, because it means that the exact value of nπ/2 is within 1/2 ULP of the argument.
Octocontrabass wrote: ↑Wed Mar 19, 2025 10:42 pm
You're assuming that large arguments always have large rounding errors. Can you prove that your assumption is true for every program that will ever call your math library?

I don't know the rounding error on the argument, and if nothing further is stated, then 1/2 ULP is the least I can assume. But if an application needs high accuracy from the sine yet hands over large arguments, it is likely already doing something wrong and should probably use its own sine implementation.
Carpe diem!
Re: What does it mean to calculate the sine?
The exact value of the remainder is not within 1/2 ULP of 0.
Passing large arguments to the sine function might indicate a problem if the application is trigonometry, but the application might instead be procedural generation, where the arguments are chosen purely because the return values can be arranged in an aesthetically pleasing way. (Programs that do procedural generation usually don't have room to include their own implementations of functions that are provided by the standard library.)
Re: What does it mean to calculate the sine?
It may be tricky in the sense that while any particular value alone is OK, the caller of the function may rely heavily on some invariants still holding. E.g. if you return 0 for both sin(x) and cos(x) when x is large, then the sum of their squares isn't 1 anymore. I haven't yet figured out the best behavior here. Ultimately, it may be application-dependent.
Re: What does it mean to calculate the sine?
alexfru wrote: ↑Fri Mar 21, 2025 7:58 am
It may be tricky in the sense that while any particular value alone is OK, the caller of the function may rely heavily on some invariants still holding. E.g. if you return 0 for both sin(x) and cos(x) when x is large, then the sum of their squares isn't 1 anymore. I haven't yet figured out the best behavior here. Ultimately, it may be application-dependent.

No danger of that: I make __rem_pio2() return 0 for large inputs, so then __kernel_sin() returns 0 and __kernel_cos() returns 1, thus the invariant holds.
Carpe diem!
Re: What does it mean to calculate the sine?
nullplan wrote: ↑Tue Mar 25, 2025 1:27 pm
No danger of that: I make __rem_pio2() return 0 for large inputs, so then __kernel_sin() returns 0 and __kernel_cos() returns 1, thus the invariant holds.

Well, there's another dependency possible: sin(x) varies with x and doesn't become a constant beyond some point.