Hi,
Candy wrote:Brendan wrote:The most common problem is that CPUs work in binary and some decimal numbers can't be represented precisely in binary. For example, "0.1" in binary repeats forever (it becomes 0.0001100110011001100110011...), so the CPU will either truncate it or round it to make it fit. This means that "(1 / 10) * 10" might give the answer 1.00000000000001 because the intermediate value "1 / 10" can't be represented precisely.
WRT the binary representation of any fraction, is it true (proof request) that all elements in Q are representable in a finite number of bits, possibly with a repetition of the last N bits infinitely many times?
That depends - can you define "all elements of Q"? I think Q was the inventor in James Bond movies, and I assume he only had a single integer (possibly in octal) like 007 did...
Candy wrote:If so, you could make the arbitrary precision integer class also represent an arbitrary precision rational, without resorting to an explicit type of division encoded into a type.
You could have a significand, an (optional?) exponent, a "recurring" and a "recurring length", where a number like 0.1 (decimal) would be represented as significand = 0, exponent = -1, "recurring" = 0011b and "recurring length" = 4. In this case you'd be able to represent any rational number that fits in the significand and "recurring", and would lose precision for things that don't fit.
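In C, that representation might look something like this (just a rough sketch to show the fields - the names and widths are arbitrary):
Code: Select all
#include <stdint.h>

/* value = (significand + 0.recurring recurring ...) * 2^exponent,
   where "recurring" holds the repeating bit pattern and recurring_len
   says how many of its low bits actually repeat */
typedef struct {
    int64_t  significand;    /* integer part (signed)                      */
    int32_t  exponent;       /* power-of-two scale                         */
    uint32_t recurring;      /* repeating bit pattern, e.g. 0011b for 0.1  */
    uint8_t  recurring_len;  /* number of repeating bits, e.g. 4           */
} recurring_binary;

/* decimal 0.1 from the example above:
   0.00110011...b = 0011b / (2^4 - 1) = 1/5, and 1/5 * 2^-1 = 1/10 */
static const recurring_binary one_tenth = { 0, -1, 0x3 /* 0011b */, 4 };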
The problem is writing the code to do it. For example, if "recurring" is 32 bits wide then for division you'd need a 32-entry stack of remainders to detect the number of recurring bits, and I've got no idea how you'd go about adding, multiplying or dividing 2 numbers with recurring digits of different lengths.
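To show what I mean by the stack of remainders: if you do the binary long division one bit at a time and remember each remainder, then the first time a remainder repeats you know the quotient bits produced since then are the recurring part. A rough sketch, fixed at 32 bits:
Code: Select all
#include <stdint.h>
#include <stdio.h>

/* Binary long division of the fractional part of num/den, keeping the
   "stack" of remainders so we can spot when the quotient bits start
   repeating.  Returns the recurring length in bits and stores the
   pattern in *recurring, or returns 0 if the fraction terminates.
   max_bits must be <= 32 (the size of the remainder/bit tables). */
static int find_recurring(uint32_t num, uint32_t den,
                          uint32_t *recurring, int max_bits)
{
    uint32_t seen[32];   /* remainders in the order they occurred */
    uint32_t bits[32];   /* quotient bit produced at each step    */
    int steps = 0;

    uint32_t rem = num % den;
    while (rem != 0 && steps < max_bits) {
        /* seen this remainder before?  then the bits since then recur */
        for (int i = 0; i < steps; i++) {
            if (seen[i] == rem) {
                uint32_t pattern = 0;
                for (int j = i; j < steps; j++)
                    pattern = (pattern << 1) | bits[j];
                *recurring = pattern;
                return steps - i;
            }
        }
        seen[steps] = rem;
        rem <<= 1;                    /* bring down the next bit */
        bits[steps] = (rem >= den);
        if (rem >= den)
            rem -= den;
        steps++;
    }
    return 0;   /* terminated (or gave up) - no recurring part */
}

int main(void)
{
    uint32_t pattern;
    int len = find_recurring(1, 10, &pattern, 32);    /* decimal 0.1 */
    printf("recurring length = %d, pattern = 0x%X\n", len, (unsigned)pattern);
    return 0;
}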
Alternatively, storing a significand, divisor and exponent is theoretically easier, as the algorithms to do it are well documented (e.g. normal binary math algorithms and the Stein algorithm cover most of it). Something tells me that the width of the divisor determines how many recurring digits it supports, but I need to do more thinking before I call that idea more than a guess.
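For reference, the usual shift-and-subtract form of the Stein algorithm (binary GCD) looks like this on fixed-width integers - an arbitrary precision version would do the same thing on the bignum's bits:
Code: Select all
#include <stdint.h>

/* Stein's algorithm: GCD using only shifts, subtraction and tests,
   which maps nicely onto arbitrary precision bit strings. */
static uint64_t binary_gcd(uint64_t a, uint64_t b)
{
    if (a == 0) return b;
    if (b == 0) return a;

    /* factor out the powers of two common to both */
    int shift = 0;
    while (((a | b) & 1) == 0) {
        a >>= 1;
        b >>= 1;
        shift++;
    }

    /* strip remaining factors of two from a (they can't be common) */
    while ((a & 1) == 0)
        a >>= 1;

    do {
        while ((b & 1) == 0)   /* strip factors of two from b */
            b >>= 1;
        if (a > b) {           /* keep the smaller value in a */
            uint64_t t = a; a = b; b = t;
        }
        b -= a;                /* b - a is even, so the loop keeps stripping */
    } while (b != 0);

    return a << shift;         /* put the common powers of two back */
}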
To be honest I've been experimenting with arbitrary precision "S/D * 2^E" numbers - unfortunately it's surprisingly complex to do efficiently without unnecessary precision loss.
For example, for a simple operation like adding 2 numbers you end up finding the lowest common multiple of the divisors (which involves finding a greatest common divisor) and then doing 2 multiplications and 2 divisions just to get the divisors the same. After that you shift significands and increase/decrease exponents to get the exponents the same. Then you add the significands. Lastly you work out another greatest common divisor and divide the resulting significand and divisor by that GCD.
The entire process looks like this:
Code: Select all
5/8 * 2^2 + 7/12 * 2^1
= 5*LCM/8 / LCM * 2^2 + 7*LCM/12 / LCM * 2^1 ;Make divisors the same
= 5*24/8 / 24 * 2^2 + 7*24/12 / 24 * 2^1
= 5*3 / 24 * 2^2 + 7*2 / 24 * 2^1
= 15 / 24 * 2^2 + 14 / 24 * 2^1
= 30 / 24 * 2^1 + 14 / 24 * 2^1 ;Make exponents the same
= (30 + 14) / 24 * 2^1
= 44 / 24 * 2^1
= 44/GCD / 24/GCD * 2^1 ;Simplify
= (44/4) / (24/4) * 2^1
= 11/6 * 2^1
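To make those steps concrete, here's roughly what the addition looks like in C, with plain 64-bit integers standing in for the arbitrary precision ones (so overflow and precision loss are ignored, and all the names are made up):
Code: Select all
#include <stdint.h>
#include <stdio.h>

/* value = sig / div * 2^exp, with div > 0 */
typedef struct {
    int64_t sig;
    int64_t div;
    int32_t exp;
} sde_num;

static int64_t gcd64(int64_t a, int64_t b)
{
    if (a < 0) a = -a;
    while (b != 0) { int64_t t = a % b; a = b; b = t; }
    return a;
}

static sde_num sde_add(sde_num a, sde_num b)
{
    /* 1) make the divisors the same (multiply up to the LCM) */
    int64_t g   = gcd64(a.div, b.div);
    int64_t lcm = (a.div / g) * b.div;
    a.sig *= lcm / a.div;
    b.sig *= lcm / b.div;

    /* 2) make the exponents the same (shift significands, adjust exponents) */
    while (a.exp > b.exp) { a.sig <<= 1; a.exp--; }
    while (b.exp > a.exp) { b.sig <<= 1; b.exp--; }

    /* 3) add the significands */
    sde_num r = { a.sig + b.sig, lcm, a.exp };

    /* 4) simplify by dividing significand and divisor by their GCD */
    int64_t s = gcd64(r.sig, r.div);
    if (s > 1) { r.sig /= s; r.div /= s; }
    return r;
}

int main(void)
{
    sde_num a = { 5, 8, 2 };     /* 5/8  * 2^2 */
    sde_num b = { 7, 12, 1 };    /* 7/12 * 2^1 */
    sde_num r = sde_add(a, b);
    printf("%lld/%lld * 2^%d\n", (long long)r.sig, (long long)r.div, (int)r.exp);
    /* prints "11/6 * 2^1", matching the worked example above */
    return 0;
}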
Ironically, multiplication is probably faster than addition, as it needs fewer multiplications and divisions of the big numbers. It's easy to find the inverse of a number though (just swap the significand and divisor), which makes division easy to implement once multiplication works.
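Using the same made-up representation as the addition sketch above, multiplication (and division via the inverse) might look like this - note that with the exponent in there, inverting a number also means negating its exponent, not just swapping significand and divisor, and sign handling is glossed over:
Code: Select all
#include <stdint.h>

/* same made-up representation as the addition sketch: sig / div * 2^exp */
typedef struct {
    int64_t sig;
    int64_t div;
    int32_t exp;
} sde_num;

static int64_t gcd64(int64_t a, int64_t b)
{
    if (a < 0) a = -a;
    while (b != 0) { int64_t t = a % b; a = b; b = t; }
    return a;
}

/* multiply: significands multiply, divisors multiply, exponents add,
   then reduce by the GCD - no LCM or shifting needed */
static sde_num sde_mul(sde_num a, sde_num b)
{
    sde_num r = { a.sig * b.sig, a.div * b.div, a.exp + b.exp };
    int64_t g = gcd64(r.sig, r.div);
    if (g > 1) { r.sig /= g; r.div /= g; }
    return r;
}

/* divide: invert b (swap significand and divisor, negate the exponent),
   then multiply - sign handling is ignored here */
static sde_num sde_div(sde_num a, sde_num b)
{
    sde_num inv = { b.div, b.sig, -b.exp };
    return sde_mul(a, inv);
}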
I'm thinking AND, OR and XOR are worse than addition. I'm also wondering if a NOT operation could be done by negation (change the "significand is negative" flag, and test the negative flag during AND, OR and XOR operations). E.g. for "negative number AND negative number", you'd set the result's negative flag and do a NAND operation for each bit in the significand.
Cheers,
Brendan