The topic is quite self-explanatory: I want to know how this is done.
I'm not a C programmer, so keep your atoi, atof, and printf answers away from this thread! ;D I want to know 'how', not 'which'.
With numbers I mean base-2 numbers; binary numbers.
Let's see. With normal decimal numbers it seems quite simple:
123 -> 1*100 + 2*10 + 3
simple for loop.
Binary->string also seems quite simple:
n -> zyx
n 10 /mod ->x 10 /mod ->y 10 /mod ->z
Also simple for loop.
For floating point numbers it seems hard. >:(
sign * 2^(exponent-127) * mantissa which goes: 1+1/2+1/4+1/8 etc.
How does one turn this into the xx.yy[Ez] form, and from that form back into IEEE floating point?
I'm going to do this for retroforth, but I appreciate any kind of answer that addresses the question.
Turning floating points and numbers to string representation
Re:Turning floating points and numbers to string representat
If you know how to use logarithms,
begin by converting A.a*2^b into C.c*10^d, and then into Cc*10^e;
otherwise, just copy some sources from the internet and translate them to your language.
Re:Turning floating points and numbers to string representat
OK, I guess I'll copy some from the net then...
But where could I find one? I don't know any good keywords to google.
Re:Turning floating points and numbers to string representat
Search for ftoa and atof sources. See also the source code of assemblers, compilers, and libraries. I have these routines in the source of my OS, but they are written in assembly and you will find them too complex if you aren't familiar with the language.
Note that there is more than one algorithm.
Here are lots of source code:
http://www.programmersheaven.com/default.htm
The assembly section contains some assembly libraries.
- Kevin McGuire
Re:Turning floating points and numbers to string representat
I think you _could_ be making it much more complicated than it actually is. The reasoning behind it may be complicated, but the application of it is not: it is rather simple.
I got interested since I have never actually tried. So I googled and found:
http://www.keil.com/support/man/docs/c5 ... tingpt.htm
Then I noticed the basic concept for a 32-bit scalar, or FLOAT in C, using the IEEE-754 standard: the lower 23 bits (bits 0-22) are the mantissa, which has a complicated meaning of course but is rather simple in application. The next 8 bits are the exponent. As far as I can tell, the mantissa (with an implicit leading 1) is multiplied by 2 raised to the power of the exponent... but the rest of the explanation, better explained than by me, can be found in the link above.
Here is the code I _just_ wrote, and actually enjoyed doing so. I now know how to do this.
Code: Select all
#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 125.26f;
    // Copy the raw bits of the float variable (memcpy avoids
    // the strict-aliasing trouble of casting the pointer).
    unsigned int u;
    memcpy(&u, &f, sizeof u);
    // The mantissa is the least significant 23 bits; the leading 1 is implicit.
    unsigned int mant = u & 0x007FFFFF;
    // The exponent is the 8 bits from bit 23 to bit 30, biased by 127.
    int exponent = (int)((u & 0x7F800000) >> 23) - 127;
    // The sign is stored in bit 31.
    unsigned int sign = u >> 31;
    unsigned int wholenumber, fractional;
    // mant is to be read as "1.mant".
    printf("[%u,%d,%u]\n", mant, exponent, sign);
    // Move the 23 mantissa bits snug up to bit 30, set bit 31 to
    // represent the implicit 1, then shift right so that only the
    // whole-number bits remain (valid while 0 <= exponent <= 22).
    wholenumber = ((mant << 8) | 0x80000000) >> (8 + 23 - exponent);
    // Any mantissa bits not used in wholenumber are the (binary) fraction.
    fractional = mant << (9 + exponent) >> (9 + exponent);
    // Display the parts.
    printf("sign:%u whole: %u fractional: %u\n", sign, wholenumber, fractional);
    return 0;
}
But where could I find one? I don't know any good keywords to google.
What in the world do you mean, "good keywords"? Is google supposed to be a lottery where you get lucky? It's more like you use some common sense.
GOOGLE: floating point number storage format... ::)
Re:Turning floating points and numbers to string representat
I'm getting it now. ;D
Turning the fraction into a string is also quite straightforward. First you choose the biggest power of ten that fits in the number cell, i.e. with a 32-bit cell you can use 1E9, 1000000000.
Then, for each set bit in the fraction, you take that number divided by the bit's place value and sum the results. Once fractions start appearing in that division the result gets imprecise, but it seems precise enough, at least for representing the number. I guess the same can be done in reverse order to get a floating-point fraction from a string. I'm not done with the base-ten exponent yet, but maybe this is enough to reach the target...
Edit: yes, google is close to lottery.
Re:Turning floating points and numbers to string representat
IEEE 754 (later generalized by IEEE 854).
The format is (+/-) <base-2 number of mantissa digits, starting with a 1 that is usually not present in the encoding> * 2^(exponent). If you want that in base 10, you'll have to convert it. Note that there are a few special cases, since you can't represent certain values in this format. The main special cases are NaN (the invalid outcome of, say, a non-trapping divide by 0), Infinity (positive and negative) and Zero (positive and negative). There is also a different type of number in the standard: Denormal. Denormal implies that the implicit 1 is not there, but should be a zero instead, allowing you to represent even slightly smaller numbers at the expense of a lot of precision. Most people don't care.