I guess all of you have run into the question of datatype sizes in C/C++. Since they differ across the architectures you may target, how do you deal with it?
For instance, I've been using "uint32_t", "uint8_t" and so on. I know it works on Linux with gcc on different CPUs, but it is not really "portable" (see <sys/types.h>). Do you know a better way? (In fact I'd like to avoid autoconf...)
INeo
Dealing with datatype size in C/C++
Re:Dealing with datatype size in C/C++
(Note to moderators: This is General Programming, isn't it?)
C99 defines many useful types as standard in <stdint.h>. That's reasonably "portable" (of course, that <stdint.h> has to be defined appropriately for every platform).
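For illustration, a minimal sketch of the C99 fixed-width types in action; the values are just placeholders, and the PRI* format macros come from the companion header <inttypes.h>:

[code]
/* Minimal example of the C99 fixed-width types from <stdint.h>.
 * The PRI* macros from <inttypes.h> give matching printf formats. */
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint8_t  flags = 0xFF;     /* exactly 8 bits on any conforming platform */
    uint32_t addr  = 0xB8000;  /* arbitrary example value */
    int64_t  delta = -42;

    printf("flags=%" PRIu8 " addr=0x%" PRIX32 " delta=%" PRId64 "\n",
           flags, addr, delta);
    return 0;
}
[/code]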
Every good solution is obvious once you've found it.
Re:Dealing with datatype size in C/C++
@Solar: perfect. That's exactly what I needed; in fact, the datatypes I use are the ones defined in that standard.
This topic should be moved to General Programming; I almost forgot that board exists :S
INeo
Re:Dealing with datatype size in C/C++
Up to this point I've kept a kernel/types.h which contains uintX and sintX for X = 8, 16, 32, 64, n (where n stands for native), plus some kernel types (inode_t, uid_t, tid_t, that sort of thing).
uintX maps to:
8 - char
16 - short int
32 - int
64 - long long
n - long
so as _long_ as those assumptions hold for the underlying types (which they do for my target platforms on gcc, at least), my datatypes are correct. Still, standard-compliant is better if there's no actual difference between them; I'll use some form of sed to switch over (I really have been using those types everywhere already).
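For what it's worth, here is a minimal sketch of what such a header could look like under exactly those assumptions; the typedef names are illustrative rather than anyone's actual kernel header, and the mapping is only known to hold for gcc on the targets mentioned above:

[code]
/* Sketch of a kernel/types.h built on the mapping above. The mapping is
 * NOT guaranteed by the C standard; the compile-time checks below make
 * the build fail early if a new platform breaks an assumption. */

typedef unsigned char       uint8;
typedef unsigned short int  uint16;
typedef unsigned int        uint32;
typedef unsigned long long  uint64;
typedef unsigned long       uintn;   /* "native" width */

typedef signed char         sint8;
typedef signed short int    sint16;
typedef signed int          sint32;
typedef signed long long    sint64;
typedef signed long         sintn;

/* Poor man's static assert: a negative array size is a compile error. */
typedef char assert_uint8 [(sizeof(uint8)  == 1) ? 1 : -1];
typedef char assert_uint16[(sizeof(uint16) == 2) ? 1 : -1];
typedef char assert_uint32[(sizeof(uint32) == 4) ? 1 : -1];
typedef char assert_uint64[(sizeof(uint64) == 8) ? 1 : -1];
[/code]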
Re:Dealing with datatype size in C/C++
a char is, more often than not, represented by 8 bits;
an int (in C) is the size of a machine word,
i.e. the data type on which operations like add and multiply perform fastest (faster than on a long int, for instance).
one-before-last rule of thermodynamics:
any variable can overflow (unlimited) times
Re:Dealing with datatype size in C/C++
vhjc wrote:
one-before-last rule of thermodynamics:
any variable can overflow (unlimited) times

... and in most languages, /NOTHING/ warns you that it happened.
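To make the "nothing warns you" point concrete, a small C sketch (the values are arbitrary): unsigned types silently wrap around modulo 2^N, and signed overflow is outright undefined behaviour in C.

[code]
/* Unsigned arithmetic wraps with no run-time warning;
 * signed overflow is undefined behaviour, which is arguably worse. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t u = 255;
    u = (uint8_t)(u + 1);              /* wraps around to 0, silently */
    printf("255 + 1 as uint8_t = %lu\n", (unsigned long)u);

    uint32_t big = 4000000000u;
    uint32_t sum = big + big;          /* 8000000000 reduced mod 2^32 */
    printf("4000000000 + 4000000000 as uint32_t = %lu\n", (unsigned long)sum);
    return 0;
}
[/code]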
Re:Dealing with datatype size in C/C++
vhjc wrote:
one-before-last rule of thermodynamics:
any variable can overflow (unlimited) times

Well, that's what the Bignum library class is for. Oh, wait, C++ doesn't have a standard Bignum library class. My bad.
(Actually, there are Good Reasons why C++ doesn't have a standard Bignum class. As I understand it, most of the standard libraries don't dynamically allocate any memory themselves, on the principle that the client programmer should have total control over the memory usage of the program; the major exception is the Strings library, IIRC. Any realistic implementation of Bignums has to be able to expand or contract at need, and thus would be new()ing and delete()ing memory with nearly every operation, which would be a gross violation of this principle. At least, that is my understanding of it; am I correct here, Solar?)
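Just to illustrate that point rather than argue with it, here is a toy C sketch of why arbitrary precision forces dynamic allocation; the names are made up, it is nowhere near a real Bignum, and it uses malloc/realloc instead of new/delete, but even a plain addition may need a bigger buffer for its result:

[code]
/* Toy sketch, NOT a real Bignum: the digit count grows with the value,
 * so the storage must be able to grow too. The result r must be
 * zero-initialised before first use and must not alias a or b. */
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint32_t *limbs;   /* base-2^32 "digits", least significant first */
    size_t    count;
} toy_bignum;

/* r = a + b; the result can be one limb longer than either operand,
 * so even a simple addition may have to (re)allocate. */
int toy_add(toy_bignum *r, const toy_bignum *a, const toy_bignum *b)
{
    size_t n = (a->count > b->count ? a->count : b->count) + 1;
    uint32_t *limbs = realloc(r->limbs, n * sizeof *limbs);
    if (limbs == NULL)
        return -1;
    r->limbs = limbs;

    uint64_t carry = 0;
    for (size_t i = 0; i < n; ++i) {
        uint64_t sum = carry;
        if (i < a->count) sum += a->limbs[i];
        if (i < b->count) sum += b->limbs[i];
        r->limbs[i] = (uint32_t)sum;
        carry = sum >> 32;
    }
    r->count = (n > 1 && r->limbs[n - 1] == 0) ? n - 1 : n;
    return 0;
}
[/code]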
Of course, in standard Scheme you get big integers, rational fractions, big reals and big complex numbers - the whole 'numeric tower' - for free... if you discount the fact that in most Scheme compilers and interpreters there are no fixed-size integers, so you take the performance hit for using bigints every time you do any integer math at all. Ouch. Better compilers will do some optimizations on this, and some let you specify fixed-size integers, but those aren't standard to the language. Rational fractions incur an even worse performance hit, and can take some getting used to as well; running [tt](/ 355 113)[/tt] (a classic approximation of pi, accurate to the 6th decimal place) and getting the exact rational [tt]355/113[/tt] back instead of [tt]3.1415929[/tt] can be a little disconcerting, especially if you were looking for the decimal expansion to begin with. Absolute accuracy is not always the most desirable trait in a calculation.