Dealing with datatype size in C/C++

Programming, for all ages and all languages.
ineo

Dealing with datatype size in C/C++

Post by ineo »

I guess all of you have run into the issue of datatype sizes in C/C++. Since they differ across the architectures you may target, how do you deal with it?

For instance, I've been using "uint32_t", "uint8_t" and so on. I know they work on Linux with gcc on different CPUs, but that is not really "portable" (see <sys/types.h>). Do you know a better way? (In fact I'd like to avoid autoconf...)

INeo

Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany

Re:Dealing with datatype size in C/C++

Post by Solar »

(Note to moderators: This is General Programming, isn't it?)

C99 defines many useful fixed-width types as standard in <stdint.h>. That's reasonably "portable" (of course, <stdint.h> itself has to be provided for every platform).
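For example, a minimal sketch (the exact-width types come from <stdint.h>, and the matching printf format macros from <inttypes.h>, both standard C99):

[code]
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint8_t  flags = 0xFF;     /* exactly 8 bits on any conforming platform */
    uint32_t addr  = 0xB8000;  /* exactly 32 bits */
    int64_t  delta = -1;       /* exactly 64 bits, signed */

    printf("flags = %" PRIu8 ", addr = 0x%" PRIX32 ", delta = %" PRId64 "\n",
           flags, addr, delta);
    return 0;
}
[/code]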
Every good solution is obvious once you've found it.
ineo

Re:Dealing with datatype size in C/C++

Post by ineo »

@Solar: perfect. That's exactly what I needed; in fact, the datatypes I use are defined in that standard.

This topic should be moved to General Programming; I almost forgot it exists. :S

INeo
Candy
Member
Posts: 3882
Joined: Tue Oct 17, 2006 11:33 pm
Location: Eindhoven

Re:Dealing with datatype size in C/C++

Post by Candy »

Up to this moment I've kept a kernel/types.h which contains uintX and sintX for X = 8, 16, 32, 64, n (where n stands for native) and some kernel types (inode_t, uid_t, tid_t, those things).

uintX maps to:

8 - char
16 - short int
32 - int
64 - long long
n - long

so as _long_ as those assumptions hold (which they do on gcc for my target platforms, at least), my data types are correct. Still, standard-compliant is better if there's no actual difference between them. I'll use some form of sed to convert the code over (I really have used those types already).
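Spelled out as a header, that mapping looks roughly like this (a sketch, not my actual file; the typedefs assume an ILP32 gcc target where long is the native word size):

[code]
/* kernel/types.h (sketch) -- assumes ILP32 gcc:
   char = 8, short int = 16, int = 32, long long = 64, long = native */
typedef unsigned char      uint8;
typedef unsigned short int uint16;
typedef unsigned int       uint32;
typedef unsigned long long uint64;
typedef unsigned long      uintn;   /* native width */

typedef signed char        sint8;
typedef signed short int   sint16;
typedef signed int         sint32;
typedef signed long long   sint64;
typedef signed long        sintn;   /* native width */
[/code]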
vhjc,

Re:Dealing with datatype size in C/C++

Post by vhjc, »

a char is, more often than not, represented by 8 bits

an int (in C) is the size of a machine word:
the data type on which operations like add and multiply perform fastest (faster than on long int, for instance)
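Since those sizes are implementation-defined, it's worth checking what your particular compiler actually chose; a trivial test:

[code]
#include <stdio.h>

/* print the (implementation-defined) sizes of the basic types */
int main(void)
{
    printf("char:      %u\n", (unsigned)sizeof(char));
    printf("short int: %u\n", (unsigned)sizeof(short int));
    printf("int:       %u\n", (unsigned)sizeof(int));
    printf("long int:  %u\n", (unsigned)sizeof(long int));
    printf("long long: %u\n", (unsigned)sizeof(long long));
    return 0;
}
[/code]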

one-before-last rule of thermodynamics:
any variable can overflow (unlimited) times
Candy
Member
Posts: 3882
Joined: Tue Oct 17, 2006 11:33 pm
Location: Eindhoven

Re:Dealing with datatype size in C/C++

Post by Candy »

vhjc, wrote: one-before-last rule of thermodynamics:
any variable can overflow (unlimited) times
and in most languages, /NOTHING/ warns you that it happened.
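A tiny demonstration (assuming the usual 8-bit unsigned char): the wrap-around happens silently, with no diagnostic at runtime.

[code]
#include <stdio.h>

int main(void)
{
    unsigned char c = 255;
    c = c + 1;                    /* wraps to 0 -- no warning, no trap */
    printf("%u\n", (unsigned)c);  /* prints 0 */
    return 0;
}
[/code]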
Schol-R-LEA

Re:Dealing with datatype size in C/C++

Post by Schol-R-LEA »

vhjc, wrote: one-before-last rule of thermodynamics:
any variable can overflow (unlimited) times
Well, that's what the Bignum library class is for. Oh, wait, C++ doesn't have a standard Bignum library class. My bad. ;)

(Actually, there are Good Reasons why C++ doesn't have a standard Bignum class. As I understand it, most of the standard libraries don't dynamically allocate any memory themselves, on the principle that the client programmer should have total control over the memory usage of the program; the major exception is the strings library, IIRC. Any realistic implementation of Bignums has to be able to expand or contract as needed, and thus would be new()ing and delete()ing memory with nearly every operation, which would be a gross violation of this principle. At least, that is my understanding of it; am I correct here, Solar?)

Of course, in standard Scheme you get big integers, rational fractions, big reals and big complex numbers - the whole 'numeric tower' - for free... if you discount the fact that in most Scheme compilers and interpreters, there are no fixed-size integers, so you take the performance hit for using bigints every time you use any integer math at all. Ouch. Better compilers will do some optimizations on this, and some let you specify fixed-size integers, but those aren't standard to the language. Rational fractions incur an even worse performance hit, and can take some getting used to as well; running [tt](/ 355 113)[/tt] (a classic approximation of pi, accurate to the 6th decimal place) and getting [tt]3 [sup]16[/sup]/[sub]113[/sub][/tt] instead of [tt]3.1415929[/tt] can be a little disconcerting, especially if you were looking for the decimal expansion to begin with. Absolute accuracy is not always the most desirable trait in calculation.