
int versus long

Posted: Mon Aug 27, 2007 12:07 pm
by jerryleecooper
In C, I always thought that an "int" was equal to the native word size of the processor: if the processor was running in 16-bit mode, the int was 16 bits long; if the processor was 32-bit, the int was 32 bits; and in 64-bit mode, the int is 64 bits.
The "long" type is always 32 bits, and the "short" type is always 16 bits.
Am I right or wrong? Some people seem to be confused about this.

Posted: Mon Aug 27, 2007 12:53 pm
by jnc100
ISO C99 does not specify what size the various integral types should be, but it does define a minimum size for each type:

char - 8 bits
short int - 16 bits
int - 16 bits
long int - 32 bits
long long int - 64 bits

GCC, on the other hand, leaves the determination of integer sizes up to the ABI. The SysV i386 ABI, a base document for the Itanium ABI, defines 'int' as 32 bits in length, the same size as long int. If you want a 64-bit integer, you have to use long long int. In addition, you should note that the default operand size in long mode is still 32 bits.
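
If you want to see what your particular compiler and ABI actually chose, a quick check like this will tell you (nothing authoritative, just sizeof):

Code: Select all

#include <iostream>

int main()
{
    // Sizes in bytes, as chosen by the compiler/ABI, not fixed by ISO C.
    std::cout << "short:     " << sizeof(short)     << '\n';
    std::cout << "int:       " << sizeof(int)       << '\n';
    std::cout << "long:      " << sizeof(long)      << '\n';
    std::cout << "long long: " << sizeof(long long) << '\n';
    std::cout << "void*:     " << sizeof(void*)     << '\n';
}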

Regards,
John.

Posted: Mon Aug 27, 2007 1:13 pm
by jerryleecooper
I meant the same size as a pointer, i.e. "int" being the same width as the pointer type.
Thank you for your reply, it was instructive. :)

Posted: Mon Aug 27, 2007 1:39 pm
by jnc100
In 32-bit protected mode, sizeof(unsigned int) == sizeof(unsigned long int) == sizeof(void *) (generally), presumably because the default operand size and the default address size are the same (32 bits).

In long mode, a virtual address (at least in the first implementation of long mode) is 48 bits long, usually sign-extended to 64 bits, whereas the default operand size is still 32 bits (a REX prefix is required to use a 64-bit operand). This is presumably the logic behind int still being 32 bits in the ABI whereas pointers are extended to 64 bits.

In general, for ultimate portability, you should never assume the size of a type. You can usually find the sizes defined in the documentation for the C compiler or ABI that you are using.
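
If a piece of code really does depend on a particular size, you can at least make the assumption explicit with a compile-time check, something along these lines (the macro name is just illustrative):

Code: Select all

// Compilation fails if the condition is false, because a negative
// array size is an error, so a wrong size assumption is caught early.
#define STATIC_ASSERT(cond, name) typedef char name[(cond) ? 1 : -1]

STATIC_ASSERT(sizeof(int) == 4, int_is_32_bits);
STATIC_ASSERT(sizeof(void*) == sizeof(long), pointer_fits_in_long);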

Regards,
John.

Posted: Mon Aug 27, 2007 4:06 pm
by Brynet-Inc
I myself would use the stdint.h header:

Code: Select all

int8_t
int16_t
int32_t
uint8_t
uint16_t
uint32_t
int64_t
uint64_t
These are all defined in the ISO/IEC 9899:1999 standard, and they are a nice way of managing types across platforms and operating systems.

Reference: http://www.opengroup.org/onlinepubs/009 ... int.h.html
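
For example, in a structure where the field widths matter, they read like this (a sketch; note that padding between members can still vary by ABI):

Code: Select all

#include <stdint.h>

// Each member has an exact width on every platform that provides
// these types; only inter-member padding remains ABI-dependent.
struct rtc_time {
    uint8_t  second;
    uint8_t  minute;
    uint8_t  hour;
    uint8_t  day;
    uint16_t year;
};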

Posted: Mon Aug 27, 2007 4:11 pm
by bluecode
stdint.h defines types with specific sizes, like uint8_t, int8_t, etc... And for pointers there is (u)intptr_t (also stdint.h).
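
For instance, if you ever need to store a pointer in an integer (page-table code tends to), uintptr_t is the type that is guaranteed to round-trip it, whatever sizeof(void*) is on the target (sketch):

Code: Select all

#include <stdint.h>

void example()
{
    int value = 42;
    // uintptr_t is defined to be wide enough to hold an object pointer.
    uintptr_t addr = (uintptr_t)&value;
    int* back = (int*)addr;  // converts back to the original pointer
    (void)back;
}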

Re: int versus long

Posted: Tue Aug 28, 2007 12:14 am
by os64dev
jerryleecooper wrote:In C, I always thought that an "int" was equal to the native word size of the processor: if the processor was running in 16-bit mode, the int was 16 bits long; if the processor was 32-bit, the int was 32 bits; and in 64-bit mode, the int is 64 bits.
The "long" type is always 32 bits, and the "short" type is always 16 bits.
Am I right or wrong? Some people seem to be confused about this.
Well, there is a difference between data and address ranges. In 64-bit long mode, int is still 32 bits whereas long is 64 bits: the default data width is 32 bits and the address range is 64 bits. To solve the problem I also use GCC, because of its __attribute__((mode)) extension, which lets you specify the exact size of an integer type.
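
For example (GCC-specific; QI, HI, SI and DI are the 1-, 2-, 4- and 8-byte machine modes):

Code: Select all

// Exact-width typedefs via GCC's mode attribute, independent of
// whatever width plain int has on the target.
typedef int          s8  __attribute__((mode(QI)));
typedef int          s16 __attribute__((mode(HI)));
typedef int          s32 __attribute__((mode(SI)));
typedef int          s64 __attribute__((mode(DI)));
typedef unsigned int u8  __attribute__((mode(QI)));
typedef unsigned int u16 __attribute__((mode(HI)));
typedef unsigned int u32 __attribute__((mode(SI)));
typedef unsigned int u64 __attribute__((mode(DI)));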

Posted: Tue Aug 28, 2007 11:28 am
by Zacariaz
Brynet-Inc wrote:I myself would use the stdint.h header:

Code: Select all

int8_t
int16_t
int32_t
uint8_t
uint16_t
uint32_t
int64_t
uint64_t
These are all defined in the ISO/IEC 9899:1999 standard, and they are a nice way of managing types across platforms and operating systems.

Reference: http://www.opengroup.org/onlinepubs/009 ... int.h.html
I have a question about this.
I have often been somewhat annoyed by the fact that a char always acts as a char, e.g. you have to write int(var) to make it act as a regular variable. So of course I was pleased that I could just use uint8_t instead, but no, it also acts as a char.

This is basically what I want to do:

Code: Select all

typedef uint8_t int(uint8_myvar);
but of course it doesn't work.
I know it's a very little and stupid thing, but is there anything one can do?
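
To show what I mean (assuming uint8_t is a typedef for unsigned char, which it usually is):

Code: Select all

#include <stdint.h>
#include <iostream>

int main()
{
    uint8_t n = 65;
    std::cout << n << '\n';       // prints 'A', because the char overload is picked
    std::cout << int(n) << '\n';  // prints 65, but I have to write int(n) every time
}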

Posted: Mon Sep 03, 2007 1:23 am
by Solar
Er... pardon?

Of course a char acts as a char and an int acts as an int... if you want a char-acting-as-int, you would use an int, wouldn't you?

I don't get what you are trying to achieve.

Posted: Mon Sep 03, 2007 3:46 am
by bluecode
He wants a type with 8 bits - (u)int8_t so to say - that acts as an integer and not as a char. I also don't know how to achieve this, so I defined (u)int8_t as char. But that drives you nuts if you don't cast when doing a cout.

Posted: Mon Sep 03, 2007 4:47 am
by AJ
I don't quite get this either. Are you just saying you want an 8-bit type you can do integer maths with? If so, you can already perform maths on a char type (char++, char--, char1 / char2, etc.).

And what do you mean, a char does not act as a regular var?
you have to write int(var) to make it act as a regular variable
??

If you want to expand a char to int size, can't you just do:

Code: Select all

int x = (int)mychar;
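Or do it inline at the point of use, e.g. when printing (untested sketch):

Code: Select all

#include <iostream>

int main()
{
    char mychar = 65;
    std::cout << static_cast<int>(mychar) << '\n';  // prints 65, not 'A'
    std::cout << +mychar << '\n';                   // unary + promotes to int, same result
}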
Or are you trying to achieve something else?

Cheers,
Adam

Posted: Mon Sep 03, 2007 5:21 am
by Solar
Ah... if the behaviour of cout is the problem... well, that could only be helped if the compiler supported a non-char 8-bit integer, so that <stdint.h> could define int8_t as that instead of char, and if cout actually made a difference between them. Until then... no luck.

Posted: Mon Sep 03, 2007 6:18 am
by bluecode
Exactly, Solar. :-) Does gcc offer something like this? How do you typedef (u)int8_t in PDCLib?

Posted: Mon Sep 03, 2007 7:08 am
by Solar
bluecode wrote:Does gcc offer something like this?
Not to my knowledge. I'm not even sure this is desirable, as it could lead to some surprising errors in code that mixes char with int8_t...
How do you typedef (u)int8_t in PDCLib?
The example configuration uses (unsigned) char. You are free to use any definition you see fit in a platform overlay.

Posted: Mon Sep 03, 2007 4:34 pm
by Zacariaz
Well, I don't think there's anything more to say about this, other than that I for one think it's weird that we have 16-, 32- and 64-bit integers but no 8-bit integer. Somewhere along the line, whoever developed C++ thought that the 8-bit int should act differently than the others. I really don't see the point.