int versus long
- jerryleecooper
- Member
- Posts: 233
- Joined: Mon Aug 06, 2007 6:32 pm
- Location: Canada
int versus long
In C, I always thought that an "int" was equal to the native word size of the processor. If the processor was running in 16-bit mode, the int was 16 bits long; if the processor was 32-bit, the int was 32 bits in length; and in 64-bit mode, the int is 64 bits.
The "long" type is always 32 bits, and the "short" type is always 16 bits.
Am I right, or wrong? Some people seem to be confused about this.
The "long" type is always 32 bits, and the "short" type is always 16 bits.
Am I right, or wrong? Because some people seems to confuse that?
ISO C99 does not specify what size the various integral types should be, but it does define a minimum size for each type:
char - 8 bits
short int - 16 bits
int - 16 bits
long int - 32 bits
long long int - 64 bits
GCC, on the other hand, leaves the determination of integer sizes up to the ABI. The SysV i386 ABI, a base document for the Itanium ABI, defines 'int' as being 32 bits in length, the same size as 'long int'. If you want a 64-bit integer, you have to use long long int. In addition, you should note that the default operand size in long mode is still 32 bits.
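For illustration, here is a minimal sketch (not from any ABI document) that simply prints the sizes your particular compiler and ABI ended up with:
Code: Select all
// Prints the sizes the current compiler/ABI chose for the standard
// integer types; the output differs between, e.g., i386 and x86-64 targets.
#include <iostream>

int main()
{
    std::cout << "char:          " << sizeof(char)          << '\n';
    std::cout << "short int:     " << sizeof(short int)     << '\n';
    std::cout << "int:           " << sizeof(int)           << '\n';
    std::cout << "long int:      " << sizeof(long int)      << '\n';
    std::cout << "long long int: " << sizeof(long long int) << '\n';
    std::cout << "void *:        " << sizeof(void *)        << '\n';
    return 0;
}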
Regards,
John.
- jerryleecooper
- Member
- Posts: 233
- Joined: Mon Aug 06, 2007 6:32 pm
- Location: Canada
In 32-bit protected mode, sizeof(unsigned int) == sizeof(unsigned long int) == sizeof(void *) (generally), presumably because the default operand size and default addressing sizes are the same (32-bit).
In long mode, a virtual address (at least in the first implementation of long mode) is 48 bits long, usually sign-extended to 64 bits, whereas the default operand size is still 32 bits (a REX prefix is required to use a 64-bit operand). This is presumably the logic behind int still being 32 bits in the ABI whereas pointers are extended to 64 bits.
In general, for ultimate portability, you should never assume the size of a type. You can usually find the sizes defined in the documentation for the C compiler or ABI that you are using.
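If some piece of code genuinely depends on a particular width, one option (a sketch, assuming a compiler with C++11 static_assert; the widths asserted here are only examples) is to state the assumption explicitly, so the build fails on an ABI where it does not hold:
Code: Select all
// Compile-time checks: the build stops here if the ABI breaks our assumptions.
#include <climits>

static_assert(CHAR_BIT == 8,        "this code assumes 8-bit bytes");
static_assert(sizeof(int) == 4,     "this code assumes a 32-bit int");
static_assert(sizeof(void *) == 8,  "this code assumes 64-bit pointers");

int main() { return 0; }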
Regards,
John.
- Brynet-Inc
- Member
- Posts: 2426
- Joined: Tue Oct 17, 2006 9:29 pm
- Libera.chat IRC: brynet
- Location: Canada
- Contact:
I myself would use the stdint.h header:
These are all defined in the ISO/IEC 9899:1999 standard, a nice way of managing types across platforms and operating systems.
Code: Select all
int8_t
int16_t
int32_t
uint8_t
uint16_t
uint32_t
int64_t
uint64_t
Reference: http://www.opengroup.org/onlinepubs/009 ... int.h.html
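For example (a minimal sketch; the variable names are only illustrative), the fixed-width types keep their size no matter what the ABI decides for plain int and long:
Code: Select all
// Fixed-width types from <stdint.h> stay the same size on every
// conforming platform, whether int/long are 32 or 64 bits there.
#include <stdint.h>
#include <iostream>

int main()
{
    uint32_t phys = 0xB8000u;           // always 32 bits
    uint64_t big  = uint64_t(1) << 40;  // always 64 bits, even where long is 32 bits

    std::cout << sizeof(phys) << ' ' << sizeof(big) << '\n';  // prints "4 8"
    return 0;
}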
Re: int versus long
jerryleecooper wrote:
In C, I always thought that an "int" was equal to the native word size of the processor. If the processor was running in 16-bit mode, the int was 16 bits long; if the processor was 32-bit, the int was 32 bits in length; and in 64-bit mode, the int is 64 bits.
The "long" type is always 32 bits, and the "short" type is always 16 bits.
Am I right, or wrong? Some people seem to be confused about this.
Well, there is a difference between data and address ranges. In 64-bit long mode, int is still 32 bits whereas long is 64 bits: the default data width is 32 bits, while the address range is 64-bit. To get around the problem I also use GCC, because of __attribute__((mode)), which lets you fix the exact size of an integer type.
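Roughly like this (a sketch, assuming GCC; QI/HI/SI/DI are GCC's names for 1-, 2-, 4- and 8-byte integer modes, and the typedef names are only examples):
Code: Select all
/* GCC-specific: the mode attribute fixes the width directly, so these
   typedefs keep their size regardless of what 'int' or 'long' mean on
   the target. */
typedef int          s8  __attribute__((mode(QI)));   /* 1 byte  */
typedef int          s16 __attribute__((mode(HI)));   /* 2 bytes */
typedef int          s32 __attribute__((mode(SI)));   /* 4 bytes */
typedef int          s64 __attribute__((mode(DI)));   /* 8 bytes */
typedef unsigned int u64 __attribute__((mode(DI)));   /* 8 bytes, unsigned */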
Author of COBOS
Brynet-Inc wrote:
I myself would use the stdint.h header: These are all defined in the ISO/IEC 9899:1999 standard, a nice way of managing types across platforms and operating systems.
Code: Select all
int8_t int16_t int32_t uint8_t uint16_t uint32_t int64_t uint64_t
Reference: http://www.opengroup.org/onlinepubs/009 ... int.h.html
I have a question about this. I have often been somewhat annoyed by the fact that a char always acts as a char, e.g. you have to write int(var) to make it act as a regular variable. So of course I was pleased that I could just use uint8_t instead, but no, it also acts as a char.
This is basically what I want to do:
Code: Select all
typedef uint8_t int(uint8_myvar);
I know it's a very small and stupid thing, but is there anything one can do?
I don't quite get this either. Are you just saying you want an 8-bit type you can do integer maths with? If so, you can already perform maths on a char type (char++, char--, char1 / char2, etc.).
"you have to write int(var) to make it act as a regular variable"
And what do you mean a char does not act as a regular var?
If you want to expand a char to int size, can't you just do:
Code: Select all
int x = (int)mychar;
Or are you trying to achieve something else?
Cheers,
Adam
Ah... if the behaviour of cout is the problem... well, that could only be helped if the compiler would support a non-char-8-bit-integer so <stdint.h> could define int8_t to that instead of char, and cout actually making a difference between them. Until then... no luck.
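To illustrate (a minimal sketch, assuming a typical implementation where uint8_t is a typedef for unsigned char):
Code: Select all
#include <stdint.h>
#include <iostream>

int main()
{
    uint8_t value = 65;

    std::cout << value << '\n';        // prints "A": operator<< sees a char type
    std::cout << (int)value << '\n';   // prints "65": the cast gives numeric output
    return 0;
}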
Every good solution is obvious once you've found it.
bluecode wrote: Does gcc offer something like this?
Not to my knowledge. I'm not even sure this is desirable, as it could lead to some surprising errors in code that mixes char with int8_t...
How do you typedef (u)int8_t in pdclib?
The example configuration uses (unsigned) char. You are free to use any definition you see fit in a platform overlay.
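Presumably something along these lines (an illustrative sketch only, not the actual PDCLib source):
Code: Select all
/* Hypothetical platform overlay: map the exact-width 8-bit types onto
   the target's char types. */
typedef signed char   int8_t;
typedef unsigned char uint8_t;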
Every good solution is obvious once you've found it.
Well, I don't think there's anything more to say about this, other than that I for one find it weird that we have 16-, 32- and 64-bit integers but no 8-bit integer. Somewhere along the line, whoever developed C++ decided that the 8-bit int should act differently than the others. I really don't see the point.