In C, I always thought that an "int" was equal to the native word size of the processor: if the processor was running in 16-bit mode, the int was 16 bits long; if the processor was 32-bit, the int was 32 bits; and in 64-bit mode, the int is 64 bits.
The "long" type is always 32 bits, and the "short" type is always 16 bits.
Am I right, or wrong? Some people seem to be confused about this.
ISO C99 does not specify what size the various integral types should be, but it does define a minimum size for each type (a compile-time check of these minimums is sketched after the list):
char - 8 bits
short int - 16 bits
int - 16 bits
long int - 32 bits
long long int - 64 bits
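As a sanity check, these minimums can be asserted at compile time; a minimal sketch, assuming a C++11 compiler for static_assert (C11 offers _Static_assert for the same purpose):

#include <climits>

/* Guaranteed by the standard's minimum ranges, so this compiles on any
   conforming implementation; the actual sizes may of course be larger. */
static_assert(CHAR_BIT >= 8,                      "char is at least 8 bits");
static_assert(sizeof(short) * CHAR_BIT >= 16,     "short is at least 16 bits");
static_assert(sizeof(int) * CHAR_BIT >= 16,       "int is at least 16 bits");
static_assert(sizeof(long) * CHAR_BIT >= 32,      "long is at least 32 bits");
static_assert(sizeof(long long) * CHAR_BIT >= 64, "long long is at least 64 bits");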
GCC, on the other hand, leaves the determination of integer sizes up to the ABI. The System V i386 ABI, a base document for the Itanium ABI, defines 'int' as 32 bits, the same size as 'long int'. If you want a 64-bit integer there, you have to use 'long long int'. In addition, you should note that the default operand size in long mode is still 32 bits.
In 32-bit protected mode, sizeof(unsigned int) == sizeof(unsigned long int) == sizeof(void *) (generally), presumably because the default operand size and the default address size are the same (32 bits).
In long mode, a virtual address (at least in the first implementations of long mode) is 48 bits, usually sign-extended to 64 bits, whereas the default operand size is still 32 bits (a REX prefix is required to use a 64-bit operand). This is presumably the logic behind int still being 32 bits in the ABI while pointers grow to 64 bits.
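To see this concretely, building something like the following as a 64-bit System V binary (e.g. x86-64 Linux) typically prints 4, 8 and 8, while a 32-bit build prints 4 for all three; a minimal sketch, nothing platform-specific in the code itself:

#include <iostream>

int main() {
    std::cout << "sizeof(int)    = " << sizeof(int)    << '\n';  // typically 4 under the x86-64 System V ABI
    std::cout << "sizeof(long)   = " << sizeof(long)   << '\n';  // typically 8 (LP64)
    std::cout << "sizeof(void *) = " << sizeof(void *) << '\n';  // typically 8
    return 0;
}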
In general, for ultimate portability, you should never assume the size of a type. You can usually find the sizes defined in the documentation for the C compiler or ABI that you are using.
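When you genuinely need an exact width, the fixed-width typedefs from <stdint.h> (<cstdint> in C++) take the guessing out of it; a small sketch of the idea:

#include <cstdint>

// Exact-width types: guaranteed to be exactly this wide wherever they are
// defined, regardless of what the ABI chose for plain int/long.
std::int16_t   counter16;   // exactly 16 bits
std::int32_t   counter32;   // exactly 32 bits
std::uint64_t  counter64;   // exactly 64 bits, unsigned
std::uintptr_t addr;        // unsigned integer wide enough to hold a void *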
jerryleecooper wrote:In C, I always thought that an "int" was equal to the native word size of the processor: if the processor was running in 16-bit mode, the int was 16 bits long; if the processor was 32-bit, the int was 32 bits; and in 64-bit mode, the int is 64 bits.
The "long" type is always 32 bits, and the "short" type is always 16 bits.
Am I right, or wrong? Some people seem to be confused about this.
Well, there is a difference between data and address sizes. In 64-bit long mode, int is still 32 bits whereas long is 64 bits: the default data width is 32 bits and the address range is 64-bit. To solve the problem I also use GCC, because of __attribute__((mode)), with which you can specify the exact size of an integer type.
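For reference, a rough sketch of how that GCC extension is used (GCC-specific, so it won't build on other compilers; the typedef names are just for illustration). On x86, the QI, HI, SI and DI modes are 1, 2, 4 and 8 bytes:

typedef int          my_int8   __attribute__((mode(QI)));  /* 1 byte  */
typedef int          my_int16  __attribute__((mode(HI)));  /* 2 bytes */
typedef int          my_int32  __attribute__((mode(SI)));  /* 4 bytes */
typedef int          my_int64  __attribute__((mode(DI)));  /* 8 bytes */
typedef unsigned int my_uint64 __attribute__((mode(DI)));  /* 8 bytes, unsigned */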
I have a question about this.
I have often been somewhat annoyed by the fact that a char always acts as a char, e.g. you have to write int(var) to make it act as a regular variable. So of course I was pleased that I could just use uint8_t instead, but no, it also acts as a char.
He wants an 8-bit type - (u)int8_t, so to speak - that acts as an integer and not as a char. I don't know how to achieve this either, so I defined (u)int8_t as char. But that drives you nuts if you forget to cast when doing a "cout".
I don't quite get this either. Are you just saying you want an 8-bit type you can do integer maths with? If so, you can already perform maths on a char type (char++, char--, char1 / char2 etc...).
And what do you mean a char does not act as a regular var?
you have to write int(var) to make it act as a regular var
??
If you want to expand a char to int size, can't you just do:
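int i = int(var);   // promote the char to int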
Ah... if the behaviour of cout is the problem... well, that could only be helped if the compiler supported a non-char 8-bit integer, so that <stdint.h> could define int8_t as that instead of char, and cout actually made a difference between them. Until then... no luck.
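In the meantime, the usual workaround is just to promote before streaming; a small sketch, assuming (u)int8_t is, as it typically is, a typedef for a character type:

#include <cstdint>
#include <iostream>

int main() {
    std::uint8_t v = 65;
    std::cout << v << '\n';                    // streamed as a character: prints 'A'
    std::cout << +v << '\n';                   // unary + promotes to int: prints 65
    std::cout << static_cast<int>(v) << '\n';  // explicit cast: also prints 65
    return 0;
}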
Every good solution is obvious once you've found it.
Well, I don't think there's anything more to say about this, other than that I for one think it's weird that we have 16-, 32- and 64-bit integers but no 8-bit integer. Somewhere along the line, whoever developed C++ thought that the 8-bit int should act differently than the others. I really don't see the point.