Portability is a waste of time for me. I don't plan on it at all.
I'm an x86 freak

Troy Martin wrote:char, unsigned char, short, unsigned short, int, unsigned int, long, unsigned long.
Portability is a waste of time for me. I don't plan on it at all.
I'm an x86 freak

And then you realize that even C compilers on the x86 can be inconsistent with integer sizes.
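Roughly speaking (the exact numbers depend on the compiler and mode, which is the whole point), something like this prints different answers under a 16-bit DOS compiler, 32-bit GCC/MSVC, 64-bit MSVC and 64-bit Linux GCC, even though they all target "x86":

Code:
#include <stdio.h>

int main(void)
{
    /* All "x86", yet: int is 2 bytes on 16-bit DOS compilers and 4 elsewhere;
       long is 4 bytes under 64-bit MSVC but 8 under 64-bit Linux GCC. */
    printf("int:  %u bytes\n", (unsigned)sizeof(int));
    printf("long: %u bytes\n", (unsigned)sizeof(long));
    return 0;
}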
spere wrote:Being a C# developer, it would be
uint
or
System.UInt32
didn't expect that one, did you?
Then again in masm i would use
VarName dw

ugh.. I hate C# in that respect.. UInt32. Just look at it. What an ugly name. It's inconsistent with every other language, and with itself (the core type aliases are lower case: int, float, object, string). Microsoft should've thought a little harder about that...
NickJohnson wrote:
Solar wrote:...in its current version. (Yes, I admit, it is unlikely they'll change it, but...)
If you think about it, why do compilers even have the types "int", "short", etc. anymore? It's not any faster, and it's not any simpler. Plus, I rarely have a circumstance where I don't care exactly how many bits I can use in an integer. If you want to make any remotely portable code, you're going to end up using int32_t and int16_t, or at least intmax_t, and that requires including an extra header.

This ain't about portability. IMHO, this goes much, much deeper.
You need an integer of exactly 32 bits. Anyone who simply checks whether int or long happens to be good enough and then uses that deserves every bit of pain buggy code ever inflicts on him. And some more.
"int" might be 32 bit, but it neither guarantees nor says so. "int" says "a whatever-sized integer value, I don't really care". Whereas int32_t says "a 32bit integer, and make it exactly 32 bits because it's important".
And as for using standards vs. rolling your own... sure, there might be some advantages in driving on the wrong side of the road (such as no-one in front of you to slow you down), but in the long run you aren't making friends that way...
Edit: That paragraph above about "saying so" refers to the documentation value of "int32_t" vs. "int", of course.
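To put the "saying so" point in code - struct and field names made up purely for illustration:

Code:
#include <stdint.h>

/* Hypothetical on-disk header: every field must be exactly this wide,
   or the format breaks on a platform where "int" happens to differ. */
struct disk_header {
    uint32_t magic;    /* exactly 32 bits, and the code says so */
    uint16_t version;  /* exactly 16 bits                       */
    uint16_t flags;
    uint32_t length;
};

/* With plain int/short, the same struct merely hopes for those sizes. */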
Why doesn't the C standard simply force the compiler to have those types as builtins, instead of the normal integer types? Then people would write correct code *from the beginning*, and laziness would no longer be an excuse for using the plain types.
If I ever write a C compiler (and I'm planning to... eventually), I would more than happily throw "int" out the window.
earlz wrote:This is very true.. Why do they have int?

Cause generally, programmers that don't have a f**king clue about 32-bit integers versus 16-bit integers versus 64-bit ones etc. just use int in all their code cause "it's easier". Well, if I were a programmer that didn't need to deal with low-level stuff like uint32_t and friends, I'd just slap int everywhere without thinking about the bytes I could be saving by using short or char or the appropriate ?int*_t type.
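A made-up illustration of the bytes in question (names and sizes invented for the example):

Code:
#include <stdint.h>
#include <stdio.h>

/* One million samples that only ever hold values 0..255. */
int     lazy[1000000];     /* 4 MB with a typical 32/64-bit compiler     */
uint8_t careful[1000000];  /* 1 MB, and the intended range is documented */

int main(void)
{
    printf("%lu vs %lu bytes\n",
           (unsigned long)sizeof lazy, (unsigned long)sizeof careful);
    return 0;
}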
Troy Martin wrote:Cause generally, programmers that don't have a f**king clue about 32-bit integers versus 16-bit integers versus 64-bit ones etc. just use int in all their code cause "it's easier".

Well well... maybe programmers should be made more aware..
Code:
char *longstring;
...
int i;
for(i=0;i<strlen(longstring);i++){   /* i is signed, strlen() returns size_t */
    putc(longstring[i], stdout);     /* putc() needs a stream argument */
}
Code:
for(i=0;i<strlen(longstring);i++){ putc(longstring[i], stdout); }

Any decent compiler (MSVC and GCC, at the least) will throw a warning about the conversion from unsigned to signed (strlen returns a size_t). So your example is invalid, because programmers are made aware that their code is faulty.
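For what it's worth, a version that avoids both the warning and the repeated strlen() call could look something like this (just a sketch):

Code:
#include <stdio.h>
#include <string.h>

void print_string(const char *longstring)
{
    size_t len = strlen(longstring);  /* size_t matches strlen()'s return type */
    size_t i;

    for (i = 0; i < len; i++)         /* no signed/unsigned mismatch */
        putc(longstring[i], stdout);
}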
Troy Martin wrote:Cause generally, programmers that don't have a f**king clue about 32-bit integers versus 16-bit integers versus 64-bit ones etc. just use int in all their code cause "it's easier"

Actually, the reason why we "still have" int is backwards compatibility (with the current standard, too). If you'd like to make a variant of C that doesn't have int, short, etc. - go ahead, but it's non-standard C. Just use stdint.h, which was introduced in C99.
Troy Martin wrote:Looks like what we need to do is tell the world that int and friends are deprecated and we should all use the wonders of platform independent variable types.

I still want to implement a language where you do
Code:
int:8 byte;
int:16 word;
int:21 wtf;
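As an aside, plain C bit-fields already get part of the way there, though only inside a struct; the names below are made up, and the widths assume int is at least 32 bits on the target:

Code:
#include <stdio.h>

struct odd_widths {
    unsigned int byte_val :  8;
    unsigned int word_val : 16;
    unsigned int wtf      : 21;   /* 21 bits is perfectly legal as a bit-field */
};

int main(void)
{
    struct odd_widths o = { 0xFF, 0xFFFF, 0x1FFFFF };
    printf("%u %u %u (sizeof = %u)\n",
           (unsigned)o.byte_val, (unsigned)o.word_val, (unsigned)o.wtf,
           (unsigned)sizeof o);
    return 0;
}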
earlz wrote:Anyone know how to specify a 16bit number in C where int and long are 64 bits? (presumably short would be 32)

Oh come on earlz, read through the topic again! Use uint16_t!
And any hope of getting a uint128_t anytime soon?

I think it's an unofficial extension on some compilers (likely MSVC) but as far as I know it's nonstandard. Might be standard in C1x, but as of the latest draft, I couldn't find anything about it. Googling it displays something about GCC 3.1 on the PowerPC having uint128_t.
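That said, newer GCC and Clang on 64-bit targets do offer a non-standard __int128 / unsigned __int128 extension - not a real uint128_t and not portable, but it exists. A sketch (won't compile on 32-bit targets or MSVC):

Code:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* GCC/Clang extension, 64-bit targets only; there is no printf
       conversion for it, so print the value in two 64-bit halves. */
    unsigned __int128 big = (unsigned __int128)UINT64_MAX + 1;   /* 2^64 */

    printf("high: %llu, low: %llu\n",
           (unsigned long long)(big >> 64),
           (unsigned long long)big);
    return 0;
}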