iub wrote:
I'm writing my hobby OS. First I used Turbo C, which is a 16-bit compiler. Now I'm using DJGPP. There is a lot of confusion about the size of the different data types in DJGPP. Please tell me what the size of an integer and the other data types is in DJGPP.
The C standard allows some variance in the size of ints: it defines a short int as being at least 16 bits wide and a long int as being at least 32 bits wide, and requires that sizeof(short) <= sizeof(int) <= sizeof(long). Thus the issue depends on the compiler. This allows compiler writers to make the default int match the size of the CPU's data width.
The C compiler packaged with DJGPP is a port of gcc, which uses a 32-bit int by default.
Please also explain the difference between 16-bit and 32-bit compilers. Is it something to do with the size of integers?
Not exactly. What 16-bit and 32-bit refer to is the data width of the CPU, and specifically the width of the data and/or address registers. This means that the CPU can hold a value of up to that size in a single register, or can have a flat address space of up to that size. A 16-bit register can hold values from 0 to 65535 (or from -32768 to 32767 for signed two's-complement numbers), while a 32-bit register can hold values from 0 to 4294967295 (or -2147483648 to 2147483647 signed).
The older x86 CPUs (8086, 8088, 80186) only had 16-bit data registers, and had a complex method of addressing (called segmentation) that combined two 16-bit registers to get a 20-bit (1 megabyte) address space. This method meant that memory was broken into overlapping sections called segments, each of which could be at most 64 kilobytes (a 16-bit offset) in size. To use more memory than 64K, you had to use several segment registers at a time; at any given moment you could use one segment for code (the CS segment), one for the stack (the SS segment), and two for data (the DS segment and the ES segment), for a total of 256K. Using even more than that required you to fiddle with the segment registers as the program ran, a tricky process that required a complicated compiler.
Later, the 80286 added a 16-bit protected mode, which extended the address space to 24 bits (16 megabytes), but the registers themselves were still all 16 bits, and each segment could still only be up to 64K in size.
Finally, when the 80386 came out, it extended all the data and address registers (but not the segment registers) to 32 bits, and added another new mode, 32-bit protected, which could address a full 4 gigabytes in a single segment. Because this makes programming much simpler, virtually every system after 1995 or so uses 32-bit protected mode. However, for reasons of backwards compatibility, every PC always starts in 16-bit real mode (the original mode from the 8086).
What this has to do with the compiler is that 32-bit mode performs CPU instructions in a different way than the 16-bit modes do, so code has to be specifically compiled for the mode it is to run in. A 32-bit compiler is one that produces code for 32-bit protected mode, while a 16-bit compiler is one that produces code for real mode (or, less often, for 16-bit protected mode). Also, a 16-bit program can't use as much memory as a 32-bit program, and there are severe limitations on how it can use the memory it does have (because of the segment limits). Conversely, a 32-bit program can't use certain system instructions which are restricted in p-mode; a 32-bit compiler has to avoid using those instructions in the generated code.
The Pentium Pro and later can actually use 36-bit addresses (that is, up to 64 gigabytes, via a feature called PAE), but setting it up is extremely difficult and thus it is generally not used; since few motherboards will accept more than 2G of memory, it is not a serious issue anyway, at least not yet. ::)