Memory Allocation
- JoeTheProgrammer
- Member
- Posts: 48
- Joined: Mon Aug 13, 2007 2:30 pm
Do you have to get the total RAM size in order to initialize the memory manager? I think I am lost...but if I have the right idea, how do you get the total RAM size of the platform?
Please help,
Joseph
- JoeTheProgrammer
- Member
- Posts: 48
- Joined: Mon Aug 13, 2007 2:30 pm
Solar wrote: How Do I Determine The Amount Of RAM.
Thanks, as I did not see this in the wiki.
JoeTheProgrammer wrote:
Solar wrote: How Do I Determine The Amount Of RAM.
Thanks, as I did not see this in the wiki.
I can see that. Mediawiki's search engine sucks.
C8H10N4O2 | #446691 | Trust the nodes.
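For reference, the approach that wiki page describes centers on the BIOS INT 0x15, EAX=0xE820 call, which returns a list of 20-byte address-range descriptors. The real-mode query loop itself has to run in boot code; below is a minimal C sketch of the kernel-side step, assuming the bootloader has already collected the descriptors into an array (the struct and function names are made up for illustration):

    #include <stdint.h>
    #include <stdio.h>

    /* Standard 20-byte E820 address-range descriptor. */
    struct e820_entry {
        uint64_t base;    /* start of the region */
        uint64_t length;  /* size of the region in bytes */
        uint32_t type;    /* 1 = usable RAM; everything else is reserved */
    };

    /* Sum all usable RAM reported by the firmware. Assumes the
     * bootloader already ran the INT 0x15/E820 loop and stored the
     * results in `map` (hypothetical names). */
    uint64_t total_usable_ram(const struct e820_entry *map, int count)
    {
        uint64_t total = 0;
        for (int i = 0; i < count; i++)
            if (map[i].type == 1)
                total += map[i].length;
        return total;
    }

    int main(void)
    {
        /* Example map: 639K of low memory plus 127M above 1MB. */
        struct e820_entry map[] = {
            { 0x00000000, 0x0009FC00, 1 },
            { 0x00100000, 0x07F00000, 1 },
        };
        printf("usable RAM: %llu bytes\n",
               (unsigned long long)total_usable_ram(map, 2));
        return 0;
    }

Only type-1 ranges count as usable; reserved and ACPI regions must be left alone.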
CmpXchg wrote:
JamesM wrote: DOS is realmode, isn't it? So wouldn't that driver just pull the memory map from the BIOS?
Yeah, it probably would. But what if the required BIOS functions are not supported? Do you think it never attempts to probe RAM directly?
I have no idea. I was merely postulating - hence the inquisitive tone and question marks.
CmpXchg wrote: I noticed that an old DOS driver called HIMEM.SYS is capable of determining the amount of RAM on any machine, so it must contain a reliable algorithm.
Uh, no. HIMEM.SYS is the Extended Memory driver. I'm not sure exactly what mode the later versions of DOS actually ran in -- but with the "Huge Memory Model" and HIMEM.SYS, they were capable of addressing somewhere between 64MB and 4GB with "far pointers". (Yes, the "near" pointers were only 16 bit.)
JamesM wrote: DOS is realmode, isn't it? So wouldn't that driver just pull the memory map from the BIOS?
DOS, however, is completely built on using BIOS functions to do all the hard work. I really doubt it uses manual probing at any point. I suspect that HIMEM.SYS exclusively used INT 0x15 calls (up to E801, I'd bet).
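To make the E801 remark concrete: INT 0x15, AX=0xE801 reports extended memory as two numbers - the KB between 1MB and 16MB (capped at 0x3C00, i.e. 15MB) and the count of 64KB blocks above 16MB. A small sketch of combining them, assuming the register values were captured by real-mode code (the helper name is hypothetical):

    #include <stdint.h>
    #include <stdio.h>

    /* INT 0x15, AX=0xE801 returns (in real mode):
     *   AX/CX = KB of memory between 1MB and 16MB (at most 0x3C00)
     *   BX/DX = 64KB blocks of memory above 16MB
     * This hypothetical helper converts the two values into bytes. */
    uint64_t e801_extended_bytes(uint16_t kb_1m_to_16m,
                                 uint16_t blocks_above_16m)
    {
        return (uint64_t)kb_1m_to_16m * 1024u
             + (uint64_t)blocks_above_16m * 65536u;
    }

    int main(void)
    {
        /* Example: a 64MB machine reports AX=0x3C00, BX=0x0300,
         * i.e. 15MB + 48MB of extended memory (plus 1MB low memory). */
        uint64_t bytes = e801_extended_bytes(0x3C00, 0x0300);
        printf("extended memory: %llu bytes\n",
               (unsigned long long)bytes);
        return 0;
    }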
- CmpXchg
- Member
- Posts: 61
- Joined: Mon Apr 28, 2008 12:14 pm
- Location: Petrozavodsk, Russia during school months, Vänersborg Sweden in the summertime
Thanks for your knowledgeable reply, bewing!
But what do you mean by "Huge Memory Model"?
As far as I know (or guess), DOS itself runs in real mode, but as soon as HIMEM.SYS loads, it switches the system into protected mode, enables A20, and becomes a kind of supervisor. As a result, the entire DOS session runs in V86, but this goes unnoticed by most programs.
I first noticed this when I tried to read CR0 under DOS - the PE bit was set(!), which can only mean that the system was running in V86 mode.
EDIT: sorry, I was mistaken. HIMEM.SYS doesn't switch the machine into protected mode; it merely enables A20 so the DOS kernel can be moved into the HMA (the memory area of 65520 bytes starting at 0x100000, the highest region a program can access in real mode). What really enters protected mode (and then starts a V86 task) is the memory manager EMM386.
I quite agree that HIMEM uses only BIOS functions to get memory size.
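The HMA figures follow directly from real-mode address arithmetic: linear = segment * 16 + offset, so FFFF:0010 through FFFF:FFFF covers 0x100000-0x10FFEF, which is 65520 bytes, and it is only reachable with A20 enabled (otherwise the address wraps back to zero). A quick C sketch that just models the arithmetic:

    #include <stdint.h>
    #include <stdio.h>

    /* Real-mode address translation: linear = segment * 16 + offset.
     * With A20 disabled the result wraps modulo 1MB (8086 behaviour);
     * with A20 enabled, segment FFFF reaches just past the 1MB mark. */
    uint32_t linear(uint16_t seg, uint16_t off, int a20_enabled)
    {
        uint32_t addr = ((uint32_t)seg << 4) + off;
        return a20_enabled ? addr : addr & 0xFFFFF; /* wrap without A20 */
    }

    int main(void)
    {
        uint32_t lo = linear(0xFFFF, 0x0010, 1);  /* 0x100000: HMA start */
        uint32_t hi = linear(0xFFFF, 0xFFFF, 1);  /* 0x10FFEF: HMA end   */
        printf("HMA: 0x%05X..0x%05X (%u bytes)\n", lo, hi, hi - lo + 1);
        printf("same address without A20: 0x%05X\n",
               linear(0xFFFF, 0x0010, 0));        /* wraps to 0x00000 */
        return 0;
    }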
Compilers under DOS could use one of four possible "memory models". The choice affected the way that segment registers were handled in the assembled output.
1) Tiny. The only choice for .COM programs. All code and data are contained in one 64K segment, so all the segment registers are set to exactly the same value at load time, and never changed. Uses 16-bit pointers.
2) Small. Code is in a separate segment from data, but code < 64K and data < 64K. Again, the segment registers are never modified. Uses 16-bit pointers.
3) Standard. Can access all low mem (640K), using segment:offset pointers for every memory access.
4) Huge. Uses 32bit flat memory pointers for all memory accesses.
I still use M$ QuickC to make .COM files whenever I need a quickie utility to do some little thing, and run it in a DOS window.
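On the "far pointers" point: a far pointer is a 16-bit segment:offset pair, and since many pairs alias the same linear address, huge-model compilers kept pointers "normalized" (offset below 16) so that comparison and arithmetic behaved sanely across 64K boundaries. A small C model of that normalization (types and names invented for illustration):

    #include <stdint.h>
    #include <stdio.h>

    /* A real-mode far pointer modeled as segment:offset.
     * Linear address = segment * 16 + offset (20 bits on an 8086). */
    struct far_ptr { uint16_t seg, off; };

    /* Normalize so the offset is always < 16. Many seg:off pairs
     * alias the same linear address (e.g. 0000:0010 == 0001:0000),
     * so normalizing keeps pointer comparisons consistent. */
    struct far_ptr normalize(struct far_ptr p)
    {
        uint32_t lin = ((uint32_t)p.seg << 4) + p.off;
        struct far_ptr n = { (uint16_t)(lin >> 4), (uint16_t)(lin & 0xF) };
        return n;
    }

    int main(void)
    {
        struct far_ptr a = { 0x1000, 0xFFFF };   /* linear 0x1FFFF */
        struct far_ptr n = normalize(a);
        printf("%04X:%04X -> %04X:%04X\n", a.seg, a.off, n.seg, n.off);
        return 0;
    }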
bewing wrote: 3) Standard. Can access all low mem (640K), using segment:offset pointers for every memory access.
Not really. It usually uses one data segment of 64KB, with a heap allowing you to allocate more memory. To access structures on the heap, one uses pointers and pointer dereferencing, but 'normal' global variables are stored in the data segment, which is still limited to 64KB. As for code, it depends on the language and compiler used.
bewing wrote: 4) Huge. Uses 32bit flat memory pointers for all memory accesses.
Only available with a DOS extender, I presume? Or using unreal mode?
JAL