Reserved memory in Protected mode with no Paging

Posted: Sat Apr 06, 2013 4:28 pm
by rickbutton
I have been experimenting with the idea of a protected mode OS that doesn't use paging, but still lets the kernel manage memory.

(I know none of this makes practical sense in a real operating system, just want to see if it works well)

Instead of assigning each process a virtual address space, when processes request memory they are given a chunk out of one big heap managed inside the kernel. Without paging, this means there can be several chunks of physical memory that are not contiguous (because of reserved areas). I have experimented with using multiple heaps to cover all of the usable memory, but it seems flaky and hard to manage.

My question is: would it be an OK solution to detect the largest region of available memory (usually the section above 1 MiB, with some small reserved areas at the end) and use that as the heap, disregarding the lower memory?

Re: Reserved memory in Protected mode with no Paging

Posted: Sat Apr 06, 2013 5:09 pm
by Mikemk
In other words, you want to program for real mode in pmode?

Re: Reserved memory in Protected mode with no Paging

Posted: Sat Apr 06, 2013 5:10 pm
by rickbutton
I guess what I am asking is whether it is safe to do that from protected mode. Is there some quirk in protected mode that I haven't found that changes the characteristics of that chunk of memory?

Re: Reserved memory in Protected mode with no Paging

Posted: Sat Apr 06, 2013 5:20 pm
by Mikemk
Let me get this straight:
You want to use protected mode, where the kernel can keep an eye on and restrict program activities.
You then want to have all the programs run in the same memory, with permission to access other programs' memory, thereby defeating the purpose of entering pmode in the first place.
And you want to do this so that you don't have to use the restrictions you turned on.

Re: Reserved memory in Protected mode with no Paging

Posted: Sat Apr 06, 2013 5:27 pm
by rickbutton
No, you misunderstood me. I am entering protected mode to gain access to the full range of memory. I don't want to enter protected mode for any of the 'protected' features other than the wider memory range. I was asking if there were any quirks in memory addressing that make it unacceptable to address 1 MiB to the end of memory without paging in protected mode. I didn't think there would be; I'm just making sure before I spend a long time debugging a problem I didn't foresee.

Re: Reserved memory in Protected mode with no Paging

Posted: Sat Apr 06, 2013 8:14 pm
by Mikemk
Protected mode doesn't extend memory. A20 Line

Re: Reserved memory in Protected mode with no Paging

Posted: Sat Apr 06, 2013 9:58 pm
by Kazinsal
m12 wrote:Protected mode doesn't extend memory. A20 Line
The A20 line is just an AND gate on address line 20. Even with it enabled, in real mode you're still limited to a segment limit of FFFF. 32-bit protected mode segments raise the segment limit to a maximum of FFFFFFFF. You can do protected mode without A20: every odd MiB is just going to be wired to the MiB before it, and you'll still get 2 GiB of working address space.
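That aliasing is easy to model. A minimal sketch (the function name is my own) of what a disabled A20 gate does to a physical address, forcing bit 20 low so each odd MiB aliases the even MiB below it:

```c
#include <stdint.h>

/* With the A20 gate disabled, address line 20 is forced to 0, so any
 * address with bit 20 set aliases the MiB below it. This models that
 * masking (illustrative only, not real hardware code). */
static uint32_t a20_disabled_addr(uint32_t addr)
{
    return addr & ~(1u << 20); /* force address line 20 low */
}
```

For example, 0x00100000 (the start of the second MiB) aliases 0x00000000, while addresses in an even MiB pass through unchanged, which is why half the 4 GiB address space is lost.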

Re: Reserved memory in Protected mode with no Paging

Posted: Sat Apr 06, 2013 11:19 pm
by Brendan
Hi,
rickbutton wrote:My question is that would it be an OK solution to detect the largest region of available memory (usually the section above 1MB with some small reserved at the end) and use that as the heap, disregarding the lower memory?
You could write "global heap" code that's only capable of handling one area of physically contiguous RAM, so that your global heap code only needs to care about "start address and size" for one area. However, if you do that, it should be easy to have one "global heap" for each different area of physically contiguous RAM, where each separate global heap only needs to care about "start address and size" for its own area.
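The "one heap per contiguous area" idea can be sketched like this (structure and function names are my own; a bump pointer stands in for a real free-list allocator, just to show the shape):

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_REGIONS 8

/* One heap per physically contiguous RAM area; each heap only needs
 * to know its own start address and size. */
struct region_heap {
    uintptr_t start;   /* base of the contiguous area */
    size_t    size;    /* total bytes in the area */
    size_t    used;    /* bytes handed out so far (bump allocator) */
};

static struct region_heap heaps[MAX_REGIONS];
static int heap_count;

/* Register one contiguous area (e.g. taken from the BIOS memory map). */
void heap_add_region(uintptr_t start, size_t size)
{
    if (heap_count < MAX_REGIONS) {
        heaps[heap_count].start = start;
        heaps[heap_count].size  = size;
        heaps[heap_count].used  = 0;
        heap_count++;
    }
}

/* Try each region in turn; return 0 on failure. */
uintptr_t heap_alloc(size_t bytes)
{
    for (int i = 0; i < heap_count; i++) {
        struct region_heap *h = &heaps[i];
        if (h->size - h->used >= bytes) {
            uintptr_t p = h->start + h->used;
            h->used += bytes;
            return p;
        }
    }
    return 0;
}
```

When the first region fills up, allocations simply fall through to the next registered region, so the reserved holes between areas never need special handling inside any one heap.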

Also note that as soon as anything frees any RAM you're going to be working with a fragmented global heap anyway. For example, imagine if one process allocates 1 MiB, then a second process allocates 1 MiB, then a third process allocates another 1 MiB; then the second process terminates leaving a piece of free memory between 2 allocated pieces. If processes can have multiple segments (instead of one segment that grows/shrinks) the fragmentation problem gets worse (e.g. when a process terminates you can end up with 100 different pieces of free memory scattered between allocated pieces).

You can de-fragment a global heap (by moving entire segments); but that's messy (because you'd have to make sure no CPU is using the segment when you move it) and slow. For example, you might have 1 MiB of free memory, then a 2 GiB segment, then another 1 MiB of free memory; and if someone wants to allocate 2 MiB of memory you'd have to shift 2 GiB of RAM.

If processes can only have one segment that grows/shrinks (instead of multiple segments that don't grow/shrink) you'll need to move segments around more frequently (e.g. if a process wants another 1 MiB you'd have to shift segments around rather than just finding a 1 MiB piece of free RAM from anywhere) and those segments will be larger (e.g. a large 123 MiB piece instead of 123 smaller 1 MiB pieces).

There is a way to avoid all of the fragmentation and "moving segments" problems. If you split the heap into many fixed sized segments; then because everything is the same size you'd never need to move them around. In this case a process would have to use some sort of lookup table to keep track of its segments. For example, all segments might be 4 KiB; and a process might allocate 31488 segments to store 123 MiB of data, and do something like "segment = table[virtual_address >> 12]; offset = virtual_address & 0x00000FFF;" to find the segment and offset of a piece of data at a specific virtual address. This would make the OS's memory management a lot faster, but would also be painful for processes to use. Fortunately, the CPU has special support for this that avoids the need for processes to deal with the hassle of doing those lookups manually, where processes can just use "virtual_address" directly. Basically; by using the CPU's special support, the OS's memory management becomes a lot faster (no fragmentation or "moving segments" problems) and there's no hassle for processes. ;)
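The lookup described above can be written out directly (table size and names are my own, assuming 4 KiB segments, enough for the 123 MiB example):

```c
#include <stdint.h>

#define SEG_SHIFT 12                  /* 4 KiB fixed-size segments */
#define SEG_SIZE  (1u << SEG_SHIFT)

/* Per-process table mapping a "virtual segment number" to the physical
 * base of its fixed-size segment -- a software page table, in effect. */
static uintptr_t table[32768];

/* Split a process-relative address into segment base plus offset,
 * exactly as in the "table[virtual_address >> 12]" example. */
uintptr_t translate(uint32_t virtual_address)
{
    uintptr_t segment = table[virtual_address >> SEG_SHIFT];
    uint32_t  offset  = virtual_address & (SEG_SIZE - 1);
    return segment + offset;
}
```

Because every segment is the same size, a free segment from anywhere will do, and nothing ever needs to be moved; this is precisely the job the CPU's paging hardware performs for free.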


Cheers,

Brendan

Re: Reserved memory in Protected mode with no Paging

Posted: Sun Apr 07, 2013 3:48 am
by bluemoon
The other approach is to have a "standard size" heap for each process (see Brendan's description above), while also supporting lockable buffers for flexibly sized allocations - so that while a buffer is not locked by the application, the kernel is allowed to defragment memory and alter the buffer's address.
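A minimal sketch of such lockable, relocatable buffers (names and interface are my own assumptions, not a real kernel API):

```c
#include <stddef.h>
#include <string.h>

/* While a buffer is unlocked, the kernel may move its contents to
 * defragment memory; while it is locked, its address is pinned. */
struct movable_buf {
    void  *addr;     /* current location; only stable while locked */
    size_t size;
    int    locked;
};

/* Application side: pin the buffer and obtain its current address. */
void *buf_lock(struct movable_buf *b)   { b->locked = 1; return b->addr; }
void  buf_unlock(struct movable_buf *b) { b->locked = 0; }

/* Kernel side: relocate the buffer during defragmentation.
 * Returns 1 on success, 0 if the buffer is pinned by the application. */
int kernel_relocate(struct movable_buf *b, void *new_addr)
{
    if (b->locked)
        return 0;                      /* in use; skip this buffer */
    memmove(new_addr, b->addr, b->size);
    b->addr = new_addr;
    return 1;
}
```

The application must only dereference the pointer returned by `buf_lock` and must not cache it across an unlock, since the kernel is free to move the buffer in between.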

Bottom line - use paging to avoid all those complex and non-trivial designs.

Re: Reserved memory in Protected mode with no Paging

Posted: Sun Apr 07, 2013 3:06 pm
by DavidCooper
Alternatively, design things in such a way that you can run without paging until you hit problems of memory fragmentation, then switch paging on to cure it once the advantages outweigh the disadvantages - thereby potentially giving the user better performance when the workload on the machine is low and losing nothing when it is high. Be careful not to tie things up in any kind of complexity that inhibits performance once the switch to paging is made. All apps in my OS have to be able to move their data when asked to do so by the kernel in order to eliminate fragmentation, but there may come a point at which the occasional delays that causes become too frequent, or where more than 3 GiB is required; at that point paging should be switched on and there will be no more requests to apps to move their data.

To keep the load on memory down, you should also design apps in such a way that users can close them at any time in the certain knowledge that they can reopen them and get straight back to the point where they left off. Apps that don't behave that way cause the biggest problem, because they encourage users to keep everything open all the time, leaving them working on a permanently clogged-up machine where things have to be swapped out to disk every time something new is opened.