
4 GB Protected Mode.. hm..

Posted: Wed Jun 20, 2007 6:34 am
by ComputerPsi
Okay, so I have heard that when you run out of RAM, the system uses the hard drive for memory. So what happens when the hard drive runs out of space? Do you get a general protection fault or something? Do you tell the system which part of the hard drive it could access? :P

Re: 4 GB Protected Mode.. hm..

Posted: Wed Jun 20, 2007 6:45 am
by Tyler
ComputerPsi wrote:Okay, so I have heard that when you run out of RAM, the system uses the hard drive for memory. So what happens when the hard drive runs out of space? Do you get a general protection fault or something? Do you tell the system which part of the hard drive it could access? :P
The "System" Doesn't mean computer, it means Operating System. The Operating System has to deal with allocating Memory and Hard drive space to Pages, so no processor exceptions ever happen.

Posted: Wed Jun 20, 2007 7:26 am
by ComputerPsi
okay.. If, in the GDT, I allocate and use 4 GB for something when I only have 1 GB of memory, what happens?

Posted: Wed Jun 20, 2007 7:33 am
by os64dev
Nothing, it will crash miserably, unless you as an OS developer have implemented virtual memory, in which case it will allocate 1 GiB of real memory and 3 GiB of hard disk memory. Needless to say, the hard disk memory is slow. Also, if you are running a 32-bit OS you will not be able to use the whole 4 GiB, AFAIK.

Posted: Wed Jun 20, 2007 7:38 am
by Tyler
ComputerPsi wrote:okay.. If, in the GDT, I allocate and use 4 GB for something when I only have 1 GB of memory, what happens?
Well, you can't do any serious swapping to the hard drive without paging, so the GDT really isn't related to this. But chances are, if you try to access any memory you don't have within that 4 GB physical address space, you may be lucky enough to get a fault. Alternatively, your system does nothing and you don't get your data back, or, my personal favourite, it seems to write fine but what you have really done is send the explode command to all your PCI devices.

Of course, if you mean that you are using paging and have a virtual address space with a GDT covering the entire space, then you will simply get page faults until you point the virtual pages at physical pages.

Posted: Wed Jun 20, 2007 11:54 am
by bewing
I'm wondering if it's possible to use RLE memory compression, rather than paging to hard disk, to free up memory pages under most real-world circumstances.

Posted: Wed Jun 20, 2007 2:57 pm
by Candy
bewing wrote:I'm wondering if it's possible to use RLE memory compression, rather than paging to hard disk, to free up memory pages under most real-world circumstances.
Nope. Typical memory usage would perform fairly badly under memory compression, and the only kind of "compression" that would work with RLE would be zeroes - which are handled by only actually giving the process a page when it writes to it.

Posted: Wed Jun 20, 2007 3:46 pm
by Combuster
You could try Huffman encoding. It can do fair compression under most circumstances and it encodes/decodes in linear time. The part of the equation that remains unknown is how well it actually performs, i.e. whether the extra CPU time is worth the decrease in disk accesses...
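
For what it's worth, a rough way to gauge whether that would pay off for a given page is to build the Huffman code from the page's byte frequencies and count the resulting bits. A minimal sketch in C, assuming a plain 4 KiB page buffer (the function name is just for illustration):

Code:
#include <stdint.h>
#include <stddef.h>

/* Rough gauge of how well a page would compress with byte-level Huffman
   coding: the sum of all internal-node weights of the Huffman tree equals
   the total coded length in bits.  Naive O(n^2) merging of the two
   smallest weights; fine for at most 256 symbols. */
size_t huffman_bits(const uint8_t *page, size_t len)
{
    uint64_t freq[256] = {0};
    for (size_t i = 0; i < len; i++)
        freq[page[i]]++;

    uint64_t w[256];
    int n = 0;
    for (int s = 0; s < 256; s++)
        if (freq[s])
            w[n++] = freq[s];

    if (n <= 1)          /* a constant (e.g. all-zero) page: ~1 bit per byte */
        return len;

    size_t bits = 0;
    while (n > 1) {
        /* find the two smallest weights */
        int a = 0, b = 1;
        if (w[b] < w[a]) { a = 1; b = 0; }
        for (int i = 2; i < n; i++) {
            if (w[i] < w[a])      { b = a; a = i; }
            else if (w[i] < w[b]) { b = i; }
        }
        uint64_t merged = w[a] + w[b];
        bits += (size_t)merged;
        w[a] = merged;          /* replace one entry, delete the other */
        w[b] = w[--n];
    }
    return bits;   /* compare against len * 8 to decide if compression pays off */
}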

Posted: Wed Jun 20, 2007 3:59 pm
by jnc100
The other problem is that using some of your memory as a compressed store reduces the amount of working memory, thus increasing the likelihood that you need to compress/decompress data, as well as increasing the number of TLB invalidations/refills.

edit: this might be useful

Regards,
John.

Posted: Wed Jun 20, 2007 4:28 pm
by Colonel Kernel
Candy wrote:Nope. Typical memory usage would perform fairly badly under memory compression, and the only kind of "compression" that would work with RLE would be zeroes - which are handled by only actually giving the process a page when it writes to it.
So if a process reads from a zero-filled page, it can be reading from a read-only, globally shared, "canonical zero page"? Kinda neat idea... :)
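
In case it helps, here is a minimal sketch of that idea, assuming a 32-bit setup; map_page(), alloc_frame(), zero_frame() and invlpg_addr() are hypothetical placeholders for whatever the memory manager provides:

Code:
#include <stdint.h>

/* Placeholders for what the memory manager is assumed to provide. */
void     map_page(uint32_t *page_dir, uint32_t vaddr, uint32_t frame, int writable);
uint32_t alloc_frame(void);
void     zero_frame(uint32_t frame);
void     invlpg_addr(uint32_t vaddr);

extern uint32_t zero_frame_phys;   /* one 4 KiB frame kept full of zeroes */

/* Map every fresh anonymous page read-only to the shared zero frame:
   reads see zeroes, writes trigger a page fault. */
void map_anonymous_page(uint32_t *page_dir, uint32_t vaddr)
{
    map_page(page_dir, vaddr, zero_frame_phys, /*writable=*/0);
}

/* In the page fault handler, a write to such a page is resolved by
   finally giving the process a private zeroed frame of its own. */
void handle_write_to_zero_page(uint32_t *page_dir, uint32_t vaddr)
{
    uint32_t frame = alloc_frame();
    zero_frame(frame);                        /* must still read as zeroes */
    map_page(page_dir, vaddr, frame, /*writable=*/1);
    invlpg_addr(vaddr);                       /* flush the stale TLB entry */
}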

Posted: Wed Jun 20, 2007 5:45 pm
by Kevin McGuire
ComputerPsi wrote:okay.. If, in the GDT, I allocate and use 4 GB for something when I only have 1 GB of memory, what happens?
A general protection fault (GPF) occurs.

The standard method is to implement virtual memory, aka paging, and give each process its own address space. There are also multiple ways you can store the swap data: a partition, a whole disk, or a file. The most straightforward and apparently most widely used mechanism for deciding when to swap memory to disk is waiting for the amount of used memory to exceed a threshold (say ninety percent, for example). In Linux, for example, a thread is woken which starts to find pages that are cold (have not been accessed recently by a process). These pages are written to disk, unmapped, and a special flag can be set to remember that the page was swapped to disk (for later, when it is accessed again).
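
As a minimal sketch of that threshold check (the frame counters and the ninety-percent figure are assumptions; the names are hypothetical):

Code:
#include <stddef.h>

/* Hypothetical bookkeeping kept by the physical allocator. */
extern size_t total_frames;
extern size_t used_frames;

#define SWAP_THRESHOLD_PCT 90   /* the "ninety percent" threshold from above */

/* Called from memory-allocation paths or a periodic thread: returns
   non-zero when the swapper should start pushing cold pages to disk. */
int swap_pressure(void)
{
    return used_frames * 100 > total_frames * SWAP_THRESHOLD_PCT;
}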

The way you implement disk swapping is by using virtual memory. The idea is to create a separate virtual memory directory and associated tables for every process. Each time you switch to another process you switch to its page directory by (re)loading the processor register CR3. You do not change CR3 when switching between the threads of a process.
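
A rough sketch of that part of a context switch, assuming GCC-style inline assembly on 32-bit x86 (the process structure here is hypothetical):

Code:
#include <stdint.h>

/* Each process owns one page directory; its threads all share it. */
struct process {
    uint32_t page_dir_phys;   /* physical address of the page directory */
};

static inline void load_cr3(uint32_t pd_phys)
{
    asm volatile("mov %0, %%cr3" : : "r"(pd_phys) : "memory");
}

/* Reload CR3 only when switching to a different address space, so
   thread-to-thread switches inside one process keep the TLB warm. */
void switch_address_space(const struct process *prev, const struct process *next)
{
    if (prev->page_dir_phys != next->page_dir_phys)
        load_cr3(next->page_dir_phys);
}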

When paging is enabled and a valid directory is loaded, any memory access will reference an entry in this directory and then an entry in the table linked from the directory. That entry holds a physical address in RAM.
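
The translation the MMU performs on every access can be mirrored in software roughly like this (a sketch assuming 4 KiB pages and that the directory and tables are readable by the kernel, e.g. identity-mapped):

Code:
#include <stdint.h>

#define PG_PRESENT 0x001

/* Top 10 bits index the directory, next 10 bits index the table,
   low 12 bits are the offset within the page. */
uint32_t virt_to_phys(const uint32_t *page_dir, uint32_t vaddr)
{
    uint32_t pde = page_dir[vaddr >> 22];
    if (!(pde & PG_PRESENT))
        return 0;                                    /* not mapped */

    const uint32_t *table = (const uint32_t *)(pde & 0xFFFFF000);
    uint32_t pte = table[(vaddr >> 12) & 0x3FF];
    if (!(pte & PG_PRESENT))
        return 0;                                    /* not mapped */

    return (pte & 0xFFFFF000) | (vaddr & 0xFFF);     /* physical address */
}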

:idea: A common method is to map the kernel to the exact same place in every process's virtual memory, so that each time CR3 is reloaded the kernel still remains at the exact same (virtual) memory address.
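
A minimal sketch of that, assuming a higher-half kernel at 0xC0000000 (directory entry 768 upwards - adjust to your own layout):

Code:
#include <stdint.h>

#define KERNEL_PDE_FIRST 768   /* assumption: kernel lives at 0xC0000000+ */

/* When creating a new process's page directory, copy the kernel's
   directory entries so the kernel sits at the same virtual address in
   every address space. */
void clone_kernel_mappings(uint32_t *new_dir, const uint32_t *kernel_dir)
{
    for (int i = KERNEL_PDE_FIRST; i < 1024; i++)
        new_dir[i] = kernel_dir[i];   /* share the kernel's page tables */
}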

So if you only had one gigabyte of RAM then:
  • It would take a while to fill this RAM up as the system loaded, or as you used the system and started working with applications.
  • If a threshold of used RAM was reached (let's say ninety percent), then you would immediately start looking for pages in the system that have been allocated by processes but have not been accessed recently, or ever. By not accessed, I mean the accessed flag is not set in the page table entry.
You would then open a file on the hard disk, or use a partition, and write that page (4096 bytes) to disk. Then unmap it from that process, store some information recording the fact that you swapped that page to disk, and finally add that physical page to the list, stack, or whatever mechanism you use to represent free pages. That way you maintain a system that never reaches one hundred percent RAM usage until you run out of hard disk space (file or partition space), as sketched below.
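
Put together, swapping out one cold page might look roughly like this; every helper here (swap_alloc_slot, swap_write, unmap_page, remember_swap_slot, free_frame, invlpg_addr, frame_to_kernel_ptr) is a hypothetical placeholder for the disk driver and memory manager:

Code:
#include <stdint.h>

/* Hypothetical helpers (disk driver, memory manager, per-process bookkeeping). */
int      swap_alloc_slot(void);                          /* free 4 KiB slot in the swap file/partition */
void     swap_write(int slot, const void *data);         /* write one page to that slot */
void    *frame_to_kernel_ptr(uint32_t frame);            /* where the kernel can read the frame */
uint32_t unmap_page(uint32_t *page_dir, uint32_t vaddr); /* clear present bit, return old frame */
void     remember_swap_slot(uint32_t *page_dir, uint32_t vaddr, int slot);
void     free_frame(uint32_t frame);
void     invlpg_addr(uint32_t vaddr);

/* Swap one cold page of a process out to disk and free its frame. */
void swap_out_page(uint32_t *page_dir, uint32_t vaddr)
{
    int slot = swap_alloc_slot();
    uint32_t frame = unmap_page(page_dir, vaddr);        /* page is now invisible to the process */

    swap_write(slot, frame_to_kernel_ptr(frame));        /* 4096 bytes go to disk */
    remember_swap_slot(page_dir, vaddr, slot);           /* so the fault handler can find it later */
    free_frame(frame);                                   /* frame goes back to the free list */
    invlpg_addr(vaddr);                                  /* drop any stale TLB entry */
}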

If you swap a page from a process to disk and that process later goes to access that page, let's say five minutes later, then a page fault is generated. When handling this page fault exception you check whether the page was swapped out or whether the application is just accessing memory it never had (a bug). If you swapped it to disk, then you simply read that page back into memory and remap it into the application. You might not use the same physical page when remapping. Of course, the general idea is that at the same time you are also keeping the system's used memory under the threshold (ninety percent in the example above).


You come down to having two separate systems in your kernel.
A periodic thread or routine to swap memory to disk.
  • check that used memory is below the set threshold (maybe ninety percent); the check could be performed during system calls for memory allocation, or you might schedule this as a thread or call it as a routine.
  • when memory is above the threshold, search for pages that have their accessed flag not set, which hopefully means the process allocated them but never had the chance to use them, is not using them often, or has not used them recently (see the sketch after this list).
  • unmap the pages found during the search from the process they are mapped into.
  • write the unmapped pages to disk and add the physical pages to the list of free pages.
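
A crude version of that accessed-bit scan over one page table might look like this (a sketch; after clearing the bits you would also flush the TLB so that future accesses set them again):

Code:
#include <stdint.h>

#define PTE_PRESENT  0x001
#define PTE_ACCESSED 0x020   /* set by the CPU whenever the page is touched */

/* Scan one page table for a "cold" page: present but with the accessed
   bit still clear since the last sweep.  Clearing the bit on every pass
   gives a crude not-recently-used approximation.  Returns the index of a
   cold entry, or -1 if every present page was touched recently. */
int find_cold_page(uint32_t *page_table)
{
    int cold = -1;
    for (int i = 0; i < 1024; i++) {
        uint32_t pte = page_table[i];
        if (!(pte & PTE_PRESENT))
            continue;
        if (!(pte & PTE_ACCESSED) && cold < 0)
            cold = i;
        page_table[i] = pte & ~PTE_ACCESSED;   /* reset for the next sweep */
    }
    return cold;
}
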
The page fault exception handler.
  • check whether the address falls inside a page that was swapped to disk.
  • if the page was swapped to disk, then grab a free page from the memory manager's free page list, read the data back from disk into this page, and remap the page into the process (see the sketch below).
  • if the page was not swapped, then run the normal error routine for a program trying to access memory that does not exist (was not allocated).
You might use a more advanced routine for searching (detecting pages that are not used often) rather than simply checking the accessed bit of an entry in a page table.
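
The fault-handler steps above might come out roughly like this (again a sketch; lookup_swap_slot, swap_read and the other helpers are hypothetical, matching the earlier fragments, and the faulting address would come from CR2):

Code:
#include <stdint.h>

/* Hypothetical helpers, matching the earlier sketches. */
int      lookup_swap_slot(uint32_t *page_dir, uint32_t fault_addr);  /* -1 if never swapped */
uint32_t alloc_frame(void);
void     swap_read(int slot, void *data);
void    *frame_to_kernel_ptr(uint32_t frame);
void     map_page(uint32_t *page_dir, uint32_t vaddr, uint32_t frame, int writable);
void     kill_process_for_bad_access(uint32_t fault_addr);

/* Page fault handler core: the faulting address arrives in CR2. */
void page_fault(uint32_t *page_dir, uint32_t fault_addr)
{
    int slot = lookup_swap_slot(page_dir, fault_addr);
    if (slot < 0) {
        /* Not a swapped page: the program touched memory it never had. */
        kill_process_for_bad_access(fault_addr);
        return;
    }

    uint32_t frame = alloc_frame();                   /* may be a different frame than before */
    swap_read(slot, frame_to_kernel_ptr(frame));      /* pull the 4 KiB back from disk */
    map_page(page_dir, fault_addr & 0xFFFFF000, frame, /*writable=*/1);
    /* returning from the exception retries the faulting instruction */
}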

You also might set a flag on the process that just tried to access a swapped page if you are inside the page fault exception handler, since most likely you have blocked the scheduler from running at this moment. You might then assign the job to your periodic checking thread (above). Once the periodic checking thread loads and maps the page that was being accessed, because it was swapped, you can unset the flag and the scheduler can start running this process when its turn comes up.

1. Move cold (unused but allocated) pages to disk, and free them into the system's free memory list.
2. When a swapped page is accessed, allocate a new page, load the data from disk back onto the page, map the page back into the process, and continue as normal.

Posted: Wed Jun 20, 2007 8:01 pm
by Aali
actually, most of the time you don't get any kind of exception for trying to access non-existent memory

your data will just go to god-knows-where and the CPU will go on like nothing ever happened

Posted: Wed Jun 20, 2007 8:20 pm
by Kevin McGuire
You are right.

Hmm. I just thought about that. I do not think I have ever tried writing to non-existent physical memory on a real computer. I just tried and it seems to not go anywhere, at least it does not wrap around back to zero.. I wonder what happens to it when the processor sends the address and data off towards the RAM?

Posted: Thu Jun 21, 2007 6:44 am
by Tyler
Kevin McGuire wrote:You are right.

Hmm. I just thought about that. I do not think I have ever tried writing to non-existent physical memory on a real computer. I just tried and it seems to not go anywhere, at least it does not wrap around back to zero.. I wonder what happens to it when the processor sends the address and data off towards the RAM?
I doubt the address ever gets sent to RAM... it probably just gets dropped as non-existent by the memory controller in the chipset. Don't forget that not only RAM is mapped into the address space, and it is possible the memory controller simply ignores commands to memory where there are no devices attached.

Posted: Thu Jun 21, 2007 7:25 am
by Brendan
Hi,
Kevin McGuire wrote:I do not think I have ever tried writing to non-existent physical memory on a real computer. I just tried and it seems to not go anywhere, at least it does not wrap around back to zero.. I wonder what happens to it when the processor sends the address and data off towards the RAM?
For more traditional systems, the CPU sends the transaction to the memory controller ("northbridge") and if it doesn't hit RAM (or other special areas, like AGP) then the memory controller forwards it on to the PCI host controller.

For AMD's HyperTransport there are extra steps... The CPU uses a routing table to determine where the transaction should be sent, and either sends it to its own memory controller, a remote node's memory controller, or to an I/O hub (which may be either local or remote). In any case, if the transaction doesn't hit any RAM it ends up on the PCI bus (via the I/O hub).

Once a transaction gets to PCI, settings in bridges determine if the transaction passes from one bus-segment through the bridge to another bus-segment, but if no PCI device (including bridges) claims the transaction on the bus-segment, then (IIRC) the bridge upstream of that bus-segment is responsible for sending an "abort" and terminating the transaction.


Cheers,

Brendan