4 GB Protected Mode.. hm..
-
- Member
- Posts: 83
- Joined: Fri Oct 22, 2004 11:00 pm
4 GB Protected Mode.. hm..
Okay, so I have heard that when you run out of RAM, the system uses the hard drive for memory. So what happens when the hard drive runs out of space? Do you get a general protection fault or something? Do you tell the system which part of the hard drive it can access?
Anything is possible if you put your mind to it.
ComputerPsi
Re: 4 GB Protected Mode.. hm..
ComputerPsi wrote:Okay, so I have heard that when you run out of RAM, the system uses the hard drive for memory. So what happens when the hard drive runs out of space? Do you get a general protection fault or something? Do you tell the system which part of the hard drive it could access?
The "system" here doesn't mean the computer; it means the operating system. The operating system has to deal with allocating memory and hard drive space to pages, so no processor exceptions ever happen.
-
- Member
- Posts: 83
- Joined: Fri Oct 22, 2004 11:00 pm
Nothing; it will crash miserably, unless you, as an OS developer, have implemented virtual memory. Then it will allocate 1 GiB of real memory and 3 GiB of hard disk memory. Needless to say, the hard disk memory is slow. Also, if you are running a 32-bit OS you will not be able to use the whole 4 GiB, AFAIK.
Author of COBOS
ComputerPsi wrote:okay.. If in the GDT, if I allocate and use 4 GB for something when I only have 1 GB of memory, what happens?
Well, you can't do any serious swapping to hard drive without paging, so this really is not related to the GDT. But chances are that if you try to access any memory you don't have within that 4 GB physical address space, you may be lucky enough to get a fault. Alternatively, your system does nothing and you don't get your data back, or, my personal favourite, it seems to write fine but what you have really done is sent the explode command to all your PCI devices.
Of course, if you mean that you are using paging and have a virtual address space with a GDT covering the entire space, then you will simply get page faults until you point the virtual pages at physical pages.
bewing wrote:I'm wondering if it's possible to use RL memory compression, rather than paging to hard disk, to free up memory pages, under most realworld circumstances.
Nope. Typical memory usage would perform fairly badly under memory compression, and the only kind of "compression" that would work with RLE would be zeroes - which are already handled by only actually giving the process a page when it writes to it.
The other problem is that using some of your memory as a compressed store reduces the amount of working memory, thus increasing the likelihood that you need to compress/uncompress data, as well as increasing the number of tlb invalidations/refills.
edit: this might be useful
Regards,
John.
- Colonel Kernel
- Member
- Posts: 1437
- Joined: Tue Oct 17, 2006 6:06 pm
- Location: Vancouver, BC, Canada
- Contact:
Candy wrote:Nope. Typical memory usage would perform fairly bad under memory compression and the only kind of "compression" that would work with RLE would be zeroes - which are handled by only actually giving the process a page when it writes to it.
So if a process reads from a zero-filled page, it can be reading from a read-only, globally shared, "canonical zero page"? Kinda neat idea...
Top three reasons why my OS project died:
- Too much overtime at work
- Got married
- My brain got stuck in an infinite loop while trying to design the memory manager
- Kevin McGuire
- Member
- Posts: 843
- Joined: Tue Nov 09, 2004 12:00 am
- Location: United States
- Contact:
ComputerPsi wrote:okay.. If in the GDT, if I allocate and use 4 GB for something when I only have 1 GB of memory, what happens?
A general protection fault (GPF) occurs.
The standard method is to implement virtual memory, a.k.a. paging, and give each process its own address space. There are also multiple ways you can store a swap file: a partition, a whole disk, or a file. The most straightforward, and apparently the most widely used, mechanism for deciding when to swap memory to disk is waiting for the amount of used memory to exceed a threshold (say ninety percent, for example). In Linux, for example, a thread is woken which starts to find pages that are cold (have not been accessed recently by a process). These pages are written to disk, each page is unmapped, and a special flag is set to remember that the page was swapped to disk (for later, when it is accessed again and becomes less cold).
The way you implement disk swapping is by using virtual memory. The idea is to create a separate page directory and associated page tables for every process. Each time you switch to another process you switch to its page directory by (re)loading the processor register CR3. You do not change CR3 when switching between the threads of a process.
When paging is enabled and a valid directory is loaded, any memory access will reference an entry in this directory and the correct table linked from the directory. The entry holds a physical address in RAM.
A common method is to map the kernel to the exact same place in every process's virtual memory, so that each time CR3 is reloaded the kernel still remains at the exact same (virtual) memory address.
So if you only had one gigabyte of RAM then:
- It would take a while to fill this RAM up as the system loaded, or as you used the system and started working with applications.
- If a threshold of used RAM was reached (let's say ninety percent), then you would immediately start looking for pages that have been allocated by processes but not accessed recently, or ever. By "not accessed" I mean the accessed flag is not set in the page-table entry.
If you swap a page from a process to disk and that process later goes to access that page (let's say five minutes later), then a page fault is generated. When handling this page fault exception you check whether the page was swapped out, or whether the application is just accessing memory it never had (a bug). If you swapped it to disk then you simply read that page back into memory and remap it to the application. You might not use the same physical page when remapping. Of course, the general idea is that during all this you are also keeping the system's used memory under the threshold (ninety percent in the example above).
You come down to having two separate systems in your kernel.
A periodic thread or routine to swap memory to disk:
- check that used memory is below the set threshold (maybe ninety percent); the check could be performed during system calls for memory allocation, or you might schedule it as a thread or call it as a routine.
- when memory is above the threshold, search for pages that have their accessed flag unset, which hopefully means the process allocated them but never had the chance to use them, or is not using them often or recently.
- unmap the pages found during the search from the processes they are mapped into.
- write the unmapped pages to disk and add the physical pages to the list of free pages.
A page fault handler to swap memory back in:
- check whether the faulting address is, or is inside, a page that was swapped to disk.
- if the page was swapped to disk, grab a free page from the free page list in the memory manager, read the data from disk back into this page, and remap the page into the process.
- if the page was not swapped, do the normal error routine for a program trying to access memory that does not exist (was not allocated).
You might also set a flag on the process that just tried to access a swapped page if you are inside the page fault exception handler, since most likely you have blocked the scheduler from running at this moment. You might then assign the job to your periodic checking thread (above). Once the periodic checking thread loads and maps the page that the access was attempted on, you can unset the flag and the scheduler can start running the process again when its turn comes up.
1. Move cold (allocated but unused) pages to disk, and free each page into the system's free memory list.
2. When a swapped page is accessed, allocate a new page, load the data from disk back into the page, map the page back into the process, and continue like normal.
- Kevin McGuire
- Member
- Posts: 843
- Joined: Tue Nov 09, 2004 12:00 am
- Location: United States
- Contact:
You are right.
Hmm. I just thought about that. I do not think I have ever tried writing to non-existent physical memory on a real computer. I just tried, and it seems to not go anywhere; at least it does not wrap around back to zero. I wonder what happens to it when the processor sends the address and data off towards the RAM?
Kevin McGuire wrote:You are right.
Hmm. I just thought about that. I do not think I have ever tried writing to non-existent physical memory on a real computer. I just tried and it seems to not go anywhere, at least it does not wrap around back to zero.. I wonder what happens to it when the processor sends the address and data off towards the RAM?
I doubt the address ever gets sent to RAM... it probably just gets dropped as non-existent by the memory controller in the chipset. Don't forget that not only RAM is mapped into the memory space, and it is possible the memory controller simply ignores commands to memory where there are no devices attached.
Hi,
Kevin McGuire wrote:I do not think I have ever tried writing to non-existent physical memory on a real computer. I just tried and it seems to not go anywhere, at least it does not wrap around back to zero.. I wonder what happens to it when the processor sends the address and data off towards the RAM?
For more traditional systems, the CPU sends the transaction to the memory controller ("northbridge"), and if it doesn't hit RAM (or other special areas, like AGP) then the memory controller forwards it on to the PCI host controller.
For AMD's HyperTransport there are extra steps... The CPU uses a routing table to determine where the transaction should be sent, and either sends it to its own memory controller, a remote node's memory controller, or to an I/O hub (which may be either local or remote). In any case, if the transaction doesn't hit any RAM it ends up on the PCI bus (via the I/O hub).
Once a transaction gets to PCI, settings in the bridges determine whether the transaction passes from one bus segment through a bridge to another bus segment, but if no PCI device (including bridges) claims the transaction on a bus segment, then (IIRC) the bridge upstream of that bus segment is responsible for sending an "abort" and terminating the transaction.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.