Memory hogging in an OS without paging
- Colonel Kernel
- Member
- Posts: 1437
- Joined: Tue Oct 17, 2006 6:06 pm
- Location: Vancouver, BC, Canada
- Contact:
Memory hogging in an OS without paging
I've been thinking about memory management again (ok, for the past 6 months, but anyway)...
I was thinking about OSes that don't use paging, in particular about how they might protect themselves against malicious processes that try to allocate all available memory. With paging, the OS always has the option to steal physical pages from a process, writing them out to disk as necessary. I guess you could call this "invisible revocation". In an exokernel, the kernel can notify libOSes that it intends to revoke physical pages and then take them by force after a timeout period. The exokernel papers call this "visible revocation". But what if an OS has neither (maybe Palm OS and WinCE work this way? I'm not sure...)? How does such an OS best protect itself against memory-hogging processes?
Top three reasons why my OS project died:
- Too much overtime at work
- Got married
- My brain got stuck in an infinite loop while trying to design the memory manager
-
- Member
- Posts: 1600
- Joined: Wed Oct 18, 2006 11:59 am
- Location: Vienna/Austria
- Contact:
Re:Memory hogging in an OS without paging
memory usage hysteresis?
So we aren't cruel to processes that experience some peaks of memory usage during their lifetime?
Just an idea.
A memory quota could be another one. IIRC Mac OS up to 9 used to do this.
stay safe.
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image
Re:Memory hogging in an OS without paging
I'm not entirely sure, but I think the three are the result of an enumeration of all options:
- Allow all allocations
  - Don't take back
    - memory hogs are a problem
  - Take back
    - without warning: swap out XYZ
    - with warning, with timeout: notify & timeout, then kick out XYZ
    - with warning, without timeout: degenerates to the first, since processes can plainly ignore you
- Limit processes to a given amount
  - All of the above options apply again, but without the memory hog problem
So I think these are the main cases, and that there are mainly three choices:
- Don't take anything back, assuming processes will not hog
- Take it back after asking & waiting, hoping processes will not hog
- Just take it back, assuming processes will hog
What middle way are you trying to find?
You can use a form of supervisor or hypervisor to oversee the allocations and to create and maintain segment declarations for the segments that are present.
I would strongly suggest you use paging, though; it makes it a whole lot easier to make a bit of the underlying subsystem disappear (and that is in fact what it was designed for).
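The "limit processes to a given amount" branch of the enumeration above can be sketched as a per-process quota check in front of the allocator. This is a minimal illustration under assumed names, not any particular OS's mechanism; `struct process`, `quota_alloc`, and the quota figures are all made up for the example:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical per-process accounting block. */
struct process {
    size_t quota;      /* maximum bytes this process may hold */
    size_t allocated;  /* bytes currently charged to it */
};

/* Charge an allocation against the process quota before handing out
 * memory; refuse it up front if it would exceed the quota, so a hog
 * is stopped early instead of starving the rest of the system. */
void *quota_alloc(struct process *p, size_t n)
{
    if (p->allocated + n < p->allocated ||   /* overflow check */
        p->allocated + n > p->quota)
        return NULL;
    void *mem = malloc(n);                   /* stand-in for the real allocator */
    if (mem)
        p->allocated += n;
    return mem;
}

void quota_free(struct process *p, void *mem, size_t n)
{
    free(mem);
    p->allocated -= n;
}
```

As in the enumeration, all of the take-back options could then still be layered on top, but a single process can no longer claim everything.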
- Colonel Kernel
- Member
- Posts: 1437
- Joined: Tue Oct 17, 2006 6:06 pm
- Location: Vancouver, BC, Canada
- Contact:
Re:Memory hogging in an OS without paging
beyond infinity wrote: memory usage hysteresis? So we aren't cruel to processes that experience some peaks of memory usage during their lifetime? Just an idea.
That makes a lot of sense as a policy. My question was more about mechanism. Without paging (i.e. the ability to save the contents of dirty pages that the OS is about to give to some other process) or a "visible revocation" protocol, what other mechanisms can be used? The quota is one, although it attacks the problem quite early by preventing a process from allocating "too much" memory in the first place. But this might be cruel, as you said. I'm thinking more along the lines of how to correct a bad situation once a process has already allocated nearly all memory...
Top three reasons why my OS project died:
- Too much overtime at work
- Got married
- My brain got stuck in an infinite loop while trying to design the memory manager
-
- Member
- Posts: 1600
- Joined: Wed Oct 18, 2006 11:59 am
- Location: Vienna/Austria
- Contact:
Re:Memory hogging in an OS without paging
It would come down to:
1. Have the process return allocated memory as soon as possible - that's the task of the heap management library (malloc et al: go find adjacent free chunks, merge them, and if they are at the top of the heap, return them).
2. If a process/thread keeps on hogging memory, we have a bug, don't we? It's beyond our control why that thing allocates so much memory, so it has to be debugged. If the process allocates memory beyond what is available: kick it like Beckham (remove it), as the problem isn't resolvable otherwise with our given set of tools.
3. Other processes can't be created in low-memory situations.
In my opinion this breaks down to either a) buggy code or b) an allocation-intense application. In either case the programmer needs to rethink his algorithm/allocation strategy so the program doesn't turn out to be a memory hog. The OS can only do so much to provide a stable and fair operating environment for every process in the system. If some process/thread doesn't obey the rules, it needs to be punished, period.
Have the OS recover safely from out-of-memory situations, so that everything else can continue working without having to worry.
Do you already have some design drafts for your memory management? It'd be worth the time to draft it. I've done so to get the thoughts circling in my grey matter sorted out and straight.
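Point 1 above - the heap library merging adjacent free chunks before returning memory - can be sketched roughly like this. It's a toy over an address-sorted free list, not any real malloc; the names are illustrative:

```c
#include <assert.h>
#include <stddef.h>

/* Toy free list: nodes sorted by address, each describing the
 * free range [addr, addr + size). */
struct chunk {
    size_t addr;
    size_t size;
    struct chunk *next;
};

/* Merge physically adjacent chunks, as a heap library would do
 * before deciding whether the top of the heap can be returned. */
void coalesce(struct chunk *head)
{
    for (struct chunk *c = head; c && c->next; ) {
        if (c->addr + c->size == c->next->addr) {
            c->size += c->next->size;  /* absorb the neighbour */
            c->next = c->next->next;   /* unlink it (a real allocator
                                          would recycle the node) */
        } else {
            c = c->next;
        }
    }
}
```

After coalescing, a chunk that ends at the current heap break is a candidate to hand back to the OS.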
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image
Re:Memory hogging in an OS without paging
Without paging, you cannot do "intermediate" solutions: a process runs (and allocates) or it is terminated (with or without warning). Paging is the "intermediate".
You can put a quota on the amount of memory a process can allocate, and you could do it in total or relatively: AmigaOS didn't allow a process to allocate the last X bytes of available memory, so that even if things went south you could still start some tool to recover your system (a new process could still be started).
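The AmigaOS-style reserve amounts to a single check in the allocator's admission path. A minimal sketch, assuming an arbitrary reserve size and a hypothetical `privileged` flag for recovery tools:

```c
#include <assert.h>
#include <stddef.h>

#define RESERVE_BYTES (64 * 1024)  /* arbitrary emergency reserve */

/* free_bytes: memory left system-wide. Refuse any ordinary
 * allocation that would dip into the reserve, so a recovery tool
 * can still be started when things go south. */
int may_allocate(size_t free_bytes, size_t request, int privileged)
{
    if (request > free_bytes)
        return 0;                  /* plain out of memory */
    if (!privileged && free_bytes - request < RESERVE_BYTES)
        return 0;                  /* would eat the reserve */
    return 1;
}
```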
Every good solution is obvious once you've found it.
-
- Member
- Posts: 1600
- Joined: Wed Oct 18, 2006 11:59 am
- Location: Vienna/Austria
- Contact:
Re:Memory hogging in an OS without paging
OK, I revise my statement: processes can still be started - as you say - to start up a recovery tool, as long as the last bytes of memory aren't used up. It's always best if people with different expertise come together talking and sharing thoughts.
Another proposal: introduce a kernel-level debugger which jumps in if "things go south" so you can kick the hog. One doesn't need to start an extra process to invoke such a tool. It doesn't fit inside the microkernel paradigm, but anyway, it'd be cool.
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image
- Pype.Clicker
- Member
- Posts: 5964
- Joined: Wed Oct 18, 2006 2:31 am
- Location: In a galaxy, far, far away
- Contact:
Re:Memory hogging in an OS without paging
Colonel Kernel wrote: But what if an OS has neither (maybe Palm OS and WinCE work this way? I'm not sure...)? How does such an OS best protect itself against memory-hogging processes?
"Your system is running low on memory. Please save documents and close some applications" is what Embeddix Linux says on my Zaurus ...
- Colonel Kernel
- Member
- Posts: 1437
- Joined: Tue Oct 17, 2006 6:06 pm
- Location: Vancouver, BC, Canada
- Contact:
Re:Memory hogging in an OS without paging
Thanks for all the thoughts. I'm not so much designing as gathering requirements, and the underlying question I was asking myself was "how paranoid should I be?" Based on your suggestions, I think the answer depends on the use case of my OS (which is still indeterminate):
- Embedded system: Hogging is most likely the result of a bug, not a malicious process, since the environment should be strictly controlled. Debugging aids will help the most here. Low paranoia.
- Desktop/PDA: Hogging could be a bug or an attack. The environment is relatively uncontrolled, but you can always pop up a "Kill this process?" dialog box as Pype's PDA did, as long as there is enough memory left to do so. Medium paranoia.
- Server: Hogging could be either due to a bug or due to maliciousness, although the environment should be more tightly controlled than a desktop. However, there won't necessarily be someone sitting there to hit the "kill" button if something goes wrong. I'm not sure what to do in this case to prevent denial of service attacks. Probably quotas are the answer. High paranoia.
Top three reasons why my OS project died:
- Too much overtime at work
- Got married
- My brain got stuck in an infinite loop while trying to design the memory manager
Re:Memory hogging in an OS without paging
Singularity indeed does not seem to have implemented swapping to disk. However, I don't see why they could not add that. They may simply not need it yet, memory-wise, since it is a research OS.
High paranoia might be a good thing for a server, but I'm not sure whether that holds memory-wise.
For example, Microsoft SQL Server seems to allocate most of the available memory (if the databases are large enough) to keep large buffers for performance reasons.
I was thinking a bit about how you could maintain a history of each executable and the memory it uses, so you could detect if it was behaving differently. However, I'm not sure that would be such a wise idea either. I can come up with a number of scenarios where a process might need more memory now than on a previous run (especially if the number of users increases, for example).
Whether or not you implement a swapping system may not be so interesting after all. Eventually you will run out of memory, virtual or real. It would probably be unwise to allow the swap file to expand to the maximum hard disk size.
So if you have, for example, 1 GB of actual memory and a 2 GB swap file, your boundary will be 3 GB of memory instead of 1. The limit has just moved.
Interesting discussion!
Re:Memory hogging in an OS without paging
Rob wrote: Singularity indeed does not seem to have implemented the swapping to disk process. However, I don't see why they could not add that. They may simply not need it yet, memory wise, since it is a research OS.
Question: Do they use paging hardware in Singularity?
The problem is that implementing swapping without paging hardware gets quite problematic in terms of efficiency. You could do it at the object level (rather than the page level): when you want to swap out an object, you'd have to inform everyone who holds a reference to that object that you intend to swap it out. When the application accesses it again, it would have to check whether the object is present and request swapping it in again - and this is where the problem lies: you'd have to do that check far too often (at every single access, unless you implement some locking mechanism...).
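The per-access presence check described above can be illustrated with a small handle-table sketch. All names here (`struct handle`, `access_object`, `swap_in`) are hypothetical; the point is that the test paging hardware does for free must run in software on every access:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical object handle: without paging hardware, every access
 * must test a 'present' flag in software. */
struct handle {
    void *obj;    /* valid only while present */
    int present;  /* cleared when the object is swapped out */
    int faults;   /* how often we had to swap it back in */
};

/* Stand-in for reading the object back from backing store. */
static void swap_in(struct handle *h)
{
    h->present = 1;
    h->faults++;
}

/* The check that an MMU would otherwise do for free; it runs on
 * *every* access, which is where the efficiency problem lies. */
void *access_object(struct handle *h)
{
    if (!h->present)
        swap_in(h);
    return h->obj;
}
```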
The other possibility would be to use the paging hardware (when it is present) and create a single, identity-mapped address space. But I guess Singularity should also run on embedded platforms, which hardly have paging hardware AFAIK.
cheers Joe
- Pype.Clicker
- Member
- Posts: 5964
- Joined: Wed Oct 18, 2006 2:31 am
- Location: In a galaxy, far, far away
- Contact:
Re:Memory hogging in an OS without paging
Well, the 'present' bit in GDT entries is there for that very purpose: allowing swapping of complete "logical" objects - though I don't recall an OS that actually used it. Maybe Windows 3.1 (in 16-bit only mode) did.
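For the record, that present bit is bit 47 of an x86 segment descriptor; clearing it makes the next use of the segment raise a Segment Not Present (#NP) fault, whose handler could swap the segment back in. A sketch of just the bit manipulation (function names made up):

```c
#include <assert.h>
#include <stdint.h>

/* P bit of an x86 segment descriptor: bit 7 of the access byte,
 * i.e. bit 47 of the 8-byte descriptor. */
#define DESC_PRESENT (1ULL << 47)

/* Clearing P makes any use of the segment fault with #NP, the hook
 * a segment-swapping OS would use. */
uint64_t mark_swapped_out(uint64_t desc) { return desc & ~DESC_PRESENT; }
uint64_t mark_swapped_in(uint64_t desc)  { return desc |  DESC_PRESENT; }
int is_present(uint64_t desc)            { return (desc & DESC_PRESENT) != 0; }
```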
Re:Memory hogging in an OS without paging
Pype.Clicker wrote: Well, the 'present' bit in GDT entries is there for that very purpose: allowing swapping of complete "logical" objects - though I don't recall an OS that actually used it. Maybe Windows 3.1 (in 16-bit only mode) did.
OS/2 was the main OS that used segments, and I think they might've used it. Also, the bit was (IIRC) a duplicate of a similar bit on the VAX, which didn't have paging, so it was used a lot more there. Could've been a different old beast, but it's at least a beast you don't expect to see working.
- Colonel Kernel
- Member
- Posts: 1437
- Joined: Tue Oct 17, 2006 6:06 pm
- Location: Vancouver, BC, Canada
- Contact:
Re:Memory hogging in an OS without paging
JoeKayzA wrote: Question: Do they use paging hardware in Singularity?
Yes. AFAIK, they map everything into a single address space and the whole system runs in ring 0. It need not be identity-mapped, although I guess it probably is. They mark a few pages starting at 0 as not present to help trap null pointer dereferences, but I haven't heard of any other ways in which they use the MMU.
Top three reasons why my OS project died:
- Too much overtime at work
- Got married
- My brain got stuck in an infinite loop while trying to design the memory manager
- Pype.Clicker
- Member
- Posts: 5964
- Joined: Wed Oct 18, 2006 2:31 am
- Location: In a galaxy, far, far away
- Contact:
Re:Memory hogging in an OS without paging
A Microsoft researcher I once met was working on such a single-address-space protection mechanism. One of the things he used was modifying the 'present' mappings as threads were tunnelled from one component to another. E.g. as the thread leaves the 'application' context to run code from the TCP component, pages of the TCP component become mapped as present (they were previously marked absent so that the application cannot harm them), and as the thread executes the IP component, the TCP pages are hidden again and the IP pages revealed. When the thread finally returns to the application, everything comes back to "normal".
I dunno if they integrated that into Singularity or not, however.