What do you think of trying to simplify things by assigning a process 8, 16, or even 32 or 64 megabytes of memory at once when it starts running?
I can't think of many other easy ways of assigning memory for things like disk access and, MOST important, of reserving a predictable area for the application's GUI (what about applications with many child windows and floating toolbars?)...
So, what do you think, wouldn't it be prudent? Doesn't GUI data always take up a huge amount of memory? And the disk data as well?
What do you think about allocating 16 to 32 MB per process?
- Kevin McGuire
- Member
- Posts: 843
- Joined: Tue Nov 09, 2004 12:00 am
- Location: United States
- Contact:
I think you have lost track of what you really want to ask. Either you are repeating this question:
http://www.osdev.org/phpBB2/viewtopic.php?t=13703 (Memory Usage Question)
Or:
You are not providing us with the details of how your GUI and disk handling work.
Basically the intention is to keep a pre-allocated area for a given application that would hold space for everything: the stack, the GUI data, the disk data, and anything else that needs a "handler" and requires memory for the process itself.
I know that if each process took up 64 MB preallocated there would be too much wasted memory, so I would prefer 8 MB, AND use on-disk virtual memory for the most part, so that in most situations the application's memory usage stays at most at 50%. The heaviest consumers would be the disk buffers, maybe the network, and above all the GUI (just think about having to parse an enormous HTML document from the web with lots of images, dynamically created and removed layers of objects, and so on).
I don't want to use things like linked lists, at least for now, so as to avoid memory fragmentation, which I consider too much to handle for my work, which is barely past an incipient stage.
By using on-disk virtual memory I could swap as needed, so that a single application can do more things without too many problems.
With these basic ideas, I see that if I have some 256 MB of RAM and every application invariably takes up 8 MB, I could have up to 32 applications running simultaneously, which I guess is more than virtually anyone runs at once while keeping decent system speed.
I think your method would end up being more difficult, and definitely more limited, than just using normal dynamic memory allocation.
Why are you worried about fragmentation or linked lists? Linked lists are simple, and you're probably going to need them in quite a few parts of your operating system. And unless I'm misunderstanding, why does it matter if your physical memory is "fragmented"?
I don't see the point of pre-allocating physical memory. You can just as well pre-allocate virtual memory (as much as you want, if you don't care about keeping promises, or as much as you have swap space, if you do) and then map pages in on demand. That means you can have a region for the stack (say 1 MB?), but if the user never uses more than 4 kB of stack, only a single page gets mapped in.
Even if you didn't really do swapping, it makes sense to not map in pages before the program actually touches them. You don't even need to do much of anything: just mark into the page tables that at some address the page is allocated but not yet present. When that page is touched, in the page fault handler allocate a real, physical page, zero it, and map it in. That's it.
You can safely let processes allocate as much memory as you have physical pages that aren't needed by the kernel. If you let them allocate more (on the basis that they probably won't need it all) and run out of pages (and can't swap to disk), you just kill a random process (with some heuristics to avoid killing the most important ones) or panic, whichever you prefer.
Now, it makes sense to give out pages on demand even if you only allow safe allocation and don't swap to disk, because any pages that haven't been needed yet are free for use by things like buffer caches; any clean page can be freed at any time, so you don't need to count those as used.
Memory fragmentation is a non-issue when dealing with physical pages in a virtual memory environment. As for avoiding linked lists: even if you fear your malloc will cause your kernel heap to become fragmented, you can always preallocate the list nodes and store unused nodes in a freelist. You can't easily use the cells for any other purpose that way, but when you allocate a large number of them at once, there's no more fragmentation than if you had allocated an array.
The real problem with goto is not with the control transfer, but with environments. Properly tail-recursive closures get both right.
Andrew275 wrote:Why are you worried about fragmentation or linked lists? Linked lists are simple, and you're probably going to need them in quite a few parts of your operating system. And unless I'm misunderstanding, why does it matter if your physical memory is "fragmented"?
I think the fragmentation issue here is potential heap fragmentation from allocating nodes for the linked lists. I don't think that's such a huge issue. If the number of list nodes doesn't vary much, one could preallocate them in larger chunks in order to get them all in one place. If the number varies wildly, then the large number of small free holes can probably be at least partially coalesced into larger blocks.
Then again, stuff like kernel heap fragmentation probably only starts being an issue once the system runs longer than a few hours (or days) at a time. As long as there are regular reboots, any fragmentation you might have accumulated gets fixed by the next reboot anyway.
If fragmentation is a problem even after short periods of use, there's probably some allocation pattern that hits the weak points of the malloc being used. One can then fix either the pattern or the malloc. Until such a problem is identified, I wouldn't bother trying to fix one.