senaus wrote:
I'm redesigning the address space support in my kernel, and I'm wondering which is the best/most popular option for tracking memory regions. Discuss!

-----BEGIN GEEK CODE BLOCK-----
Version: 3.1
GCS/M/MU d- s:- a--- C++++ UL P L++ E--- W+++ N+ w++ M- V+ PS+ Y+ PE- PGP t-- 5- X R- tv b DI-- D+ G e h! r++ y+
-----END GEEK CODE BLOCK-----
senaus wrote:
I'm redesigning the address space support in my kernel, and I'm wondering which is the best/most popular option for tracking memory regions. Discuss!

I tend to use the flags in page table entries as much as possible. For "present" pages there are at least 3 available bits that are combined to give one of several page types.
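This bit-packing can be sketched as follows. The layout and names are invented, not Brendan's actual code; the only hardware facts assumed are that bit 0 of an x86 PTE is the Present flag, and that bits 9-11 of a present entry are ignored by the MMU and therefore free for the OS. A not-present entry leaves every bit except Present to the OS, which is where a swap block number can live:

```c
#include <assert.h>
#include <stdint.h>

#define PTE_PRESENT   0x001u              /* bit 0: Present flag        */
#define PTE_AVL_SHIFT 9                   /* bits 9-11: available to OS */
#define PTE_AVL_MASK  (0x7u << PTE_AVL_SHIFT)

/* Store a software-defined page type in the available bits of a
 * present PTE (3 bits, so up to 8 types). */
static inline uint32_t pte_set_type(uint32_t pte, uint32_t type)
{
    return (pte & ~PTE_AVL_MASK) | ((type << PTE_AVL_SHIFT) & PTE_AVL_MASK);
}

static inline uint32_t pte_get_type(uint32_t pte)
{
    return (pte & PTE_AVL_MASK) >> PTE_AVL_SHIFT;
}

/* Encode a swap block number in a not-present entry: shift left by
 * one so the Present bit (bit 0) stays clear. */
static inline uint32_t pte_swapped(uint32_t block)
{
    return block << 1;
}

static inline uint32_t pte_swap_block(uint32_t pte)
{
    return pte >> 1;
}
```

With only 31 bits left after the Present bit, block numbers top out near 2^31, which is where the "(2^31 - 3) * 4 KiB" (almost 8192 GiB) figure quoted later comes from once a few low values are reserved as markers for other not-present states.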
Brendan wrote:
All other values mean "not present page that was sent to swap", and contain the "block number" in swap space. For 32-bit paging this gives me a maximum (combined) swap space size of "(2^31 - 3) * 4 KiB", or almost 8192 GiB of swap space. For PAE and long mode, page table entries are 64-bit and the maximum amount of swap space is insanely huge (it's almost 34359738368 TiB).

I don't think swap space size is the biggest issue with the above scheme. I think the biggest issue is that if you need to bring a page in from swap for a read, but it's never modified, you end up writing it back into swap anyway if you don't keep track of the swap block when the page is present. In a sense every page is then dirty (assuming you map pages on demand; if not, then any non-zero page is always dirty).
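The fix mystran is hinting at, remembering the swap block while the page is resident, can be sketched as a single predicate (names invented, not from either poster's kernel):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NO_SWAP_BLOCK UINT32_MAX  /* page has no copy in swap */

/* If the swap block a resident page came from is still recorded,
 * a clean page can simply be discarded on eviction: the copy in
 * swap is already up to date, so no write-back is needed. */
static bool eviction_needs_writeback(bool dirty, uint32_t swap_block)
{
    /* Clean and still backed by a valid swap copy: just free the frame. */
    if (!dirty && swap_block != NO_SWAP_BLOCK)
        return false;
    /* Dirty, or never written to swap: must write it out first. */
    return true;
}
```

Without the recorded block, the first case can never be taken, which is exactly the "every page is then dirty" degradation described above.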
mystran wrote:
I don't think swap space size is the biggest issue with the above scheme. I think the biggest issue is that if you need to bring a page in from swap for a read, but it's never modified, you end up writing it back into swap anyway if you don't keep track of the swap block when the page is present. In a sense every page is then dirty (assuming you map pages on demand; if not, then any non-zero page is always dirty).

This would be a pretty big problem... In my MM design (yet to be implemented, for lack of time) ...
mystran wrote:
Once you have a way to remember swap blocks for in-memory pages, you can use the same method for doing memory mapped files. If several processes map the same file (and you allow this) you'll (in a sane implementation) end up having shared memory as well, which allows shared libraries as a special case (though allowing CoW as well can simplify them).

I think the lack of shared memory and shared library support in BCOS is a feature, not a bug (Brendan, correct me if I'm wrong). Like Singularity, it is designed as a "sealed process architecture" (although with traditional hardware protection, unlike Singularity).
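mystran's observation, that several mappings of the same file naturally become shared memory, falls out of keying a page cache by (file, page index). A toy sketch with invented names (a real implementation would add locking, bounds checks and eviction):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define CACHE_SLOTS 64

struct cached_page {
    int      file_id;  /* which file                */
    uint32_t index;    /* page number within the file */
    void    *frame;    /* physical frame backing it  */
};

static struct cached_page cache[CACHE_SLOTS];
static size_t cache_used = 0;

/* Return the frame backing (file_id, index), "faulting it in"
 * (here: just recording a new entry) on first use.  Because the
 * lookup is keyed by file and offset, two processes mapping the
 * same file page get the same frame back, so writes by one are
 * visible to the other: shared memory falls out for free. */
static void *file_page_frame(int file_id, uint32_t index)
{
    for (size_t i = 0; i < cache_used; i++)
        if (cache[i].file_id == file_id && cache[i].index == index)
            return cache[i].frame;

    static uint8_t frames[CACHE_SLOTS][4096];  /* stand-in for RAM */
    cache[cache_used] = (struct cached_page){ file_id, index,
                                              frames[cache_used] };
    return cache[cache_used++].frame;
}
```

A private (CoW) mapping would instead map the shared frame read-only and copy it on the first write fault, which is the simplification for shared libraries mentioned in the quote.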
mystran wrote:
Anyway, I remember having read that Windows NT style for the in-memory swap address problem is to use a separate "shadow page directory" with the swap addresses. No idea about the details though, so I could have understood it totally wrong.

I wouldn't really call it a "shadow page directory"... actually, the PFDB in my kernel is based on NT's. You might be thinking of "prototype PTEs", which are like "shadow PTEs", I guess. They're used to help keep PTEs that point to shared pages consistent. I never really understood the details... I'm not planning to implement shared memory either.
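For what it's worth, the consistency problem prototype PTEs address can be modeled roughly like this. All names are invented and this is only a guess at the mechanism, not NT's actual structures:

```c
#include <assert.h>
#include <stdint.h>

/* One prototype entry per shared page. */
struct proto_pte {
    int      present;   /* 1 = in RAM, 0 = paged out                */
    uint32_t location;  /* frame number or swap block, accordingly  */
};

/* A per-process software PTE for a shared page defers to the
 * prototype instead of holding the location itself, so when the
 * pager moves the page (say, out to swap) it updates one prototype
 * rather than chasing every process's page tables. */
struct shared_vpte {
    struct proto_pte *proto;
};

static uint32_t shared_page_location(const struct shared_vpte *v)
{
    return v->proto->location;
}
```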
Brendan wrote:
All other values mean "not present page that was sent to swap", and contain the "block number" in swap space. For 32-bit paging this gives me a maximum (combined) swap space size of "(2^31 - 3) * 4 KiB", or almost 8192 GiB of swap space. For PAE and long mode, page table entries are 64-bit and the maximum amount of swap space is insanely huge (it's almost 34359738368 TiB).

mystran wrote:
I don't think swap space size is the biggest issue with the above scheme. I think the biggest issue is that if you need to bring a page in from swap for a read, but it's never modified, you end up writing it back into swap anyway if you don't keep track of the swap block when the page is present. In a sense every page is then dirty (assuming you map pages on demand; if not, then any non-zero page is always dirty).

Inside the kernel there are "kernel modules". One of these keeps track of page usage (a "page usage manager"?). It tells the linear memory manager which pages to evict from RAM (and which pages are "high usage" and should be corrected if a process is migrated from one NUMA node to another, but that's another story).
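Brendan doesn't say how this page usage manager picks victims, so as a stand-in, here is the classic second-chance ("clock") policy such a module might use: sweep the frames in a circle, clearing each referenced bit, and evict the first frame whose bit is already clear.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define NFRAMES 8

static bool referenced[NFRAMES];  /* set by the page-fault/MMU path */
static size_t hand = 0;           /* current clock-hand position    */

static size_t choose_victim(void)
{
    for (;;) {
        if (!referenced[hand]) {
            size_t victim = hand;           /* bit already clear: evict */
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        referenced[hand] = false;           /* give it a second chance */
        hand = (hand + 1) % NFRAMES;
    }
}
```

Recently touched frames survive one full sweep, which approximates LRU cheaply enough to run from a periodic kernel thread.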
mystran wrote:
Once you have a way to remember swap blocks for in-memory pages, you can use the same method for doing memory mapped files. If several processes map the same file (and you allow this) you'll (in a sane implementation) end up having shared memory as well, which allows shared libraries as a special case (though allowing CoW as well can simplify them).

A file can be opened as read-only any number of times. If a file is opened as read/write then (conceptually) a virtual copy of the file is created that is atomically written to the file system as a new version of the old file when the file is closed. The same file can be opened as read/write any number of times to create any number of new versions of that file. For legacy file systems (file systems that don't support versioning, like FAT) the OS refuses to allow the file to be opened as read/write if it's already opened as read-only or read/write, and refuses to allow it to be opened as read-only if it's already opened as read/write.
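The legacy-file-system rule in that last sentence reduces to a small predicate. A sketch with invented names:

```c
#include <assert.h>
#include <stdbool.h>

enum open_mode { OPEN_READ_ONLY, OPEN_READ_WRITE };

/* Given the counts of existing opens on a file, decide whether a new
 * open is allowed on a non-versioning file system: read-only opens
 * may share with each other, but a read/write open is exclusive in
 * both directions. */
static bool may_open(int readers, int writers, enum open_mode mode)
{
    if (mode == OPEN_READ_ONLY)
        return writers == 0;              /* refuse if a writer exists */
    return readers == 0 && writers == 0;  /* read/write is exclusive   */
}
```

On a versioning file system neither check is needed, since each writer works on its own virtual copy that becomes a new version on close.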
On second thought, there are multiple stacks. Allocate them like dynamic arrays. Provide no guarantees about the address space on stacks other than that ESP and EBP will be adjusted accordingly. Re-address stack pages and adjust ESP and EBP if an address space collision occurs. Guarantee dynamic array addresses except across allocations.

For this reason the dynamic arrays are placed in the middle of the (64-bit) address space and the stack at the top (0xFFFF_FFFF_FFFF_FFFF). This makes it impossible to achieve a stack overflow condition with rationally chosen program maximums (which are tested on allocation of real resources, and exist to detect overflows, infinite recursion, etc.).
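The re-addressing step implies rebasing ESP/EBP by the relocation delta; a minimal sketch with an invented name:

```c
#include <assert.h>
#include <stdint.h>

/* If a stack's pages are re-addressed from old_base to new_base after
 * an address space collision, ESP, EBP and any other saved pointers
 * into that stack must be shifted by the same delta. */
static uintptr_t rebase_ptr(uintptr_t p, uintptr_t old_base,
                            uintptr_t new_base)
{
    return p - old_base + new_base;
}
```

Pointers stored on the stack itself (saved frame pointers, spilled addresses of locals) would still dangle after such a move, which is presumably why the scheme guarantees nothing about stack addresses beyond ESP and EBP being adjusted.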