I'm getting very confused about how to manage my virtual address space. I've gotten my head wrapped around paging and mapping, and I'm now trying to figure out how to implement a malloc function that will allow me to create allocations that are less than 4KB in size.

As it stands now, I have a good physical memory manager based on a bitmap, and a good page fault handler. When something tries to read from, or write to, a virtual address that isn't mapped, the page fault handler will check the bitmap for a free page of physical memory, map it to the appropriate virtual address, and allow things to continue happily along their way. I want to get things set up so I can move into multitasking, and allow each process to have its own page directory so it'll have a unique virtual address space. I'm not really sure how to proceed with this. I've previously written a first-fit memory manager for a flat memory model that kept track of which address ranges were free using a linked list, and I thought that a virtual memory manager would be similar.

I'm getting conflicting information from multiple sources, and I'm hoping the forum can give me a definitive explanation of what's going on. Some sources are saying the kernel can't make allocations that are less than 4KB in size; others are saying that's what malloc is for. One place is telling me to set up a heap, and another is saying that heaps are an obsolete concept because we now have paging. I'm also quite confused about the concept of kmalloc, and what that's all about.
Here's my paging code.
Here's the header file for my paging code. The definitions at the bottom are functions I'm thinking about adding.
Here's a header I'm working on for my virtual memory manager.
I'm writing a higher half kernel for a 32-bit system with non-PAE paging. I followed the Higher Half Bare Bones Tutorial to get the higher half stuff set up, and when the kernel runs my paging initialization function I switch things over to 4KB paging. I set up one page directory and one page table for the kernel, set up the bitmap, and set up recursive mapping. Since I'm programming for a 32-bit system I know I'll have a theoretical maximum of 4GB of physical memory, and I used that as the upper limit for the size of my bitmap. The page directory and page table live in the physical memory right after my kernel, and the bitmap is set up in the physical memory right after the page table.
Here's how the virtual memory mappings for my kernel are set up:
0xC000 0000 => PD[768] = mappings for the kernel, kernel page directory, and kernel page table.
0xFFB0 0000 => PD[1022], PT[768] = mappings for the bitmap for the physical memory manager.
0xFFC0 0000 => PD[1023] = recursive mappings.
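Since the setup above leans on recursive mapping, here's a small sketch (helper names invented for illustration, not from my code) of the address arithmetic a recursive PD[1023] slot gives you: every page table becomes visible in the top 4 MiB of the address space, and the directory itself at 0xFFFFF000.

```c
#include <stdint.h>

/* With PD[1023] pointing back at the page directory itself, the paging
 * structures map themselves. These helpers compute the virtual address of
 * the PTE/PDE that controls any given virtual address. */

#define RECURSIVE_SLOT 1023u

/* Virtual address of the page-table entry that maps `vaddr`. */
static inline uint32_t pte_vaddr(uint32_t vaddr)
{
    uint32_t pd_index = vaddr >> 22;           /* top 10 bits    */
    uint32_t pt_index = (vaddr >> 12) & 0x3FFu; /* middle 10 bits */
    return (RECURSIVE_SLOT << 22) | (pd_index << 12) | (pt_index * 4u);
}

/* Virtual address of the page-directory entry that maps `vaddr`. */
static inline uint32_t pde_vaddr(uint32_t vaddr)
{
    uint32_t pd_index = vaddr >> 22;
    return (RECURSIVE_SLOT << 22) | (RECURSIVE_SLOT << 12) | (pd_index * 4u);
}
```

For example, the PDE for the kernel mapping at 0xC0000000 (PD[768]) comes out at 0xFFFFF000 + 768*4.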
What I've been considering doing for the virtual memory manager is creating a linked list to keep track of which virtual addresses are free. Each node on the list would store a pointer to a free virtual address, and a value telling the virtual memory manager how large the free section is. Whenever something calls malloc I would iterate over the list to find a portion of the virtual address space large enough for the allocation, manipulate the list to keep it current, and return a pointer to the beginning of the allocation. I've already done something similar (before I got into paging) for a flat memory model, and the code for that can be seen here.

The linked list would occupy some random location in physical memory, but be mapped to 0xFF80 0000. As the virtual addresses get fragmented, more "free" nodes would be added to the list, and it would "grow" towards the bitmap's virtual address range. Nodes of adjacent free addresses would occasionally be coalesced to limit the growth of the list. I am slightly concerned that as the virtual address space becomes fragmented the list could clobber the bitmap by growing into its virtual address range, but there are ways to prevent this from happening. Basically, the list would occupy random pages of physical memory as needed, but use a contiguous range of virtual addresses.

Each process would have a separate page directory, separate list, and separate recursive mapping, but the kernel and bitmap mappings would remain constant for every page directory. By doing this each process would be spawned with its own virtual address space that was unique to itself, and could make arbitrarily small allocations of virtual address space on demand. No pages of physical memory would be mapped until something actually tried to write to a virtual address, and then the page fault handler would take care of it.
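As a rough illustration of the linked-list scheme described above (not actual code from this thread - the node layout and names are made up), a first-fit allocation over such a free list might look like this:

```c
#include <stddef.h>
#include <stdint.h>

/* Each node records the start of a free virtual range and its length in
 * pages. A real version would keep the nodes in the dedicated region at
 * 0xFF800000; here they can live anywhere so the idea is testable. */

struct free_node {
    uint32_t start;           /* first free virtual address of the run */
    uint32_t pages;           /* length of the run, in 4 KiB pages     */
    struct free_node *next;
};

/* First fit: carve `pages` pages out of the first large-enough run.
 * Returns the allocated virtual address, or 0 on failure. */
static uint32_t vmm_alloc(struct free_node **head, uint32_t pages)
{
    for (struct free_node **link = head; *link; link = &(*link)->next) {
        struct free_node *n = *link;
        if (n->pages < pages)
            continue;
        uint32_t addr = n->start;
        n->start += pages * 4096u;
        n->pages -= pages;
        if (n->pages == 0)
            *link = n->next;  /* run fully consumed: unlink the node */
        return addr;
    }
    return 0;                 /* out of virtual address space */
}
```

Freeing would do the reverse: insert a node (or extend a neighbour) and coalesce adjacent runs, which is the compaction step mentioned above.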
I'm thinking of calling all of the virtual addresses below 0xC000 0000 "user space," and everything above that "kernel space." The kernel would manage the task switching by keeping some information about each process mapped in kernel space, and use that to determine which page directory (or virtual address space) to switch to once a process's time slice had expired.
I'm fairly sure this will work as I'm intending, but I'm worried that I'm missing some large concept related to virtual memory management. Does anyone have a better explanation of the concept? Are there any big "gotchas" I'm missing?
Confusion regarding VMMs, and allocations of less than 4KB
Re: Confusion regarding VMMs, and allocations of less than 4KB
Hi,
Let's start from the start. There are 3 completely different things - physical memory management, virtual memory management, and heap.

PatSanf wrote:
> I'm getting very confused about how to manage my virtual address space. [...] Some sources are saying the kernel can't make allocations that are less than 4KB in size, others are saying that's what malloc is for. One place is telling me to set up a heap, and another is saying that heaps are an obsolete concept because we now have paging. I'm also quite confused about the concept of kmalloc, and what that's all about.
Physical memory management includes allocating and freeing physical pages. For most allocations you don't care what the physical address is (because you're using paging anyway), but for some cases (device drivers that need buffers) there are restrictions on the physical address (e.g. maybe it's a 32-bit device and can't handle physical addresses above 4 GiB). This mostly just means you have several different ways to allocate physical page/s.
The virtual memory manager's job is to manage virtual address space/s. This includes creating and destroying virtual address spaces. A virtual address space is mostly a collection of areas of different types. One area might be "no access", another area might be "executable but not writable", another area might be "memory mapped file", another area might be "allocate on demand" and another area might be shared between 2 processes. In addition; the virtual memory manager is responsible for upholding the illusion that there's an "infinite" amount of RAM and that everything is in RAM. This means doing tricks like sending data to swap space when you're running low on actual RAM, fetching data from swap space and from files when they're accessed (to make it look like the data was in RAM all the time when it's not), etc. Of course (for efficiency and due to hardware limitations) areas of the virtual address space must be a multiple of the page size.
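To make the "collection of areas" idea concrete, here is one hypothetical representation - a table of typed ranges that the page fault handler can consult. The type names and lookup helper are invented for this sketch, not taken from any real VMM:

```c
#include <stdint.h>
#include <stddef.h>

/* A virtual address space as a table of typed, page-aligned areas. */
enum area_type {
    AREA_NO_ACCESS,
    AREA_EXEC_READ,           /* executable, not writable  */
    AREA_READ_WRITE,
    AREA_ALLOC_ON_DEMAND,
    AREA_MEMORY_MAPPED_FILE,
};

struct vm_area {
    uint32_t start;           /* page-aligned, inclusive   */
    uint32_t end;             /* page-aligned, exclusive   */
    enum area_type type;
};

/* The page fault handler asks: what kind of area does this address fall
 * in? An address outside every area is treated as "no access". */
static enum area_type area_lookup(const struct vm_area *areas, size_t count,
                                  uint32_t addr)
{
    for (size_t i = 0; i < count; i++)
        if (addr >= areas[i].start && addr < areas[i].end)
            return areas[i].type;
    return AREA_NO_ACCESS;
}
```

With this in place, a fault inside an AREA_ALLOC_ON_DEMAND range means "map a fresh page", while a fault in AREA_NO_ACCESS means "terminate the process" - the handler stops guessing.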
When a process is started, you'd create a new virtual address space, then your "executable loader" might load the executable's header and examine it, and ask for one part of the executable file to be memory mapped as "executable, not writeable" (corresponding to the executable's ".text" section) and another part of the file to be memory mapped as "not executable, writeable" (corresponding to the executable's ".data" section). The loader might also ask for a third area to be "allocate on demand" (corresponding to the executable's ".bss" section). Then it might ask for a fourth area (for the executable's stack). The remainder of the virtual address space will be left as "no access" to help catch bugs. Of course typically there's shared libraries that get loaded too, and they end up wanting more areas of the virtual address space.
The heap is something that belongs in the process - it has nothing to do with the kernel. Different processes (written in different languages) will have completely different approaches for heap. For example, one C program might use a standard library that includes malloc/free; another C program might implement its own "object pools" and have 10 different "allocObject" and "freeObject" functions and not use malloc/free at all; a garbage collected language might have "new" and garbage collection; something written in assembly might use the raw virtual memory without any heap; something written in a functional language might use lots of stack and not use a heap; etc.
For something like malloc/free; the process's startup code might ask the virtual memory manager for an initial "allocate on demand" area. For allocations it would try to find an unused part of the area and take a piece of it, and if it can't find an unused part of the area it'd ask the virtual memory manager to create a new "allocate on demand" area (or to increase the size of the existing "allocate on demand" area). If it runs out of virtual address space then it returns an error (e.g. "null"). For de-allocations it would remember that the piece being freed is now "unused" (so it can be recycled); and maybe if a lot of the "allocate on demand" area is unused it might ask the virtual memory manager to make part of the area "no access" again. The virtual memory manager just creates the "allocate on demand" areas and "no access" areas when it's told to, without knowing or caring which parts of those areas are used or what they're used for.
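A minimal sketch of that malloc side, with `grow_area` standing in for the call into the virtual memory manager (all names here are illustrative, and a real malloc would track freed pieces for reuse rather than just bumping a pointer):

```c
#include <stdint.h>
#include <stddef.h>

/* A heap handing out pieces of an "allocate on demand" area. When the
 * area is exhausted it asks the VMM (via the grow_area hook) for more. */
struct heap {
    uintptr_t next;           /* first unused byte of the area      */
    uintptr_t end;            /* end of the area granted so far     */
    int (*grow_area)(struct heap *h, size_t min_bytes); /* VMM hook */
};

static void *heap_alloc(struct heap *h, size_t bytes)
{
    bytes = (bytes + 15) & ~(size_t)15;   /* keep 16-byte alignment */
    if (h->end - h->next < bytes) {
        /* Out of area: ask the VMM to extend the "allocate on demand"
         * region. If it can't (or there's no hook), fail with NULL. */
        if (!h->grow_area || !h->grow_area(h, bytes))
            return NULL;
    }
    void *p = (void *)h->next;
    h->next += bytes;
    return p;
}
```

Note that nothing here touches physical memory - the VMM only adjusts area types, and physical pages arrive later, one page fault at a time.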
Now; the kernel itself is like a program - depending on how it's designed it might want kmalloc/kfree, it might want new and garbage collection, it might have 10 different "object managers", it might use the raw virtual memory manager "as is", etc. Whatever it happens to want (if anything) it has to implement itself.
You'll have to decide who is responsible for determining which areas of the virtual address space are used for what. For example; would a process ask the virtual memory manager to make 1234 pages at virtual address 0x12345678 become "allocate on demand"; or would a process ask the virtual memory manager to make 1234 pages at "wherever you like" become "allocate on demand"? For another example; would a process ask the virtual memory manager to make 1234 pages at virtual address 0x12345678 become "no access"; or would a process ask the virtual memory manager to make 1234 pages at "wherever you like" become "no access"?

PatSanf wrote:
> What I've been considering doing for the virtual memory manager is creating a linked list to keep track of which virtual addresses are free.
I personally think that the virtual memory manager should just do what it's been told to do; where whatever calls the virtual memory manager (e.g. the process' heap manager) tells the virtual memory manager which virtual address. For example, you might have a "change_area_type(void *start, unsigned int pages, int new_area_type)" function that is used to change the type of an arbitrary area of the virtual address space from whatever it used to be to whatever you want it to be now (including changing an area from "no access" to "allocate on demand" and also including changing an area from "allocate on demand" to "no access"); where the virtual memory manager never needs to find a suitable area because it's always told where.
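The change_area_type() idea above could be sketched like this over a toy per-page type table; a real kernel would update its area list or page-table entries instead, and the toy bounds are invented for the sketch:

```c
#include <stdint.h>
#include <stddef.h>

/* Toy address space: N_PAGES pages, one type byte per page.
 * 0 means "no access" by convention. */
#define N_PAGES   16
#define PAGE_SIZE 4096u

static uint8_t page_type[N_PAGES];

/* The caller decides the address; the VMM just obeys.
 * Returns 0 for a misaligned or out-of-range request, 1 on success. */
static int change_area_type(void *start, unsigned int pages, int new_type)
{
    uintptr_t first = (uintptr_t)start / PAGE_SIZE;
    if ((uintptr_t)start % PAGE_SIZE != 0 || first + pages > N_PAGES)
        return 0;
    for (unsigned int i = 0; i < pages; i++)
        page_type[first + i] = (uint8_t)new_type;
    return 1;
}
```

The point of the sketch is what's absent: there is no search for a free range anywhere, because the caller always says where.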
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Confusion regarding VMMs, and allocations of less than 4KB
Brendan wrote:
> When a process is started, you'd create a new virtual address space, then your "executable loader" might load the executable's header and examine it, and ask for one part of the executable file to be memory mapped as "executable, not writeable" (corresponding to the executable's ".text" section) and another part of the file to be memory mapped as "not executable, writeable" (corresponding to the executable's ".data" section). [...] The remainder of the virtual address space will be left as "no access" to help catch bugs.

I can understand the reasoning behind wanting this. Are the permissions for the various memory areas managed by the attributes of a page in the page tables? For example, if I want to load executable code into a specific page, would I leave the R/W bit unset so it can only be read from? What's the typical way to enforce these privileges when something tries to access memory through a virtual address?
Brendan wrote:
> The heap is something that belongs in the process - it has nothing to do with the kernel. Different processes (written in different languages) will have completely different approaches for heap. [...]

What's the heap for each process actually for? I think that's one concept I'm having trouble understanding. I'm stuck somewhere between the heap being an "allocate on demand" area, and confusion. Is it just a portion of memory allocated to a process that it can use for whatever it wants?
Brendan wrote:
> For something like malloc/free; the process's startup code might ask the virtual memory manager for an initial "allocate on demand" area. For allocations it would try to find an unused part of the area and take a piece of it, and if it can't find an unused part of the area it'd ask the virtual memory manager to create a new "allocate on demand" area (or to increase the size of the existing "allocate on demand" area). [...] Now; the kernel itself is like a program - depending on how it's designed it might want kmalloc/kfree [...] Whatever it happens to want (if anything) it has to implement itself.

What I keep coming back to is a setup where the kernel would use kmalloc to allocate a set amount of memory to a process when it initially spawns, which would include an "allocate on demand" area that malloc would work out of. When malloc had allocated all of the demand area it would then call kmalloc to have more memory allocated to it, and continue from there. I keep being told that this is wrong, and getting confused. The distinction between kmalloc and malloc is getting lost on me.
Brendan wrote:
> You'll have to decide who is responsible for determining which areas of the virtual address space are used for what.

This is something else that's confusing me. With the way I'm thinking about proceeding, each virtual address space would have the kernel mapped into it, and anything running in a unique virtual address space would be able to just read and write addresses in kernel space. I'm thinking that once I get into user mode, anything wanting to access kernel space would need to use a system call, or it would cause a page fault. Is this the case, or do I need something that tells a process it shouldn't attempt to access virtual addresses above a certain range?
Re: Confusion regarding VMMs, and allocations of less than 4KB
Hi,
PatSanf wrote:
> Are the permissions for the various memory areas managed by the attributes for a page on the page tables? For example, if I'm wanting to load executable code into a specific page would I leave the R/W bit unset so it can only be read from? What's the typical way to enforce these privileges when something tries to access memory through a virtual address?

Yes. Page table entries have a flag to control whether a page can be written to or not, and (for some CPUs that support it, which is all modern CPUs when using PAE or long mode) also have another bit that controls whether a page can be executed or not.
Also note that it's possible to use trickery to mimic permissions that the hardware doesn't actually support. For a simple example, when a page fault occurs (for an "allocate on demand" area) your page fault handler could check if the page fault was caused by EIP and (if it was) check if the area is supposed to be executable or not, and catch some (but not all) "executing non-executable data" bugs even though the CPU doesn't support the "no execute" page permission. Of course there's also much more advanced techniques (with higher overhead) that can be used to detect all "executing non-executable data" bugs even though the CPU doesn't support the "no execute" page permission (but you'd have to consider if it's worth the overhead in that case). In similar ways, with some trickery you could (e.g.) support "write only" pages (which probably isn't too useful, but it's still possible) even though 80x86 doesn't support it.
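The simple version of that EIP check might look like the following; `fault_info` is a made-up structure standing in for whatever registers your interrupt stub saves, and as noted above an exact compare catches only some cases (e.g. it misses an instruction straddling a page boundary):

```c
#include <stdint.h>

/* What the page fault handler sees (invented layout for this sketch). */
struct fault_info {
    uint32_t cr2;   /* faulting linear address                        */
    uint32_t eip;   /* saved instruction pointer at the time of fault */
};

/* If the faulting address IS the instruction pointer, the CPU was trying
 * to fetch an instruction from that address - i.e. an execute attempt.
 * If the area there isn't marked executable, the handler can kill the
 * process instead of demand-allocating a page. */
static int is_execute_attempt(const struct fault_info *f)
{
    return f->cr2 == f->eip;
}
```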
PatSanf wrote:
> What's the heap for each process actually for? I'm thinking that's one concept I'm having trouble understanding. I'm stuck somewhere between the heap being an allocate on demand area, and confusion. Is it just a portion of memory allocated to a process that it can use for whatever it wants?

All useful programs need to store data somewhere (and often it's a lot of different types of data all over the place for a large number of reasons). You can split data into 2 categories - either the compiler knows exactly how much space is needed at compile time and reserves space for it (e.g. in the ".data" or ".bss" section), which is called "statically allocated"; or you can't know how much memory will be needed (or if memory will be needed at all) until run-time (e.g. maybe it's user input, maybe it depends on the size of a file, etc), which is called "dynamically allocated".
For dynamically allocated memory, in some cases you can put it on the stack. This works fine if the data doesn't need to remain after the function returns and cleans up its stack. However, if the data does need to remain after the function returns you need some other way to dynamically allocate memory. That is what "heap" is for.
Of course you're dynamically allocating "virtual memory" (which may or may not be real/physical memory and can just be an illusion like "allocate on demand" instead).
PatSanf wrote:
> What I keep coming back to is a set up where the kernel would use kmalloc to allocate a set amount of memory to a process when it initially spawns that would include an "allocate on demand" area that malloc would work out of. When malloc had allocated all of the demand area it would then call kmalloc to have more memory allocated to it, and continue from there. [...]

The process's "malloc" should use the kernel's virtual memory manager to allocate more virtual memory (and should never be able to try to use the kernel's "kmalloc" without being terminated for attempting to violate security). The kernel's "kmalloc" should also use the kernel's virtual memory manager to allocate more virtual memory. There shouldn't be anything confusing about this.
PatSanf wrote:
> With the way I'm thinking about proceeding each virtual address space would have the kernel mapped to it, and anything running in a unique virtual address space would be able to just go read and write to addresses in kernel space. [...] Is this the case, or do I need to have something that tells a process that it shouldn't attempt to access virtual addresses above a certain range?

Page table entries (and page directory entries, etc) also have a "user or supervisor" flag that controls whether page/s can be accessed by CPL=3 code. This should be used to ensure that normal processes (running at CPL=3) can not access anything in kernel space.
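For reference, the relevant 80x86 page-table entry bits can be sketched as masks plus a small helper (the helper name is invented; the bit positions are the standard 32-bit non-PAE layout):

```c
#include <stdint.h>

/* Low flag bits of a 32-bit (non-PAE) page-table entry. */
#define PTE_PRESENT  0x001u
#define PTE_WRITABLE 0x002u  /* R/W: clear it for read-only .text pages */
#define PTE_USER     0x004u  /* U/S: clear it for kernel-only pages     */

/* Build a present PTE for the given page-aligned physical address.
 * Kernel pages get user=0, so CPL=3 code faults on any access. */
static uint32_t make_pte(uint32_t phys, int writable, int user)
{
    uint32_t pte = (phys & 0xFFFFF000u) | PTE_PRESENT;
    if (writable) pte |= PTE_WRITABLE;
    if (user)     pte |= PTE_USER;
    return pte;
}
```

Note the U/S bit must also be set in the page directory entry for the user mapping to work; the most restrictive of the PDE/PTE pair wins on these CPUs.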
Cheers,
Brendan
- eryjus
Re: Confusion regarding VMMs, and allocations of less than 4
I use kmalloc to allocate small bits of memory that will only be accessed from within the kernel. When I need to allocate larger amounts of memory, I interface with the VMM directly (such as when loading a program into memory to execute). This design is planned to include the runtime libraries, which will hold malloc (and similar functions). When more process memory is needed, the malloc heap manager will know where the ending address is and allocate memory in the next page -- interfacing with the VMM to get a frame for that location.
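The layering eryjus describes (kmalloc for small kernel-only allocations, the VMM for whole pages) can be sketched as a bump allocator that carves small chunks out of pages it requests from the VMM. Here vmm_alloc_page is a stand-in for the real "give me one mapped 4 KB kernel page" call, stubbed with malloc so the sketch runs in user space; all names are illustrative.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

static void *vmm_alloc_page(void)   /* stub for the real VMM call */
{
    return malloc(PAGE_SIZE);
}

static uint8_t *cur_page = NULL;
static size_t   cur_off  = 0;

/* Simplest possible kmalloc: bump-allocate inside the current page and
 * ask the VMM for a fresh page when it is exhausted.  (A real kernel
 * kmalloc would also track freed chunks for reuse.) */
void *kmalloc(size_t n)
{
    n = (n + 7) & ~(size_t)7;        /* 8-byte align */
    if (n > PAGE_SIZE)
        return NULL;                 /* large requests go to the VMM directly */
    if (!cur_page || cur_off + n > PAGE_SIZE) {
        cur_page = vmm_alloc_page();
        cur_off  = 0;
        if (!cur_page)
            return NULL;
    }
    void *p = cur_page + cur_off;
    cur_off += n;
    return p;
}
```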
Adam
The name is fitting: Century Hobby OS -- At this rate, it's gonna take me that long!
Read about my mistakes and missteps with this iteration: Journal
"Sometimes things just don't make sense until you figure them out." -- Phil Stahlheber
Re: Confusion regarding VMMs, and allocations of less than 4
I have not thought this through, so I am not sure whether this is the best solution (theoretically) or not. However, I like it for practical reasons. It goes without saying that handling physical and virtual memory is the foundation on which everything is built. If we want a system that is not useless, we cannot afford bugs in physical/virtual memory management. Writing bug-free code is hard, and the only realistic way to do it is to keep "units" (the exact definition is another story) small and simple. Having too many features in one unit (e.g. the virtual memory manager) will drastically decrease the chances of having bug-free code.

Brendan wrote:I personally think that the virtual memory manager should just do what it's been told to do; whatever calls the virtual memory manager (e.g. the process's heap manager) tells the virtual memory manager which virtual address to use.
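The "VMM just does what it's told" interface argued for above might look like the sketch below: the caller (e.g. a heap manager) names the exact virtual range and what it should be, and the VMM records it without choosing addresses or tracking usage. All types and names here are illustrative assumptions, not anything from the thread.

```c
#include <stddef.h>
#include <stdint.h>

typedef enum {
    VMM_NO_ACCESS,        /* unmapped; a fault here is a real error   */
    VMM_ALLOC_ON_DEMAND,  /* fault handler maps a zeroed frame        */
} vmm_region_type_t;

#define MAX_REGIONS 32

typedef struct {
    uintptr_t         start;   /* page-aligned virtual address */
    size_t            length;  /* bytes, multiple of 4096      */
    vmm_region_type_t type;
} vmm_region_t;

static vmm_region_t regions[MAX_REGIONS];
static int region_count = 0;

/* The caller tells the VMM exactly which range to mark and how. */
int vmm_set_region(uintptr_t start, size_t length, vmm_region_type_t type)
{
    if (region_count >= MAX_REGIONS || ((start | length) & 0xFFF))
        return -1;   /* table full or not page-aligned */
    regions[region_count++] = (vmm_region_t){ start, length, type };
    return 0;
}

/* What the page-fault handler would consult to decide whether to map
 * a frame on demand or terminate the offender. */
vmm_region_type_t vmm_lookup(uintptr_t addr)
{
    for (int i = 0; i < region_count; i++)
        if (addr >= regions[i].start &&
            addr <  regions[i].start + regions[i].length)
            return regions[i].type;
    return VMM_NO_ACCESS;
}
```

Keeping the VMM's job this narrow is exactly the small-and-simple "unit" argued for above: the heap's bookkeeping bugs stay in the heap.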