Will my flat memory design approach work out?
Posted: Mon Apr 28, 2014 4:41 am
Hey guys!
My kernel is getting a little more advanced now, and I wanted to hear a few opinions about my memory design (which is the only part that is quite different from most approaches I see around here). First thing to know: my kernel is supposed to run a virtual machine. That means I don't need process security, so all my processes run in ring 0 (which also gives me better portability to other architectures later).
When it comes to memory management, I do this:
- There is a physical memory manager that gives me free page-aligned chunks for whatever I need. This manager only hands out memory above ~16MB (you'll read further on about why) and learns what is available from the GRUB memory map. A sketch of such an allocator follows this list.
- There is a virtual memory manager, which implements paging. It creates one global directory that is shared by all processes. The reason for this is that I want all my processes to share the same memory and thus have direct access to (synchronized) memory areas.
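To give an idea of the first point, here is a simplified sketch of such a physical memory manager (not my exact code; the free-frame stack and names like pmm_init/pmm_alloc_frame are just illustrative):

Code:
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE     0x1000
#define PMM_LOW_LIMIT 0x01000000  /* hand out nothing below ~16 MB */

/* Layout of one GRUB (Multiboot 1) memory map entry. */
typedef struct {
    uint32_t size;   /* size of this entry, NOT counting this field */
    uint64_t addr;   /* start of the region */
    uint64_t len;    /* length of the region */
    uint32_t type;   /* 1 = available RAM */
} __attribute__((packed)) mmap_entry_t;

#define MAX_FRAMES 65536          /* enough for 256 MB of 4 KB frames */
static uint32_t free_stack[MAX_FRAMES];
static size_t   free_top;

static void pmm_push(uint32_t frame) {
    if (free_top < MAX_FRAMES)
        free_stack[free_top++] = frame;
}

/* Walk the Multiboot memory map and record every available,
 * page-aligned frame above the 16 MB limit. */
void pmm_init(mmap_entry_t *mmap, uint32_t mmap_length) {
    uint8_t *p   = (uint8_t *)mmap;
    uint8_t *end = p + mmap_length;
    while (p < end) {
        mmap_entry_t *e = (mmap_entry_t *)p;
        if (e->type == 1) {  /* available RAM */
            uint64_t a = (e->addr + PAGE_SIZE - 1) & ~(uint64_t)(PAGE_SIZE - 1);
            for (; a + PAGE_SIZE <= e->addr + e->len; a += PAGE_SIZE)
                if (a >= PMM_LOW_LIMIT)
                    pmm_push((uint32_t)a);
        }
        p += e->size + sizeof(e->size);  /* 'size' excludes itself */
    }
}

/* Hand out one free page-aligned frame, or 0 if we ran dry. */
uint32_t pmm_alloc_frame(void) {
    return free_top ? free_stack[--free_top] : 0;
}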
My memory is then split into the following areas:
LOWER MEMORY 0x00000000 -> 0x00100000:
The lower memory megabyte is identity-mapped. A so-called "lower memory allocator" operates in the free space there and gives me, for example, the stacks for the VM86 processes.
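Something along these lines (a simplified sketch; the exact free window below 1 MB and the names are just placeholders):

Code:
#include <stdint.h>

#define LOWMEM_START 0x00010000  /* assumed free conventional-memory window */
#define LOWMEM_END   0x0007FFFF  /* stay well clear of the EBDA/video area */

static uint32_t lowmem_next = LOWMEM_START;

/* Bump-allocate a paragraph(16-byte)-aligned block usable as a VM86
 * stack; returns 0 when the low-memory window is exhausted. */
uint32_t lowmem_alloc(uint32_t size) {
    uint32_t addr = (lowmem_next + 15) & ~15u;  /* real-mode friendly */
    if (addr + size > LOWMEM_END)
        return 0;
    lowmem_next = addr + size;
    return addr;
}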
KERNEL MEMORY 0x00100000 -> ca. 0x01500000:
This memory is also identity-mapped. It contains the kernel binary and the ramdisk modules, and there are (currently) ~16MB reserved as a kernel heap. This kernel heap is managed by the "kernel memory allocator", which gives me the kernel stacks for processes. The end of this area is calculated from a linker symbol (endKernel) + size of modules + size of kernel heap.
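The calculation looks roughly like this (a sketch; the heap-size constant is just the current value):

Code:
#include <stdint.h>

extern uint8_t endKernel;                   /* placed by the linker script */

#define KERNEL_HEAP_SIZE (16 * 1024 * 1024) /* ~16 MB, as described above */

uintptr_t kernel_area_end(uintptr_t modules_size) {
    uintptr_t end = (uintptr_t)&endKernel + modules_size + KERNEL_HEAP_SIZE;
    /* round up to the next page boundary */
    return (end + 0xFFF) & ~(uintptr_t)0xFFF;
}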
HEAP AREA ca. 0x01500000 -> whatever is requested:
This is where the global heap starts. The area from here on is virtual-mapped and grows every time sbrk() wants more heap: the sbrk syscall asks the physical memory manager for another page and appends it, so the heap stays one contiguous memory area.
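Roughly like this (a simplified sketch; pmm_alloc_frame() and vmm_map_page() stand in for my physical and virtual managers):

Code:
#include <stdint.h>

#define PAGE_SIZE 0x1000

extern uint32_t pmm_alloc_frame(void);                      /* physical manager */
extern void     vmm_map_page(uint32_t virt, uint32_t phys); /* global directory */

static uint32_t heap_break  = 0x01500000; /* current program break */
static uint32_t heap_mapped = 0x01500000; /* first page not yet backed */

/* Grow the heap by 'increment' bytes and return the old break,
 * mimicking the classic sbrk() contract; returns 0 on failure. */
void *heap_sbrk(uint32_t increment) {
    uint32_t old_break = heap_break;
    heap_break += increment;

    /* Map fresh physical frames until the new break is backed. */
    while (heap_mapped < heap_break) {
        uint32_t frame = pmm_alloc_frame();
        if (!frame) {                      /* out of physical memory */
            heap_break = old_break;
            return 0;
        }
        vmm_map_page(heap_mapped, frame);
        heap_mapped += PAGE_SIZE;
    }
    return (void *)old_break;
}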
DMA AREA from 0xFFFFF000 growing downward:
Here I create virtual mappings for DMA. For example, if the VESA linear framebuffer wants to be at 0xFC000000, I call my function createDma(0xFC000000, sizeOfFrameBuffer). This function returns a virtual address that points to exactly that physical area, and I can write my pixels. The only con, imho: it's currently not dynamic, so a mapped DMA area cannot be unmapped. Is this a con? Also, I still have to add a check whether the physical addresses I want to access via DMA are already mapped as memory; if so, I have to copy those pages somewhere else and update the mapping (shouldn't be a problem, but might get slow because the entire directory must be scanned; just a little fiddling).
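The mapping part of createDma works roughly like this (a simplified sketch, without the already-mapped check and without error handling; vmm_map_page() is the same assumed helper as above):

Code:
#include <stdint.h>

#define PAGE_SIZE 0x1000

extern void vmm_map_page(uint32_t virt, uint32_t phys);

static uint32_t dma_top = 0xFFFFF000;  /* next free virtual page, grows down */

void *createDma(uint32_t phys, uint32_t size) {
    uint32_t offset = phys & (PAGE_SIZE - 1);        /* keep sub-page offset */
    uint32_t first  = phys - offset;                 /* page-align downward */
    uint32_t pages  = (size + offset + PAGE_SIZE - 1) / PAGE_SIZE;

    dma_top -= pages * PAGE_SIZE;                    /* reserve the window */
    for (uint32_t i = 0; i < pages; i++)
        vmm_map_page(dma_top + i * PAGE_SIZE, first + i * PAGE_SIZE);

    return (void *)(dma_top + offset);
}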
Now my question is mainly: will this work out for anything that might come? So far I haven't hit any case that conflicts with this model. DMA is possible, VM86 is possible (though the ~470KB of low memory might run out if somebody does weird VM86 calls, which shouldn't happen ^^; they are mainly for VESA).
Any suggestions / criticism / warnings / pointers to possible problems are greatly appreciated.
Thank you!