Limine virtual memory mapping issues
Posted: Thu Jun 08, 2023 6:00 am
by Satomatic
So I've been writing a 64-bit kernel using the Limine bootloader. While trying to allocate pages for the VMM page table I've run into an issue: a lot of the memory which Limine marks as available in the memory map isn't mapped to a virtual address in the bootloader's page table, which obviously results in a page fault. It seems there are huge gaps of unmapped memory in the page table for ranges that the memory map marks as available.
If any of what I've said makes sense, does anyone have any ideas why this might be?
Thanks
Re: Limine virtual memory mapping issues
Posted: Thu Jun 15, 2023 12:07 pm
by Octocontrabass
It could be a bug in Limine, but first, how did you verify that Limine isn't mapping everything correctly?
The QEMU monitor has some built-in commands ("info mem" and "info tlb") that can dump the contents of the page tables. You can use your debugger to halt QEMU at your kernel's entry point and check the page tables to see if Limine's page tables are correct.
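For anyone else trying this, a typical session looks something like the below (the exact QEMU command line and image name are just an example; adjust to your setup):

```
qemu-system-x86_64 -cdrom myos.iso -monitor stdio
(qemu) info mem     # dumps the virtual address ranges mapped by the current page tables
(qemu) info tlb     # dumps virtual-to-physical translations, one page at a time
```

With `-monitor stdio` the monitor prompt shares your terminal; you can also pause the guest with `stop` before inspecting, so the page tables don't change under you.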
Re: Limine virtual memory mapping issues
Posted: Thu Jun 15, 2023 5:00 pm
by thewrongchristian
Satomatic wrote:So I've been writing a 64-bit kernel using the Limine bootloader. While trying to allocate pages for the VMM page table I've run into an issue: a lot of the memory which Limine marks as available in the memory map isn't mapped to a virtual address in the bootloader's page table, which obviously results in a page fault. It seems there are huge gaps of unmapped memory in the page table for ranges that the memory map marks as available.
If any of what I've said makes sense, does anyone have any ideas why this might be?
Thanks
Not familiar with Limine, other than what is documented here, but that file makes no mention of mapping the available memory.
It looks like available memory is requested using LIMINE_MEMMAP_REQUEST, which returns a response with a variable number of entries, each entry having a base, size, and type (one type being LIMINE_MEMMAP_USABLE).
Your kernel needs to map that, surely? Limine only says the following about MMU state:
At handoff, the kernel will be properly loaded and mapped with appropriate MMU permissions at the requested virtual memory address (provided it is at or above 0xffffffff80000000).
The page table as passed to the kernel is the Limine page table in bootloader-reclaimable memory.
Your kernel should make its own page table, in its own memory, and map physical memory how you see fit.
Re: Limine virtual memory mapping issues
Posted: Thu Jun 15, 2023 5:02 pm
by Octocontrabass
thewrongchristian wrote:Not familiar with Limine, other than what is documented here, but that file makes no mention of mapping the available memory.
Check the "entry memory layout" section.
Re: Limine virtual memory mapping issues
Posted: Thu Jun 15, 2023 5:10 pm
by thewrongchristian
Octocontrabass wrote:thewrongchristian wrote:Not familiar with Limine, other than what is documented here, but that file makes no mention of mapping the available memory.
Check the "entry memory layout" section.
So is that what the HHDM feature is? A direct map of the entirety of physical memory, à la the Linux direct map?
It's not very clear how that feature works. There is a virtual base address for the mapped region, but how is it sized?
And given that the bootstrap page table should be considered transient, why would you depend on such a feature in the first place?
Re: Limine virtual memory mapping issues
Posted: Thu Jun 15, 2023 5:21 pm
by Octocontrabass
thewrongchristian wrote:So is that what the HHDM feature is? A direct map of the entirety of physical memory, à la the Linux direct map?
Yes. Additionally, all usable memory is identity-mapped.
thewrongchristian wrote:It's not very clear how that feature works. There is a virtual base address for the mapped region, but how is it sized?
It's just identity-mapping with an offset. The size is all usable memory.
thewrongchristian wrote:And given that the bootstrap page table should be considered transient, why would you depend on such a feature in the first place?
Why not? The bootloader tells you which memory to avoid overwriting while you're using the bootloader's page tables.
Re: Limine virtual memory mapping issues
Posted: Thu Jun 15, 2023 5:42 pm
by thewrongchristian
Octocontrabass wrote:
thewrongchristian wrote:And given that the bootstrap page table should be considered transient, why would you depend on such a feature in the first place?
Why not? The bootloader tells you which memory to avoid overwriting while you're using the bootloader's page tables.
Once the bootloader has done its job, the kernel is in control and owns the machine (SMIs and hypervisors notwithstanding).
I thought it was accepted practice not to rely on the environment passed to you by the bootloader? Copy what you need to preserve, but otherwise make your own environment.
My early forays into kernel mode were hampered because I was relying on the multiboot GDT, which worked when QEMU loaded my multiboot kernel, but failed horribly on real hardware booted with GRUB.
Re: Limine virtual memory mapping issues
Posted: Thu Jun 15, 2023 5:58 pm
by Octocontrabass
thewrongchristian wrote:I thought it was accepted practice to not rely on environment passed to you by the bootloader?
Only when the bootloader doesn't guarantee what that environment will be.
thewrongchristian wrote:My early forays into kernel mode were hampered because I was relying on the multiboot GDT, which worked when QEMU loaded my multiboot kernel, but failed horribly on real hardware booted with GRUB.
The Multiboot specification says you can't rely on the GDT. The Limine specification tells you exactly what will be in the GDT.
Re: Limine virtual memory mapping issues
Posted: Sat Jun 17, 2023 12:09 pm
by Satomatic
Octocontrabass wrote:It could be a bug in Limine, but first, how did you verify that Limine isn't mapping everything correctly?
The QEMU monitor has some built-in commands ("info mem" and "info tlb") that can dump the contents of the page tables. You can use your debugger to halt QEMU at your kernel's entry point and check the page tables to see if Limine's page tables are correct.
I've already used "info mem" and "info tlb"; it seems most of the memory is virtually mapped: framebuffer, kernel code, etc. But huge chunks of memory which are supposed to be usable aren't mapped.
I've rewritten my code and don't get page faults any more; I'm not sure what I changed to make it work better, but I do now get general protection faults. However, that seems to be a whole separate issue.
Re: Limine virtual memory mapping issues
Posted: Sat Jun 17, 2023 12:11 pm
by Satomatic
And also, for what it's worth, these page faults occur before I load CR3, so I'm certain it's the Limine page table.
Re: Limine virtual memory mapping issues
Posted: Sat Jun 17, 2023 3:12 pm
by Octocontrabass
Satomatic wrote:But huge chunks of memory which are supposed to be usable aren't mapped.
Either you're misinterpreting the memory map or you're overwriting Limine's page tables.
The page tables are located in bootloader-reclaimable memory, so you can't use any of that memory until you set up your own page tables.
Re: Limine virtual memory mapping issues
Posted: Sat Jun 17, 2023 3:58 pm
by Satomatic
So I've been able to write my own page table which does fully map memory. Checking "info tlb" afterwards shows a much larger and more complete map. I believe Limine has its own reasons for not virtually mapping a lot of the space, perhaps to keep the page tables small; after all, it doesn't need to map much more than the kernel, the framebuffer, and various other memory areas.
I believe the issue I was having was to do with my physical memory allocator; more specifically, I was accidentally writing over the bitmap and not checking whether the memory being allocated was actually readable/writable. I rewrote the code and looked over some other GitHub projects using Limine to get a rough idea of how the VMM works.
Thank you guys for the help. I'm new to 64-bit osdev and am still learning and understanding some concepts regarding 4-level paging.