
Satomatic wrote:
So I've been writing a 64-bit kernel using the Limine bootloader. While trying to allocate pages for the VMM page tables, I've run into an issue: a lot of the memory that Limine marks as usable in the memory map isn't mapped to a virtual address in the bootloader's page tables, which obviously results in a page fault. There seem to be huge gaps of unmapped memory in the page tables that are marked as usable in the memory map.
If any of what I've said makes sense, does anyone have any ideas why this might be?
Thanks

Not familiar with Limine, other than what is documented here, but that file makes no mention of mapping the available memory.
The page table passed to the kernel is the Limine page table, which lives in bootloader-reclaimable memory. At handoff, the kernel will be properly loaded and mapped with appropriate MMU permissions at the requested virtual memory address (provided it is at or above 0xffffffff80000000).
thewrongchristian wrote:
Not familiar with Limine, other than what is documented here, but that file makes no mention of mapping the available memory.

Check the "entry memory layout" section.
Octocontrabass wrote:
Check the "entry memory layout" section.

So is that what the HHDM feature is? A direct map of the entirety of physical memory, à la the Linux direct map?
thewrongchristian wrote:
So is that what the HHDM feature is? A direct map of the entirety of physical memory, à la the Linux direct map?

Yes. Additionally, all usable memory is identity-mapped.
thewrongchristian wrote:
It's not very clear how that feature works. There is a virtual base address for the mapped region, but how is it sized?

It's just identity-mapping with an offset. The size is all usable memory.
thewrongchristian wrote:
And given that the bootstrap page table should be considered transient, why would you depend on such a feature in the first place?

Why not? The bootloader tells you which memory to avoid overwriting while you're using the bootloader's page tables.
Octocontrabass wrote:
Why not? The bootloader tells you which memory to avoid overwriting while you're using the bootloader's page tables.

Once the bootloader has done its job, the kernel is in control and owns the machine (SMIs and hypervisors notwithstanding).
thewrongchristian wrote:
I thought it was accepted practice not to rely on the environment passed to you by the bootloader?

Only when the bootloader doesn't guarantee what that environment will be.
thewrongchristian wrote:
My early forays into kernel mode were hampered because I was relying on the Multiboot GDT, which I could use when QEMU loaded my Multiboot kernel, but which failed horribly on real hardware loaded with GRUB.

The Multiboot specification says you can't rely on the GDT. The Limine specification tells you exactly what will be in the GDT.
Octocontrabass wrote:
It could be a bug in Limine, but first, how did you verify that Limine isn't mapping everything correctly?

I've already used "info mem" and "info tlb". It seems most of the memory is virtually mapped (framebuffer, kernel code, etc.), but huge chunks of memory that are supposed to be usable aren't mapped.
The QEMU monitor has built-in commands ("info mem" and "info tlb") that dump the contents of the page tables. You can use your debugger to halt QEMU at your kernel's entry point and check whether Limine's page tables are correct at that moment.
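For reference, one way to reach those monitor commands (the image name is a placeholder; -S starts the VM paused and -s exposes a GDB stub on port 1234, so you can break at the kernel entry point before inspecting anything):

```shell
# Start QEMU with the human monitor on stdio (substitute your own image).
qemu-system-x86_64 -cdrom mykernel.iso -s -S -monitor stdio

# At the (qemu) prompt, once execution has reached the kernel:
#   (qemu) info mem    # summarizes the mapped virtual address ranges
#   (qemu) info tlb    # lists virtual-to-physical translations page by page
```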
Satomatic wrote:
But huge chunks of memory that are supposed to be usable aren't mapped.

Either you're misinterpreting the memory map or you're overwriting Limine's page tables.