Hi,
I would like to ask someone experienced about paging in long mode. I have implemented a small MMU and PMM; everything went fine, but I get a weird page fault at address 0xFFFFFFFF80000000, which is the start of my higher-half kernel (I used Limine to boot it). Can anyone help me?
Here is my project: https://github.com/NeonLightions/susOS, MMU is in kernel/arch/x86_64/mmu.c and PMM is in kernel/misc/pmm.c. Thank you very much for reading my post!
Keep getting Page Fault at address 0xFFFFFFFF80000000
-
- Member
- Posts: 5562
- Joined: Mon Mar 25, 2013 7:01 pm
Re: Keep getting Page Fault at address 0xFFFFFFFF80000000
My crystal ball says you're mapping the wrong pages for your kernel.
Use "info mem" and "info tlb" in the QEMU monitor to check the page tables. Check both before and after you set CR3 to see the difference between the bootloader's page tables and your page tables.
Use "info mem" and "info tlb" in the QEMU monitor to check the page tables. Check both before and after you set CR3 to see the difference between the bootloader's page tables and your page tables.
Re: Keep getting Page Fault at address 0xFFFFFFFF80000000
Octocontrabass wrote:My crystal ball says you're mapping the wrong pages for your kernel.
Use "info mem" and "info tlb" in the QEMU monitor to check the page tables. Check both before and after you set CR3 to see the difference between the bootloader's page tables and your page tables.
I have tried using "info mem" and "info tlb" as you said, and then I realized: it didn't fault when switching PML4, it faulted when the CPU executed these lines of code (in kernel/arch/x86_64/mmu.c):
Code: Select all
uint64_t addr = HIGH_MAP_REGION;
uint64_t kernel_addr = kernel_base;
for (; addr < (uint64_t) mmu_map_from_physical(kernel_end); addr += PAGE_SIZE)
{
mmu_set_page(kernel_pml, addr, kernel_addr, PAGE_PRESENT | PAGE_WRITABLE);
kernel_addr += PAGE_SIZE;
}
Re: Keep getting Page Fault at address 0xFFFFFFFF80000000
passerby wrote:It didn't fault when switching PML4, it faulted when CPU executed these lines of code: (in kernel/arch/x86_64/mmu.c)
Step through it in a debugger and find out which instruction is causing the fault. It may be an issue quite deep down the call stack in your PMM. If you've got a page-fault handler that can tell you about the error, it's worth checking the saved RIP value and taking a look at the instructions at and around that location; this is probably easier than single-stepping, but only if you have the handler working.
passerby wrote:And can I know how does Limine map higher half kernel?
Parse the paging structures it uses; you can find them the same way the CPU does.
Re: Keep getting Page Fault at address 0xFFFFFFFF80000000
Barry wrote:Step it in a debugger, find out which instruction is causing the fault.
Thanks for your advice, but can I ask a question? When I was reading the Limine PROTOCOL.md, I found a request tag called Higher Half Direct Map. What does that mean? I'm confused: the RIP points to 0xffffffff80000000, but the HHDM tag's offset field contains 0xffff800000000000. Can you explain this to me?
Re: Keep getting Page Fault at address 0xFFFFFFFF80000000
It looks like you are using a virtual address for kernel_pml. It needs to be a physical address.
Re: Keep getting Page Fault at address 0xFFFFFFFF80000000
passerby wrote:And can I know how does Limine map higher half kernel?
Limine can tell you both the physical and virtual address where the kernel is loaded.
passerby wrote:When I was reading the Limine PROTOCOL.md, I found a request tag called Higher Half Direct Map, what is that mean?
Limine maps all memory twice: once at the physical address (identity mapped), and once in the higher half. In the higher half, all memory is mapped at a fixed offset between physical and virtual addresses, so you can easily convert between the two. The HHDM tag tells you that offset.