I started my OS a little while back; so far I can only output strings and numbers (both hex and decimal). Recently, though, I began
setting up a GDT so I can eventually get memory management and interrupts going. I was trying to understand the code from Bran's
tutorial that encodes a GDT entry. I'll post it below, although I'm sure many of you have seen it before:
Code:
entry->base_low = (base & 0xFFFF);            /* bits 0-15 of base */
entry->base_mid = (base >> 16) & 0xFF;        /* bits 16-23 of base */
entry->base_high = (base >> 24) & 0xFF;       /* bits 24-31 of base */
entry->limit_low = (limit & 0xFFFF);          /* bits 0-15 of limit */
entry->granularity = ((limit >> 16) & 0x0F);  /* bits 16-19 of limit in the low nibble */
entry->granularity |= (gran & 0xF0);          /* flag bits in the high nibble */
entry->access = access;
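For reference, the poster's gdt_entry_t isn't shown; a hypothetical definition consistent with the field names above and with the byte order the CPU expects (following Bran's tutorial layout) would look like this. The __attribute__((packed)) part is a GCC/Clang extension that stops the compiler from inserting padding between the fields, which would otherwise corrupt the 8-byte descriptor layout:

```c
#include <stdint.h>

/* Hypothetical layout for gdt_entry_t; the actual definition in the
   poster's code may differ. Field order matches the x86 descriptor:
   limit 0-15, base 0-15, base 16-23, access, limit 16-19 + flags,
   base 24-31. */
typedef struct {
    uint16_t limit_low;
    uint16_t base_low;
    uint8_t  base_mid;
    uint8_t  access;
    uint8_t  granularity;
    uint8_t  base_high;
} __attribute__((packed)) gdt_entry_t;
```

If padding sneaks in (e.g. the attribute is missing), sizeof(gdt_entry_t) will no longer be 8 and the descriptors the CPU reads won't line up with the fields being written.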
When I did the computation by hand, using the values that were passed to the function, I got different values
than what the entries actually contain.
For example, here is the call I computed by hand (where KERNEL_CODE_SEG is 0x9A):
Code:
encode(0x0, 0xFFFFFFFF, KERNEL_CODE_SEG, 0xCF, &gdt[1]);
The function is declared as:
Code:
static void encode(uint64_t base, uint64_t limit, uint8_t access, uint8_t gran, gdt_entry_t * entry)
Code:
base_low    = 0x0      <- matches what the function returned
base_mid    = 0x0      <- matches what the function returned
base_high   = 0x0      <- matches what the function returned
limit_low   = 0xFFFF
granularity = 0xCF     <- the function returns 0xFFCF
access      = 0x9A     <- the function returns 0xFC9A