Enabling compiler optimizations ruins the kernel

Questions about which tools to use, bugs, the best way to implement a function, etc. should go here. Don't forget to see if your question is answered in the wiki first! When in doubt post here.
devc1
Member
Posts: 439
Joined: Fri Feb 11, 2022 4:55 am
Location: behind the keyboard

Re: Enabling compiler optimizations ruins the kernel

Post by devc1 »

When I enabled compiler warnings, it now shows a mess of messages, lol.

Code: Select all

inc\CPU/process.h(66): warning C4255: 'IoWait': no function prototype given: converting '()' to '(void)'    
inc\Management/device.h(25): warning C4820: '_DEVICE_OBJECT': '4' bytes padding added after data member 'Present'
inc\Management/device.h(37): warning C4820: '_DEVICE_OBJECT': '4' bytes padding added after data member 'DeviceType'
inc\Management/device.h(44): warning C4820: '_DEVICE_OBJECT': '4' bytes padding added after data member 'PciAccessType'
inc\Management/device.h(48): warning C4820: '_DEVICE_OBJECT': '4' bytes padding added after data member 'DeviceInitialized'
inc\interrupt_manager/idt.h(159): warning C4255: 'GlobalInterruptDescriptorInitialize': no function prototype given: converting '()' to '(void)'
inc\interrupt_manager/idt.h(160): warning C4255: 'GlobalInterruptDescriptorLoad': no function prototype given: converting '()' to '(void)'
inc\interrupt_manager/idt.h(161): warning C4255: 'remap_pic': no function prototype given: converting '()' to '(void)'
inc\CPU/cpu.h(296): warning C4255: 'GetCurrentProcessorId': no function prototype given: converting '()' to '(void)'
inc\CPU/cpu.h(297): warning C4255: '__getCR2': no function prototype given: converting '()' to '(void)'     
inc\CPU/cpu.h(299): warning C4255: '__getRFLAGS': no function prototype given: converting '()' to '(void)'  
src/kinit.c(8): warning C4255: 'FirmwareControlRelease': no function prototype given: converting '()' to '(void)'
src/kinit.c(13): warning C4255: 'KernelHeapInitialize': no function prototype given: converting '()' to '(void)'
src/kinit.c(67): warning C4255: 'KernelPagingInitialize': no function prototype given: converting '()' to '(void)'
src/kinit.c(101): warning C4242: '=': conversion from 'unsigned int' to 'volatile unsigned short', possible loss of data
src/kinit.c(122): warning C4255: '__KernelRelocate': no function prototype given: converting '()' to '(void)'
src/kinit.c(128): warning C4133: '<=': incompatible types - from 'void *' to 'char *'
src/kinit.c(154): warning C4255: 'InitFeatures': no function prototype given: converting '()' to '(void)'   
src/kinit.c(158): warning C4255: 'KeInitOptimizedComputing': no function prototype given: converting '()' to '(void)'

... (∞+ Warnings)
Octocontrabass
Member
Posts: 5563
Joined: Mon Mar 25, 2013 7:01 pm

Re: Enabling compiler optimizations ruins the kernel

Post by Octocontrabass »

Sounds like you have a lot of issues you need to fix.

Most of those are C4255, and that's an easy fix: declare functions that take no arguments using "(void)" instead of "()".
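For example, using IoWait from your warning list (just a sketch; the real return type may differ):

Code: Select all

/* Before: "()" means "unspecified arguments" in C, which triggers C4255. */
void IoWait();

/* After: "(void)" explicitly says the function takes no arguments. */
void IoWait(void);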
devc1
Member
Posts: 439
Joined: Fri Feb 11, 2022 4:55 am
Location: behind the keyboard

Re: Enabling compiler optimizations ruins the kernel

Post by devc1 »

Another warning I get has something to do with Spectre mitigations:

Code: Select all

C:\Users\loukd\Desktop\__OS\kernel\src\fs\fs.c(93) : warning C5045: Compiler will insert Spectre mitigation for memory load if /Qspectre switch specified
C:\Users\loukd\Desktop\__OS\kernel\src\fs\fs.c(92) : note: index 'i' range checked by comparison on this line
C:\Users\loukd\Desktop\__OS\kernel\src\fs\fs.c(93) : note: feeds call on this line
C:\Users\loukd\Desktop\__OS\kernel\src\fs\fs.c(180) : warning C5045: Compiler will insert Spectre mitigation for memory load if /Qspectre switch specified
The warnings are on these lines:

Code: Select all

92 - for (UINT64 i = 0; i < UNITS_PER_LIST; i++) {
93 -            if (List->Files[i].Open && List->Files[i].PathLength == Len &&
94 -                wstrcmp_nocs(List->Files[i].Path, Path, Len)
95 -                ) {
96 -                return &List->Files[i];
97 -            }
98 -        }
I have millions of other warnings :(
It will be a hard time fixing them.
Octocontrabass
Member
Posts: 5563
Joined: Mon Mar 25, 2013 7:01 pm

Re: Enabling compiler optimizations ruins the kernel

Post by Octocontrabass »

C5045 tells you which code will become slower if you use /Qspectre. You can disable that warning if you're not using /Qspectre or if you don't care how /Qspectre impacts performance.
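If you do want to silence it, you can add /wd5045 to your compiler flags, or suppress it per file with MSVC's warning pragma (a minimal sketch):

Code: Select all

/* Suppress C5045 for this translation unit only. */
#pragma warning(disable : 5045)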
devc1
Member
Posts: 439
Joined: Fri Feb 11, 2022 4:55 am
Location: behind the keyboard

Re: Enabling compiler optimizations ruins the kernel

Post by devc1 »

I disabled some warnings, like the Spectre mitigations, struct padding, and unreferenced function parameters. Now the majority of the warnings are:

Code: Select all

warning C4334: '<<': result of 32-bit shift implicitly converted to 64 bits (was 64-bit shift intended?)
warning C4189: 'ListHead': local variable is initialized but not referenced
warning C4245: '=': conversion from 'int' to 'unsigned char', signed/unsigned mismatch
warning C4189: 'Buff': local variable is initialized but not referenced
warning C4242: 'initializing': conversion from 'UINT64' to 'UINT', possible loss of data
warning C4706: assignment within conditional expression
warning C4057: 'function': '__int64 *' differs in indirection to slightly different base types from 'UINT64 *'
It doesn't seem so hard to fix them.
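For example, these are the kinds of changes those warnings usually call for (sketches only; the names and types here are made up for illustration):

Code: Select all

typedef unsigned long long UINT64;

UINT64 MakeMask(unsigned int BitIndex)
{
    /* C4334: shift a 64-bit constant so the shift is done in 64 bits. */
    return 1ULL << BitIndex;
}

unsigned char NarrowToByte(int Value)
{
    /* C4242/C4245: make the narrowing/sign conversion explicit with a cast. */
    return (unsigned char)Value;
}

int ReadAll(int (*GetNext)(void))
{
    int Sum = 0, Next;
    /* C4706: compare the assignment's result explicitly instead of using it bare. */
    while ((Next = GetNext()) != 0) {
        Sum += Next;
    }
    return Sum;
}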
devc1
Member
Posts: 439
Joined: Fri Feb 11, 2022 4:55 am
Location: behind the keyboard

Re: Enabling compiler optimizations ruins the kernel

Post by devc1 »

I have finally fixed all the warnings at level 4.
When I enable global optimizations it still triple faults, but the compiler keeps showing this warning on almost every function:

Code: Select all

warning C4711: function 'XXX' selected for automatic inline expansion
Edit: it turns out that general optimizations enable Inline Function Expansion (when they are disabled, auto inlining is also disabled).
I disabled Inline Function Expansion and it still triple faults, so the issue really lies with general optimizations.

This is my command line:

Code: Select all

set CFLAGS= /O2 /Gr /GS- /wd4710 /wd4213 /Wall /wd4820 /wd4200 /wd4152 /wd4100 /wd5045 /wd4189 /wd4702 /wd4255 /Ilib /I../libc/drv/inc /I../UEFI/gnu-efi/inc /I../UEFI/gnu-efi/inc/x86_64 /Iinc /Ilib
%COMPILE% "msvcrt.lib" "libvcruntime.lib" %srcfiles% %OBJFILES% /Fo:x86_64/ %CFLAGS% /Fe:oskrnlx64.exe /LD /link /OPT:LBR,REF /DLL /MACHINE:x64 /NODEFAULTLIB /SUBSYSTEM:native /ENTRY:KrnlEntry /FIXED:no /DYNAMICBASE /LARGEADDRESSAWARE

Octocontrabass
Member
Posts: 5563
Joined: Mon Mar 25, 2013 7:01 pm

Re: Enabling compiler optimizations ruins the kernel

Post by Octocontrabass »

devc1 wrote:When I enable global optimizations it still triple faults, but the compiler keeps showing this warning on almost every function:
You can disable warning C4711 if you don't care about the compiler automatically inlining functions.
devc1 wrote:I disabled Inline Function Expansion and it still triple faults, so the issue really lies with general optimizations.
The issue is with your code. Have you figured out where in your code the triple fault occurs?
devc1 wrote:This is my command line:
It looks like you're linking against Microsoft's C runtime. Are you sure that's a good idea? It may not work correctly in a freestanding environment.
devc1
Member
Posts: 439
Joined: Fri Feb 11, 2022 4:55 am
Location: behind the keyboard

Re: Enabling compiler optimizations ruins the kernel

Post by devc1 »

The triple fault occurs when I call GetBezierPoint(). Even after removing the call, the triple fault occurs on other things; I tried removing everything in the kernel entry point and just executing this, and it still triple faults.

What do you mean by linking against the C runtime?

The _SSE/AVX_ComputeBezier functions don't even get called.

I can make my OS public on GitHub for some time so you can check?

Removing "msvcrt" and "libvcruntime" which are C++ libraries result in undefined references (memset, memcpy...). I will try and create them myself in assembly
Octocontrabass
Member
Posts: 5563
Joined: Mon Mar 25, 2013 7:01 pm

Re: Enabling compiler optimizations ruins the kernel

Post by Octocontrabass »

devc1 wrote:The triple fault occurs when I call GetBezierPoint(). Even after removing the call, the triple fault occurs on other things; I tried removing everything in the kernel entry point and just executing this, and it still triple faults.
Okay. Which faults occur that lead up to the triple fault? Where do those faults occur? QEMU with "-d int" can help you here. I'm sure you have some equivalent of objdump and addr2line you could use as well.
devc1 wrote:What do you mean by linking against C runtime ?
I mean msvcrt.lib and libvcruntime.lib.
devc1 wrote:I can make my OS public for some time so you can check?
Okay, but you still have to learn how to debug it yourself.
devc1 wrote:Removing "msvcrt" and "libvcruntime", which are Microsoft's C/C++ runtime libraries, results in undefined references (memset, memcpy...). I will try to create them myself in assembly.
I would share mine, but I wrote them using a mix of C and GCC inline assembly, so you won't be able to use them.
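For reference, the shape of a freestanding memset in plain C is roughly this (only a sketch; note that with optimizations enabled some compilers recognize this loop and turn it back into a call to memset, which is one reason people end up writing these in assembly):

Code: Select all

#include <stddef.h>

/* Minimal freestanding memset: fill n bytes at dest with the byte value c. */
void *memset(void *dest, int c, size_t n)
{
    unsigned char *p = (unsigned char *)dest;
    while (n--) {
        *p++ = (unsigned char)c;
    }
    return dest;
}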
devc1
Member
Posts: 439
Joined: Fri Feb 11, 2022 4:55 am
Location: behind the keyboard

Re: Enabling compiler optimizations ruins the kernel

Post by devc1 »

I already made them with NASM and rep stos/movs/cmps, and now there are no undefined references.
Now I'll try to debug this thing.
After a quick look (not really debugging):
my assembly functions show different results with and without optimization. For example, _SSE_AllocatePhysicalPage returns this:
Without /O2: 0x12C4000
With /O2: 0x1356000006000 (which seems like a bogus result), and some extra output also shows up from SystemDebugPrint.

Octocontrabass, here is the repo (a mess of code that maybe only I understand): https://github.com/NXTdevosc1/__OS

Remember, fork it fast and steal my AHCI driver, FAT32 driver, EHCI driver, kernel and everything :) I am kidding; I will set it back to private after a few hours.

kernel/src contains the whole kernel; you will not need anything else.

I will update it now.
devc1
Member
Posts: 439
Joined: Fri Feb 11, 2022 4:55 am
Location: behind the keyboard

Re: Enabling compiler optimizations ruins the kernel

Post by devc1 »

I will add this:

The kernel boots up into KrnlEntry, which is inside src/KRNLHDR.asm. KrnlEntry just sets up some control registers and Execute Disable support, then jumps into the entry point _start() in src/kernel.c.

There is nothing wrong with GetBezierPoint; every other function is also messed up by these optimizations.

I still haven't coded my new memory manager and IPC.

The EFI bootloader is in EfiBoot/EfiEntry.c,
the legacy BIOS bootloader is in LEGACY/,
the FAT32 DLL is in drivers, and the image-creation program is in LEGACY/imgsetup.c.
Octocontrabass
Member
Posts: 5563
Joined: Mon Mar 25, 2013 7:01 pm

Re: Enabling compiler optimizations ruins the kernel

Post by Octocontrabass »

devc1 wrote:There is nothing wrong with GetBezierPoint; every other function is also messed up by these optimizations.
It sounds like there is something wrong with GetBezierPoint, and every other function too: you're not following the ABI.

I looked at one function you wrote in assembly and saw that it modifies RSI and RDI. You must restore those registers to their original values before you return to the caller.
devc1
Member
Posts: 439
Joined: Fri Feb 11, 2022 4:55 am
Location: behind the keyboard

Re: Enabling compiler optimizations ruins the kernel

Post by devc1 »

I think my memset/memcpy/memcmp implementations should also restore these registers. I will try that now.

I completely forgot about RSI and RDI.
devc1
Member
Posts: 439
Joined: Fri Feb 11, 2022 4:55 am
Location: behind the keyboard

Re: Enabling compiler optimizations ruins the kernel

Post by devc1 »

You're right: just restoring RDI and RSI in memset, memcpy, and memcmp fixed the random output. Now I will have to do that for the other functions!
devc1
Member
Posts: 439
Joined: Fri Feb 11, 2022 4:55 am
Location: behind the keyboard

Re: Enabling compiler optimizations ruins the kernel

Post by devc1 »

I also found out that the triple fault at GetBezierPoint is not caused by the call to the function itself, but by this:

Code: Select all

VCPU shutdown request
x/10i $eip
0x00156e74:  0f 29 74 24 20           movaps   %xmm6, 0x20(%rsp)
0x00156e79:  44 8b c3                 movl     %ebx, %r8d
0x00156e7c:  48 8b cf                 movq     %rdi, %rcx
0x00156e7f:  49 c1 e0 02              shlq     $2, %r8
0x00156e83:  0f 28 f3                 movaps   %xmm3, %xmm6
0x00156e86:  e8 63 8f ff ff           callq    0x14fdee
0x00156e8b:  8b 05 7b 21 01 00        movl     0x1217b(%rip), %eax
0x00156e91:  85 c0                    testl    %eax, %eax
0x00156e93:  75 1c                    jne      0x156eb1
It's the movaps (a global optimization that uses aligned instructions when pointers are expected to be aligned): movaps faults if its memory operand isn't 16-byte aligned, and the stack pointer (RSP) wasn't 16-byte aligned. I tried adding 8 bytes to the stack address (in KrnlEntry) and it worked.

So what causes this stack misalignment? Is it also the ABI?

The stack is reserved with 4096-byte alignment in NASM.

I updated the repo so you guys could check.

Edit:
Well, instead of calling _start, I jump to it. The compiler expects a setup like this:
[image: the stack layout the compiler expects at function entry]
which is unaligned, because a call would have pushed an 8-byte return address on top of the 16-byte-aligned stack. I will just add 8 to RSP and then jump to the function. Thanks!