Enabling compiler optimizations ruins the kernel

Questions about which tools to use, bugs, the best way to implement a function, etc. should go here. Don't forget to see if your question is answered in the wiki first! When in doubt, post here.
rdos
Member
Posts: 3296
Joined: Wed Oct 01, 2008 1:55 pm

Re: Enabling compiler optimizations ruins the kernel

Post by rdos »

devc1 wrote:What does a DLL have to do with system calls?

A DLL (dynamic link library) is just like a static library that you link into your programs using ld, except that it is not linked at compile time; it is linked by the OS at runtime.

How is that beneficial? Well, it solves a lot of problems: security updates, syscall changes and other kernel changes would otherwise make existing apps unusable. Instead of recompiling all the apps, I can just edit and recompile the DLL in my new OS version and every app will keep working as before.

Notice how the NT kernel changes its tables, system calls and functions in almost every version. Without this indirection, applications would become unusable on the new version.
I cannot see any advantage in Windows' way of handling syscalls. They define the API as a set of DLLs that export some given functions. For one, this means they must implement every function they ever shipped, otherwise applications will get unresolved imports. This bloats the system, given that many old functions may no longer be used. My syscalls return with CY (the carry flag set, indicating an error) if a syscall is not supported, so there is no need for any static code handling obsolete syscalls.
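The idea of failing unsupported syscalls instead of keeping stubs forever can be sketched in C. This is a hypothetical illustration, not actual RDOS code: the CY flag is modeled as an error return value, and the table contents (`sys_getpid`, the return value 42) are invented for the example.

```c
#include <assert.h>
#include <stddef.h>

#define ERR_UNSUPPORTED (-1L)  /* stands in for "return with CY set" */

typedef long (*syscall_fn)(long arg);

/* Invented example syscall for illustration. */
static long sys_getpid(long arg) { (void)arg; return 42; }

/* Table of implemented syscalls; a NULL slot marks a retired number. */
static syscall_fn syscall_table[] = { sys_getpid, NULL };

static long dispatch(size_t num, long arg)
{
    /* Out-of-range or retired syscalls fail gracefully, so no
       permanent stub code is needed for every API ever shipped. */
    if (num >= sizeof syscall_table / sizeof syscall_table[0] ||
        syscall_table[num] == NULL)
        return ERR_UNSUPPORTED;
    return syscall_table[num](arg);
}
```

The trade-off versus the DLL approach: old binaries still run, but they must check the error path at runtime instead of failing to load with an unresolved import.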
rdos
Member
Posts: 3296
Joined: Wed Oct 01, 2008 1:55 pm

Re: Enabling compiler optimizations ruins the kernel

Post by rdos »

devc1 wrote:This is sooo inefficient according to my optimizing plans.

Why? Because system calls are slow.
How am I fixing it? These DLLs will contain heap management, IPC and other features implemented in user space, and if a program decides to corrupt them it will only damage its own process and not the OS.
Heap management is generally a libc issue, not something you put in DLLs.
devc1 wrote: for e.g. instead of a syscall at every malloc, the DLL will contain the tables (at an isolated area) and if it needs more pages then it will syscall.
Why would you want to implement malloc with syscalls? This should be linked into the application as part of the runtime library.
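The design rdos describes, where malloc runs entirely in user space and only asks the kernel for fresh pages when the current region is exhausted, might be sketched like this. `sys_alloc_pages` is a hypothetical syscall, stubbed here with malloc so the sketch runs anywhere; the bump allocator is deliberately minimal (no free, no reuse).

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

/* Hypothetical page-granular syscall; modeled with malloc so the
   sketch runs in ordinary user space. A real kernel would map pages. */
static void *sys_alloc_pages(size_t npages)
{
    return malloc(npages * PAGE_SIZE);
}

static char *heap_cur = NULL;
static char *heap_end = NULL;

/* Bump allocator: the common path is pure user-space pointer math;
   the syscall happens only when the current region is exhausted. */
static void *tiny_malloc(size_t size)
{
    size = (size + 15) & ~(size_t)15;  /* 16-byte alignment */
    if (heap_cur == NULL || (size_t)(heap_end - heap_cur) < size) {
        size_t npages = (size + PAGE_SIZE - 1) / PAGE_SIZE;
        heap_cur = sys_alloc_pages(npages);
        if (heap_cur == NULL)
            return NULL;
        heap_end = heap_cur + npages * PAGE_SIZE;
    }
    void *p = heap_cur;
    heap_cur += size;
    return p;
}
```

This is essentially how libc allocators behave: most calls never enter the kernel, and only region growth (brk/mmap on Unix) is a syscall.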
devc1 wrote: Why a syscall at every IPC Send/Get request when we can just share some pages between processes and communicate without system calls.
Doesn't seem to be related to DLLs. You provide syscalls to create shared memory areas and then can create whatever IPC mechanisms you think are useful in shared memory.
nexos
Member
Posts: 1081
Joined: Tue Feb 18, 2020 3:29 pm
Libera.chat IRC: nexos

Re: Enabling compiler optimizations ruins the kernel

Post by nexos »

devc1 wrote:Why a syscall at every IPC Send/Get request when we can just share some pages between processes and communicate without system calls.
That's not possible. You're gonna need to context switch at some point during an IPC request no matter what you do. You'll also need syscalls to adjust page mappings so you can share the pages in the first place.
"How did you do this?"
"It's very simple — you read the protocol and write the code." - Bill Joy
Projects: NexNix | libnex | nnpkg
devc1
Member
Posts: 439
Joined: Fri Feb 11, 2022 4:55 am
Location: behind the keyboard

Re: Enabling compiler optimizations ruins the kernel

Post by devc1 »

rdos: you are just repeating what I said; the process will do a syscall only if it needs more pages of memory.

For IPC: my design is to have multiple 64-entry message lists for each connection (or process/thread), where the sender can allocate a message slot with the bsf instruction on an UnallocatedMessageBitmap, and the receiver can likewise find pending messages with "bsfq r64/m64" on a PendingMessageBitmap.
The connection will happen with a syscall, and the kernel will allocate the shared IPC resources.
Thus this design can be asynchronous to a degree, and of course you can build synchronous communication on top of it at the same time.
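A single-threaded sketch of the bitmap scheme described above; the bitmap names come from the post, but the logic is my own guess at the intent. `__builtin_ctzll` compiles to `bsf`/`tzcnt` on x86-64, matching the "bsfq r64/m64" idea. A real cross-process version would need atomic operations (e.g. `lock bts` or compare-and-swap) rather than the plain read-modify-write shown here.

```c
#include <assert.h>
#include <stdint.h>

/* Conventions assumed from the post: a set bit in the unallocated
   bitmap means the slot is free; a set bit in the pending bitmap
   means a message is waiting in that slot. */
static uint64_t UnallocatedMessageBitmap = ~0ULL;  /* all 64 slots free */
static uint64_t PendingMessageBitmap = 0;

/* Sender side: claim the lowest free slot, or -1 if the queue is
   full (the caller would then have to block or retry). */
static int alloc_message_slot(void)
{
    if (UnallocatedMessageBitmap == 0)
        return -1;
    int slot = __builtin_ctzll(UnallocatedMessageBitmap);
    UnallocatedMessageBitmap &= ~(1ULL << slot);
    PendingMessageBitmap |= 1ULL << slot;  /* would follow the payload write */
    return slot;
}

/* Receiver side: take the lowest pending message and free its slot. */
static int fetch_message_slot(void)
{
    if (PendingMessageBitmap == 0)
        return -1;  /* nothing pending */
    int slot = __builtin_ctzll(PendingMessageBitmap);
    PendingMessageBitmap &= ~(1ULL << slot);
    UnallocatedMessageBitmap |= 1ULL << slot;
    return slot;
}
```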
devc1
Member
Posts: 439
Joined: Fri Feb 11, 2022 4:55 am
Location: behind the keyboard

Re: Enabling compiler optimizations ruins the kernel

Post by devc1 »

iansjack wrote:
devc1 wrote:for e.g. instead of a syscall at every malloc, the DLL will contain the tables (at an isolated area) and if it needs more pages then it will syscall.
Does any OS use system calls for a (user process) malloc? Other than extending heap space, if it runs out, why would you need a system call for this?
This was just curiosity; however, I figured it out myself. So what I said was right: I will only syscall if I need more pages.
Last edited by devc1 on Wed Sep 28, 2022 2:21 pm, edited 2 times in total.
iansjack
Member
Posts: 4703
Joined: Sat Mar 31, 2012 3:07 am
Location: Chichester, UK

Re: Enabling compiler optimizations ruins the kernel

Post by iansjack »

Nice as it is to try and rewrite history to make yourself look better, you actually said:
instead of a syscall at every malloc
That just doesn’t happen.
AndrewAPrice
Member
Posts: 2300
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: Enabling compiler optimizations ruins the kernel

Post by AndrewAPrice »

iansjack wrote:Enabling all warnings is a good start. A quick Google will find other tools.
Turning on warnings is great for your own code. Warnings aren't so great when compiling third party libraries ported to my OS.
My OS is Perception.
kzinti
Member
Posts: 898
Joined: Mon Feb 02, 2015 7:11 pm

Re: Enabling compiler optimizations ruins the kernel

Post by kzinti »

AndrewAPrice wrote:Turning on warnings is great for your own code. Warnings aren't so great when compiling third party libraries ported to my OS.
Then just turn them on for your own code?
nexos
Member
Posts: 1081
Joined: Tue Feb 18, 2020 3:29 pm
Libera.chat IRC: nexos

Re: Enabling compiler optimizations ruins the kernel

Post by nexos »

devc1 wrote:

Code: Select all

STATUS Iansjack(){
    listentoIansjack();
    Understood();
    IansjackSaysYouAreTryingToLookGood();
    return ERROR_HE_IS_RIDICULOUS;
    // I'm kidding.. :)
}
Really? iansjack is head and shoulders above us all in knowledge; you'd better listen to him. It's quite insulting for a rookie OSDev'er to talk to one of the most senior forum members that way. I know I would treat what he says with respect.
"How did you do this?"
"It's very simple — you read the protocol and write the code." - Bill Joy
Projects: NexNix | libnex | nnpkg
nexos
Member
Posts: 1081
Joined: Tue Feb 18, 2020 3:29 pm
Libera.chat IRC: nexos

Re: Enabling compiler optimizations ruins the kernel

Post by nexos »

devc1 wrote:My design is to have multiple 64 message entry lists
Whoa, arbitrary limits like this are just asking for trouble.

And your design doesn't specify how processes block to wait for incoming messages, or block to wait for the receiver to be ready. That has to be done with syscalls.
"How did you do this?"
"It's very simple — you read the protocol and write the code." - Bill Joy
Projects: NexNix | libnex | nnpkg
nullplan
Member
Posts: 1790
Joined: Wed Aug 30, 2017 8:24 am

Re: Enabling compiler optimizations ruins the kernel

Post by nullplan »

AndrewAPrice wrote:Turning on warnings is great for your own code. Warnings aren't so great when compiling third party libraries ported to my OS.
What, you don't have any standards for third-party code added to your OS?
Carpe diem!
devc1
Member
Posts: 439
Joined: Fri Feb 11, 2022 4:55 am
Location: behind the keyboard

Re: Enabling compiler optimizations ruins the kernel

Post by devc1 »

I just want to know why this is impossible; I can't really understand your arguments.

It is just a simple addition which, after rethinking it, seems less useful than I first thought.

The addition is a message queue on which you can (if the send request is async) post multiple messages and then optionally block (do you mean a task switch by that?). This can be faster in some situations, such as the Window Manager sending screen and cursor changes that happen at the same time.

This mechanism is used on most devices.

The thing will be simply like this:

Code: Select all

struct _IPC_MESSAGE_QUEUE
{
    // Bitmaps for fast fetches
    UINT64 UnallocatedMessages;
    UINT64 PendingMessages;
    // 64 asynchronous messages (can be expanded)
    MSG Messages[64];
};
nexos
Member
Posts: 1081
Joined: Tue Feb 18, 2020 3:29 pm
Libera.chat IRC: nexos

Re: Enabling compiler optimizations ruins the kernel

Post by nexos »

But what happens if a server has a lot of incoming requests? Arbitrary limits would not be good here.

One thing to remember is that many OS design choices do seem preposterous, but once you've developed an OS or studied existing ones extensively, you'll see why they're designed this way.
"How did you do this?"
"It's very simple — you read the protocol and write the code." - Bill Joy
Projects: NexNix | libnex | nnpkg
iansjack
Member
Posts: 4703
Joined: Sat Mar 31, 2012 3:07 am
Location: Chichester, UK

Re: Enabling compiler optimizations ruins the kernel

Post by iansjack »

nexos wrote:But what happens if a server has a lot of incoming requests? Arbitrary limits would not be good here.
A linked list, rather than an array, is the better way to handle this sort of situation.
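A minimal sketch of the linked-list queue iansjack suggests, with the message payload reduced to an int for brevity (a real queue would carry the MSG structure from earlier in the thread):

```c
#include <assert.h>
#include <stdlib.h>

/* An unbounded message queue as a singly linked list: the
   alternative to a fixed 64-slot array. */
typedef struct msg_node {
    int payload;               /* stand-in for a real message body */
    struct msg_node *next;
} msg_node;

typedef struct {
    msg_node *head;            /* dequeue end */
    msg_node *tail;            /* enqueue end */
} msg_queue;

static void enqueue(msg_queue *q, int payload)
{
    msg_node *n = malloc(sizeof *n);
    n->payload = payload;
    n->next = NULL;
    if (q->tail)
        q->tail->next = n;     /* append after the current tail */
    else
        q->head = n;           /* queue was empty */
    q->tail = n;
}

/* Returns 1 and fills *out on success, 0 if the queue is empty. */
static int dequeue(msg_queue *q, int *out)
{
    msg_node *n = q->head;
    if (!n)
        return 0;
    *out = n->payload;
    q->head = n->next;
    if (!q->head)
        q->tail = NULL;
    free(n);
    return 1;
}
```

An unbounded queue trades the fixed-size limit for a memory-exhaustion risk, so real systems usually cap the queue length or apply back-pressure on the sender.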
devc1
Member
Posts: 439
Joined: Fri Feb 11, 2022 4:55 am
Location: behind the keyboard

Re: Enabling compiler optimizations ruins the kernel

Post by devc1 »

Thanks for your suggestions; I will try to implement a linked list. It seems better. One reason that comes to mind: the app may get stuck if the message list is full and the server isn't responding. But in a case like that, where the server chooses not to respond, the OS would keep allocating list nodes until it runs out of memory and crashes.