Syscalls

Vega

Syscalls

Post by Vega »

Which is the best way of doing syscalls? Call gates? Software interrupts? And how many syscalls can my system have? I can have about 210 software interrupts -- how many call gates (or other kinds of entry points) can I have? If I want to write lots and lots of APIs (like Windows), I'll need more than just 200 syscalls, won't I? A link to a page explaining how to set up call gates would also be very nice.
Solar

Re:Syscalls

Post by Solar »

"Best" depends on what you want to achieve. Speed? Security? SMP capability? Extendability? Portability? What CPU are you targeting?

You won't need a syscall for every function in your OS API; you only need one for every kernel function. While a C/C++ standard library has hundreds of functions, you only need a dozen or so syscalls to implement them. It would also be possible to "multiplex" a single entry point -- e.g. one call gate -- by passing a function number as a parameter.

As for "how to set up call gates"... make sure you have the Intel manuals close at hand. While tutorial smight be all and well, having the "real" manual is to be preferred.
Every good solution is obvious once you've found it.
Brendan

Re:Syscalls

Post by Brendan »

Hi,

If parameters are transferred via registers and you use eax to hold the function number, you could support ALL methods (software int, call gates, sysenter and syscall). If the CPU doesn't support sysenter and/or syscall you can emulate them within the invalid opcode exception handler.

For example:

Code: Select all

     section .data

%define MAXAPIFUNCTION 4                  ; number of entries in APItable
%define errFuncNotDefined 0xFFFFFFFF      ; error code for bad function numbers (value is illustrative)

APItable:
     dd firstFunction                     ; function 0
     dd APIundefined                      ; function 1 (not implemented)
     dd secondFunction                    ; function 2
     dd APIundefined                      ; function 3 (not implemented)

     section .text


APIundefined:
     mov eax,errFuncNotDefined
     ret


APIsoftIntHandler:                        ; entered via "int APIINT"
     cmp eax,MAXAPIFUNCTION
     jae .notdefined                      ; jae: valid function numbers are 0..MAXAPIFUNCTION-1
     call [APItable+eax*4]
     iretd

.notdefined:
     mov eax,errFuncNotDefined
     iretd


APIcallGateHandler:                       ; entered via a far call through a call gate
     cmp eax,MAXAPIFUNCTION
     jae .notdefined
     call [APItable+eax*4]
     retf

.notdefined:
     mov eax,errFuncNotDefined
     retf


APIsysenterHandler:                       ; entered via sysenter (esp comes from IA32_SYSENTER_ESP)
     cmp eax,MAXAPIFUNCTION
     jae .notdefined
     call [APItable+eax*4]
     sysexit                              ; returns to edx (eip) and ecx (esp) set by the caller

.notdefined:
     mov eax,errFuncNotDefined
     sysexit

APIsyscallHandler:                        ; entered via syscall (no stack switch is done for us)
     cmp eax,MAXAPIFUNCTION
     jae .notdefined
     mov [kernelStackTop],esp             ; save user esp in the top dword of the kernel stack
     mov esp,kernelStack-4                ; (assumes kernelStackTop equ kernelStack-4)
     sti
     call [APItable+eax*4]
     pop esp                              ; restore the saved user esp
     sysret                               ; returns to the address saved in ecx by syscall

.notdefined:
     mov eax,errFuncNotDefined
     sysret
The registers before the application makes the system call would be:
eax = function number
ebx = input_parameter1
ecx = unused (return esp for sysenter)
edx = unused (return eip for sysenter)
esi = input_parameter2
edi = input_parameter3
ebp = input_parameter4

And on return:
eax = error code
ebx = output_parameter1
esi = output_parameter2
edi = output_parameter3
ebp = output_parameter4

In this way the application would be free to use whichever method is best for the situation. Code that is only run once could use the software interrupt (to reduce the size of the executable). Where the highest possible speed is required, the application could check for the fastest method and use that. Alternatively, code could be optimized for a specific CPU when compiled, e.g.:

Code: Select all

%macro APIcall 1
  mov eax,%1                          ; function number
  %ifdef INTEL
     push ecx                         ; save ecx/edx: the convention uses them for sysexit's return esp/eip
     push edx
     mov ecx,esp                      ; ecx = esp to return to (used by sysexit)
     mov edx,%%l1                     ; edx = eip to return to (used by sysexit)
     sysenter
%%l1:
     pop edx                          ; sysexit resumes here, so the pushes are balanced
     pop ecx
  %elifdef AMD
     syscall                          ; the CPU saves the return eip in ecx
  %elifdef CALLGATE
     call APICALLGATESEL:0x00000000   ; far call; the offset is ignored for call gates
  %else
     int APIINT                       ; generic fallback
  %endif
%endmacro

Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
kataklinger

Re:Syscalls

Post by kataklinger »

Brendan wrote:
If the CPU doesn't support sysenter and/or syscall you can emulate them within the invalid opcode exception handler.
If you try to write to the SYSENTER MSRs you'll get a GP fault first.

And what about speed? The IA-32 Intel Architecture Software Developer's Manual, Volume 2, says:

The SYSENTER instruction is optimized to provide the maximum performance for system calls from user code running at privilege level 3 to operating system or executive procedures running at privilege level 0.

And they also call it a 'fast system call'; if you emulate it with the #UD fault then it is not so fast.
Dreamsmith

Re:Syscalls

Post by Dreamsmith »

kataklinger wrote:
Brendan wrote: If the CPU doesn't support sysenter and/or syscall you can emulate them within the invalid opcode exception handler.
If you try to write to the SYSENTER MSRs you'll get a GP fault first.
Then don't. The idea is that userland code can use SYSENTER regardless; the OS obviously needs to adjust its own behavior to the actual hardware present, but non-OS software shouldn't have to (unless you're designing an exokernel).
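
For example, a rough sketch of that check -- the kernel only touches the SYSENTER MSRs if CPUID reports the SEP feature (KERNEL_CODE_SEL is a placeholder; kernelStack and APIsysenterHandler are borrowed from Brendan's sketch above):

Code: Select all

setupSysenter:
     mov eax,1
     cpuid                            ; clobbers ebx/ecx/edx
     test edx,1 << 11                 ; CPUID.01h:EDX bit 11 = SEP (sysenter/sysexit supported)
     jz .noSysenter                   ; old CPU: leave the MSRs alone, so no GP fault
     mov ecx,0x174                    ; IA32_SYSENTER_CS
     mov eax,KERNEL_CODE_SEL
     xor edx,edx
     wrmsr
     mov ecx,0x175                    ; IA32_SYSENTER_ESP
     mov eax,kernelStack
     xor edx,edx
     wrmsr
     mov ecx,0x176                    ; IA32_SYSENTER_EIP
     mov eax,APIsysenterHandler
     xor edx,edx
     wrmsr
.noSysenter:
     ret
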
kataklinger wrote: And what about speed? The IA-32 Intel Architecture Software Developer's Manual, Volume 2, says:

The SYSENTER instruction is optimized to provide the maximum performance for system calls from user code running at privilege level 3 to operating system or executive procedures running at privilege level 0.

And they also call it a 'fast system call'; if you emulate it with the #UD fault then it is not so fast.
Now, the speed question is a very good one. I'd be interested in any hard figures anyone has on this, too. But how much slower can it be to handle system calls through the #UD fault vector than through the INT 0x80 vector? My off-the-cuff reaction is: not bad -- on hardware that supports it, your syscalls go very fast, and on hardware that doesn't, they're about the same speed as if you were using some other syscall method like trap gates or call gates, no?
kataklinger

Re:Syscalls

Post by kataklinger »

Well, the #UD handler will be more complex and slower. And your OS will be less portable.
I will stick to software interrupts because they make my life easier.
Brendan

Re:Syscalls

Post by Brendan »

Hi,
kataklinger wrote: Well, the #UD handler will be more complex and slower. And your OS will be less portable.
I will stick to software interrupts because they make my life easier.
Emulating sysenter and/or syscall would be slower. My original idea was that applications could use whatever they think is best for the situation. For example, if you're writing a utility that measures the cache bandwidth on Athlon CPUs then you probably won't care about emulation on non-Athlon CPUs. If an application has a function that must have the highest performance on all 80x86 CPUs it could check what the CPU supports and have three different versions of the function. Also, software can be distributed as source code and optimized/compiled for the target machine, so that the fastest method is used each time it's compiled.
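
For example, a rough sketch of that run-time check, done once at startup (doInt, doSysenter and doSyscall are assumed user-side stubs that each make the call the corresponding way, and CPUID itself is assumed to be available):

Code: Select all

     section .data
apiEntry:
     dd doInt                         ; safe default: the software interrupt stub

     section .text
pickFastestEntry:
     mov eax,0x80000000               ; does the extended CPUID range exist?
     cpuid
     cmp eax,0x80000001
     jb .trySysenter
     mov eax,0x80000001
     cpuid
     test edx,1 << 11                 ; SYSCALL/SYSRET supported?
     jz .trySysenter
     mov dword [apiEntry],doSyscall
     ret
.trySysenter:
     mov eax,1
     cpuid
     test edx,1 << 11                 ; SEP (sysenter/sysexit) supported?
     jz .done
     mov dword [apiEntry],doSysenter
.done:
     ret

; every API call then becomes:
;     mov eax,FUNCTION_NUMBER
;     call [apiEntry]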

Regardless of which method(s) the software uses, it would still run on all CPUs. IMHO code designed for computer X that runs slowly on computer Y is better than code designed for computer X that crashes on computer Y. In addition, code designed/optimized for computer X may be better than generic code that doesn't take advantage of extra CPU features. This is why games have "recommended system requirements".

At the end of the day application programmers can still decide to "stick to software interrupts because they make my life easier" if they want.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Dreamsmith

Re:Syscalls

Post by Dreamsmith »

kataklinger wrote: Well, the #UD handler will be more complex and slower. And your OS will be less portable.
I don't see why it would be much more complex, nor more than a few cycles slower (one more memory access and a comparison would be all you'd need). And it wouldn't be ANY more or less portable -- setting the IDT vector for INT 0x80 isn't any more portable than setting the IDT vector for #UD.
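
For what it's worth, a rough sketch of that handler (assuming flat segments, that the page holding the faulting opcode is mapped, and that APIsoftIntHandler is the dispatcher from Brendan's code above; genuineInvalidOpcode is a placeholder for the OS's real #UD handling):

Code: Select all

APIinvalidOpcodeHandler:
     push ebp
     mov ebp,esp
     push esi
     mov esi,[ebp+4]                  ; saved user eip (#UD pushes no error code)
     cmp word [esi],0x340F            ; 0F 34 = sysenter?
     je .emulate
     cmp word [esi],0x050F            ; 0F 05 = syscall?
     jne .realFault
.emulate:
     add dword [ebp+4],2              ; resume after the 2-byte instruction on iretd
     pop esi
     pop ebp
     jmp APIsoftIntHandler            ; the exception frame doubles as an interrupt frame
.realFault:
     pop esi
     pop ebp
     jmp genuineInvalidOpcode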