Application callbacks within OS interrupts
In the current BareMetal OS setup, the network interrupt fires whenever a packet is received and stores that packet in a ring buffer. It is up to the application to poll that buffer to see if there is anything it needs to deal with. Issues arise when many packets are being received and the application is not checking: older packets in the ring buffer start to be overwritten by newer ones.
One idea that I am looking into is having the application install a network function that would be called by the OS network interrupt. I want to get rid of the OS network buffer and send the data on to the application as soon as possible. What do others think of this approach?
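Roughly, the application-facing side could look like the sketch below. This is only an illustration in C; the names (os_net_set_callback, net_rx_callback_t, my_rx_handler) are made up and are not part of the current BareMetal API.

Code:
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical callback signature: the OS hands the application a pointer
 * to the received frame and its length, straight from the interrupt path. */
typedef void (*net_rx_callback_t)(const uint8_t *frame, size_t length);

static net_rx_callback_t rx_callback;    /* installed by the application */

/* Hypothetical registration call the OS would expose. */
void os_net_set_callback(net_rx_callback_t cb)
{
    rx_callback = cb;
}

/* Stand-in for the OS network interrupt handler: instead of copying the
 * frame into a ring buffer, it hands it to the application immediately. */
void os_net_interrupt(const uint8_t *frame, size_t length)
{
    if (rx_callback)
        rx_callback(frame, length);
    /* the real handler would acknowledge the NIC and send EOI here */
}

/* Example application handler: must stay short, since it runs while the
 * network interrupt is still in progress. */
static void my_rx_handler(const uint8_t *frame, size_t length)
{
    printf("got %zu-byte frame, first byte 0x%02x\n", length, (unsigned int)frame[0]);
}

int main(void)
{
    static const uint8_t demo[] = { 0xff, 0xff, 0xff };

    os_net_set_callback(my_rx_handler);
    os_net_interrupt(demo, sizeof demo); /* simulate one received packet */
    return 0;
}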
Thanks,
-Ian
BareMetal OS - http://www.returninfinity.com/
Mono-tasking 64-bit OS for x86-64 based computers, written entirely in Assembly
Re: Application callbacks within OS interrupts
What about signaling the process? It seems SIGIO was created for this purpose.
The application could register its network callback as a standard signal handler, no special API required.
Code:
SIGPOLL Term Pollable event (Sys V). Synonym of SIGIO
SIGIO 23,29,22 Term I/O now possible (4.2BSD)
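For comparison, registering such a handler with the standard POSIX/Linux interface looks roughly like this (sigaction plus F_SETOWN/O_ASYNC); in a real program fd would be a socket:

Code:
#define _DEFAULT_SOURCE              /* for SIGIO and O_ASYNC on glibc */
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

/* The application's "network callback", installed as an ordinary
 * signal handler for SIGIO. */
static void on_io(int signo)
{
    (void)signo;
    /* read() from the socket here; only async-signal-safe calls allowed */
    write(STDOUT_FILENO, "I/O ready\n", 10);
}

int main(void)
{
    struct sigaction sa;
    int fd = STDIN_FILENO;           /* would be a socket in a real program */

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_io;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGIO, &sa, NULL);

    /* ask the kernel to send SIGIO to this process for that descriptor */
    fcntl(fd, F_SETOWN, getpid());
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_ASYNC);

    for (;;)
        pause();                     /* everything happens in on_io() */
}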
Re: Application callbacks within OS interrupts
I have done this. I have a syscall that takes two arguments: what event to listen to (in your case a network interrupt) and what callback function should be called whenever that event occurs. When the interrupt occurs, the kernel checks whether any of the running tasks have registered for this interrupt event and, if so, switches to that task and runs the callback function.
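This is not Fudge's actual code, just a minimal sketch of the two-argument registration and dispatch described above; sys_listen, kernel_dispatch_event, and the event numbering are made up for illustration:

Code:
#include <stdio.h>

#define EVENT_MAX 32

typedef void (*event_callback_t)(void);

/* One callback slot per event number; a real kernel would keep this
 * per task rather than in a single global table. */
static event_callback_t handlers[EVENT_MAX];

/* The two-argument call described above: which event to listen to and
 * which function to run when it fires. */
int sys_listen(unsigned int event, event_callback_t callback)
{
    if (event >= EVENT_MAX)
        return -1;
    handlers[event] = callback;
    return 0;
}

/* Interrupt path: if a task registered for this event, run its callback
 * (in the real kernel, switch to that task first). */
void kernel_dispatch_event(unsigned int event)
{
    if (event < EVENT_MAX && handlers[event])
        handlers[event]();
}

/* Tiny demonstration of the flow. */
static void on_net_event(void)
{
    puts("network event delivered to callback");
}

int main(void)
{
    sys_listen(3, on_net_event);     /* pretend event 3 is the NIC interrupt */
    kernel_dispatch_event(3);        /* simulate that interrupt firing */
    return 0;
}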
Fudge - Simplicity, clarity and speed.
http://github.com/Jezze/fudge/
Re: Application callbacks within OS interrupts
Direct callbacks from IRQs to userland will kill your interrupt performance. I wouldn't allow any IRQ to trigger userland code directly. The proper way to do it is to make userland wait for some event that the IRQ triggers. At a minimum, the IRQ needs to clear the immediate condition that caused it. If we are talking about network cards or similar, there should be some (preferably kernel-level) code that makes sure packets are removed from the network card's buffer ring and queued somewhere else so that the card's buffers won't run out, at least unless the network card has an option to let the OS define the number of buffers.
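A rough sketch of what this could look like in C; the nic_* and wake_waiters primitives are hypothetical driver/scheduler hooks declared only for the sketch, not a real API:

Code:
#include <stdint.h>

#define PKT_MAX   1518
#define QUEUE_LEN 64

struct packet { uint16_t len; uint8_t data[PKT_MAX]; };

/* Software queue between the IRQ handler and the rest of the system. */
static struct packet queue[QUEUE_LEN];
static volatile unsigned int q_head;   /* written by the IRQ handler */
static volatile unsigned int q_tail;   /* written by the consuming task */

/* Hypothetical driver/scheduler primitives, declared only for the sketch. */
int  nic_rx_pending(void);                       /* frames left in the card's ring? */
int  nic_rx_take(uint8_t *buf, uint16_t *len);   /* pop one frame, re-arm the descriptor */
void nic_ack_irq(void);
void wake_waiters(void);

/* Keep the IRQ-side work minimal: drain the card's ring into the software
 * queue so the card never runs out of receive buffers, ack the interrupt,
 * and wake whoever is waiting; userland deals with the data later. */
void net_irq_handler(void)
{
    while (nic_rx_pending()) {
        unsigned int next = (q_head + 1) % QUEUE_LEN;
        if (next == q_tail)
            break;                               /* software queue full */
        nic_rx_take(queue[q_head].data, &queue[q_head].len);
        q_head = next;
    }
    nic_ack_irq();
    wake_waiters();
}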
Re: Application callbacks within OS interrupts
Jezze, that sounds like a good idea. The current plan is to use callbacks for network as well as disk I/O.
rdos, BareMetal OS runs everything in ring 0 with only one application running at a time. That application can also have multiple "threads", which are limited to one per core. The current system uses a ring buffer into which packets are copied from the Ethernet card's buffer; the application then has to poll that ring buffer to see whether anything has arrived.
Thanks,
-Ian
BareMetal OS - http://www.returninfinity.com/
Mono-tasking 64-bit OS for x86-64 based computers, written entirely in Assembly
Re: Application callbacks within OS interrupts
To comment on the performance problem: yes, it could be a problem, but on the other hand it is not so different from, say, a microkernel with drivers running in userspace, and I've seen some of those kernels with decent performance.
In this type of problem, where you have a ring buffer filling up, my personal opinion is that I would rather see a system slowdown than have to handle data loss.
The options are probably:
1. You say the buffer is full (result: data loss)
2. You let the buffer overwrite the older data (result: data loss)
3. You allocate more memory for the buffer (result: out of memory)
4. You guarantee all data must be taken care of (result: longer irq handling time, slower performance)
If you can guarantee that whenever the buffer receives data it is forwarded directly to the user application that requested it, the buffer should never become full. The only way to do this, as far as I know, is to make it so that when an interrupt occurs and the buffer is written to, the scheduler puts the user application that requested the data first in the queue, lets it take care of the data, and makes sure the application cannot be preempted during that time. In ReturnInfinity's case there is no scheduler, so it's even simpler.
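A minimal sketch of that idea, assuming hypothetical scheduler hooks (struct task, switch_to, and the no_preempt flag are made up for illustration):

Code:
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical task structure and scheduler hooks, for illustration only. */
struct task {
    bool wants_net;     /* task asked to be handed network data */
    bool no_preempt;    /* scheduler must not switch this task out */
};

struct task *current;     /* task running right now */
struct task *net_waiter;  /* task that registered for network data */

void switch_to(struct task *t);   /* provided by the scheduler (not shown) */

/* Interrupt side: instead of letting the buffer fill up, run the waiting
 * task immediately and pin it until it has consumed the data. */
void on_net_data_ready(void)
{
    if (net_waiter != NULL && net_waiter->wants_net) {
        net_waiter->no_preempt = true;
        switch_to(net_waiter);
    }
}

/* Task side: called once the packet has been handled. */
void net_data_done(void)
{
    current->no_preempt = false;  /* normal scheduling resumes */
}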
Fudge - Simplicity, clarity and speed.
http://github.com/Jezze/fudge/
Re: Application callbacks within OS interrupts
@ReturnInfinity: After reading your post I saw that you can have many running threads in the one and only program you are running. One suggestion: if you implement the callback approach we talked about, you could also add a callback for when the PIC fires an interrupt, so the application could basically handle its own thread management instead of the kernel. Just a thought, though. I do something like this so that I don't need a thread implementation in the kernel.
Fudge - Simplicity, clarity and speed.
http://github.com/Jezze/fudge/
Re: Application callbacks within OS interrupts
Just curious. How does Jezze's syscall differ from a classic signal(signum,sighandler) call?
http://man7.org/linux/man-pages/man2/signal.2.html
Also has two arguments:
signal number = "event to listen to"
signal handler = "callback function it should call whenever that event occurs"
No offense, I just don't see the difference.
About performance: if you use signals (or any other message-oriented solution), there's no problem, since the IRQ handler only sets some flags in the thread control block and moves on (it does not run the userspace callback, nor wait for it to finish). This means network IRQs can fire rapidly (no need to mask the IRQ, wait for slow userspace, and unmask it again), and if you allocate a big enough buffer, there will be no data loss either.
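In other words, the IRQ-side work can be reduced to something like this sketch (the tcb layout, net_owner, and eoi are hypothetical and shown only to illustrate the flag-setting):

Code:
#include <stdint.h>

#define SIG_IO_BIT (1u << 23)      /* matches SIGIO = 23 from the table above */

/* Minimal thread control block with a pending-signal bitmap. */
struct tcb {
    uint32_t pending_signals;
    /* registers, stack pointer, ... */
};

struct tcb *net_owner;             /* thread that registered for network events */
void eoi(void);                    /* acknowledge the interrupt controller (not shown) */

/* IRQ handler in the signal-based scheme: flag the event and return.
 * The userspace callback runs later, when the thread is next scheduled
 * and the kernel sees the pending bit on the way back to user mode. */
void net_irq(void)
{
    net_owner->pending_signals |= SIG_IO_BIT;
    eoi();
}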
- Owen
Re: Application callbacks within OS interrupts
The problem I see with this is the same one as with signal handlers, and with interrupt handlers in general: you have to be very careful what context you share with the normal execution flow. For example, one can't use the memory allocator in a signal handler, because somebody might already be using it (unless you block the signals whenever it's in use), and, unlike with threads, you can't wait for the code you just interrupted to finish!
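One common workaround for the allocator case is to block the signal around every allocator call with the standard sigprocmask interface, so the handler can never observe the allocator mid-operation; a small sketch (the wrapper name is made up):

Code:
#define _DEFAULT_SOURCE            /* for SIGIO on glibc */
#include <signal.h>
#include <stdlib.h>

/* Hypothetical wrapper: block SIGIO for the duration of the allocation so
 * the SIGIO handler can never interrupt malloc() mid-operation. */
void *malloc_signal_safe(size_t size)
{
    sigset_t block, old;
    void *p;

    sigemptyset(&block);
    sigaddset(&block, SIGIO);
    sigprocmask(SIG_BLOCK, &block, &old);   /* handler deferred from here... */
    p = malloc(size);
    sigprocmask(SIG_SETMASK, &old, NULL);   /* ...until here */
    return p;
}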
Re: Application callbacks within OS interrupts
No, turdus, you are right. They are pretty much the same as signals, as far as I understand how signals work.
Also, I think we are talking about the same thing; I just haven't learned the lingo yet, or haven't been clear enough about it in my own head. I don't mask the interrupt or anything; I just make sure no rescheduling can occur while the signal flag is set and the task assigned to handle the signal is running. As soon as the task is done, the scheduler continues as normal.
The irony is that, damn, this could potentially lead to data loss.
Fudge - Simplicity, clarity and speed.
http://github.com/Jezze/fudge/
Re: Application callbacks within OS interrupts
@Jezze: okay! Don't worry, there's no perfect solution; if you want performance you have to sacrifice something (in this case, being free of data loss).
@Owen: there are lock-free allocation algorithms that can be used from signal handlers as well. I'll look it up for you as soon as I get home.
Edit: here you are. This is the best algorithm I've found (there are many, like jemalloc, nbmalloc, etc.; this is IMHO the best, and it uses Hoard structures, by the way): http://www.research.ibm.com/people/m/mi ... i-2004.pdf