
Interrupt handlers from an OS standpoint

Posted: Sat Nov 22, 2014 8:52 pm
by blasthash
Hey everyone,

First off, I decided to post this here as I thought it wasn't as 'targeted' as threads are in the mainline OS development subforum. Forgive me, if that doesn't appear to be the case.

Obviously, in real mode interrupts are specified by the 256-entry vector table at the beginning of memory (starting at 0h). I know how to then arrange for an interrupt servicing routine's address to be placed in that table such that on the trigger of that interrupt, that routine is executed. However, what I don't understand is how to make this situation "track" or map over to the current running process, even if there aren't multiple processes in action. Let me try to paint a little picture in code to illustrate what I'm talking about; I may be describing poorly something others of you know very well.

I'm right now writing the keyboard POST/initializer for an operating system I'm trying to run on a homebrew AT that I discuss a little bit in the first post here. All goes well until I hit the point where I need to start interacting with the device.

Code: Select all

8042_DEV_CONF:
        STI                  ; Enable interrupts
                             ; Send device commands here.
        HLT                  ; Halt and wait for an interrupt.
Obviously, the device will respond with an interrupt when it puts data into the 8042 buffer, in which case the ISR can put the data into AX or AL or something of that nature for when it returns. Because the CPU is stalled and waiting for the interrupt via the HLT instruction, it's a no-brainer to figure out that all I would need to do in the next line of code is read out of that register and the data is there.

Making such an ISR a bit more complex, it could pump the 8042 buffer to memory and keep a pointer to the base of the copied buffer, in which case reading would be only trivially more difficult. This would let me start paving the way for implementing things like scanf and such.
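
Something like this rough sketch is what I have in mind, assuming a real-mode handler hooked on IRQ1 with the master PIC at its default base; the labels (kbd_buffer, kbd_head) and the 32-byte size are just placeholders:

Code: Select all

; Rough sketch: IRQ1 handler that copies 8042 output into a small ring
; buffer.  Assumes code and data share one segment; names are invented.
kbd_isr:
        push    ax
        push    bx
        push    ds
        mov     ax, cs              ; data lives alongside the code in this sketch
        mov     ds, ax
        in      al, 60h             ; read the scan code from the 8042
        mov     bx, [kbd_head]
        mov     [kbd_buffer + bx], al
        inc     bx
        and     bx, 1Fh             ; wrap at 32 bytes (power of two)
        mov     [kbd_head], bx
        mov     al, 20h             ; End Of Interrupt to the master PIC
        out     20h, al
        pop     ds
        pop     bx
        pop     ax
        iret

kbd_buffer  times 32 db 0
kbd_head    dw 0                    ; next free slot; the reader keeps its own tail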

Here's what I see: if an ISR like that is used in a scenario where the code isn't designed to halt and wait for the interrupt, the returned data would just get overwritten and the task wouldn't be accomplished. Suppose I'm testing a driver in my OS by having it take network data and print it to the screen (heh, not anytime soon), and I want the Escape key to cancel the program. That's fine, but when the ISR returns with the key in AL (or via a pointer, or however), it'll just be overwritten by the program, which won't quit.

After much deliberation on this, I realized I need two stages of interrupt handling: the interrupt service routine, which determines what a program or process does with an event, and the interrupt handler, which manages the event at the hardware level, marshals the data, and invokes that process's ISR.

I thought a little about how to implement it, but stopped when the sheer magnitude of it made my brain hurt and I realized I would need to consult others for advice. What I had in mind was that the new process coming into effect would retarget the interrupt handler to that process's ISR, if it employs one.

So, the initial discussion focuses on two things:

First, is this the manner in which these things work? These are just my thoughts on how I thought such a scenario would be implemented, but I have no idea if I'm on the right track or if I'm dead wrong.

Second of all, can anyone help me start to figure out how to implement such a thing, if it is indeed the correct way of doing things?

Re: Interrupt handlers from an OS standpoint

Posted: Sat Nov 22, 2014 11:13 pm
by SpyderTL
You are on the right track.

Unfortunately, you are getting into the core design of your OS, so there isn't really a "right" way to do this.

However, we can give you some pointers... You can use them, or come up with your own solution.

The way I look at it, the OS is responsible for "translating" between the hardware and the software. It needs to "understand" the data and the events coming from the hardware, and translate that into data and events that the applications can understand. Your job is to make life easy for the application, by taking all of the different types of system components, and connected devices, and make them all look "similar" to the application.

So, try to think about the application, and if you were writing an application, think about what you would want that "interface" to look like. You can start by thinking about the simplest possible solution, and add more functionality over time, as needed.

As for the interrupt handlers, the application couldn't care less about the technical details. It really only cares about the events and the data at the abstract device level: it wants to know when a key is pressed, when the mouse is moved, or when a network packet arrives on a specific port. So whatever you can do to provide this information in a simple, reliable way that "hides" all of the mundane details (interrupts, addresses, buffers, etc.) is a win.

Typically, the OS handles the interrupts, which in turn notifies any drivers responsible for communicating with any devices on that interrupt "number", and the driver talks to the device and sends the OS an event containing the details of the event, and then the OS sends an event to any applications that have requested those types of events.

But it's your OS, and you can set it up however you want. There are plenty of open source projects out there that you can use as a guide, or you can come up with your own solutions.

Let us know if there is anything we can do to help.

EDIT: Just some suggestions about your specific interrupt handler approach above...

You probably don't want to HLT your running application while it's waiting for an event from a device in the long run, but for now, it's fine. However, you also probably don't want to assume that immediately after the HLT instruction, you will have meaningful data in your registers, because ANY interrupt will restart your code after the HLT command, not just the keyboard events. For instance, the PIT timer interrupt will fire roughly 18 times a second, by default. Each of these events will resume your program after a HLT instruction. You will either need to check the keyboard controller after your HLT command and see if there is data sitting in it, or you will need your interrupt handler to check the keyboard controller, and pull the data out of the keyboard buffer, and store it somewhere safe, for the application to find later.
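
For example, the "check after HLT" version might look something like this (just a sketch; it assumes your IRQ1 handler is a stub that only sends EOI and leaves the scan code in the controller for this loop to read):

Code: Select all

; Sketch of the "check after HLT" approach.  Any interrupt (PIT, keyboard,
; etc.) wakes the CPU, so test the 8042 status port before reading data.
wait_for_key:
        sti                         ; interrupts must be enabled to wake from HLT
        hlt                         ; sleep until *some* interrupt fires
        in      al, 64h             ; 8042 status register
        test    al, 01h             ; bit 0 set = output buffer has data
        jz      wait_for_key        ; it was some other interrupt; sleep again
        in      al, 60h             ; AL now holds the scan code
        ret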

Re: Interrupt handlers from an OS standpoint

Posted: Sun Nov 23, 2014 12:02 am
by blasthash
SpyderTL wrote:You are on the right track.

Unfortunately, you are getting into the core design of your OS, so there isn't really a "right" way to do this.

However, we can give you some pointers... You can use them, or come up with your own solution.

The way I look at it, the OS is responsible for "translating" between the hardware and the software. It needs to "understand" the data and the events coming from the hardware, and translate that into data and events that the applications can understand. Your job is to make life easy for the application, by taking all of the different types of system components, and connected devices, and make them all look "similar" to the application.

So, try to think about the application, and if you were writing an application, think about what you would want that "interface" to look like. You can start by thinking about the simplest possible solution, and add more functionality over time, as needed.

As for the interrupt handlers, the application couldn't care less about the technical details. It really only cares about the events and the data at the abstract device level: it wants to know when a key is pressed, when the mouse is moved, or when a network packet arrives on a specific port. So whatever you can do to provide this information in a simple, reliable way that "hides" all of the mundane details (interrupts, addresses, buffers, etc.) is a win.

Typically, the OS handles the interrupts, which in turn notifies any drivers responsible for communicating with any devices on that interrupt "number", and the driver talks to the device and sends the OS an event containing the details of the event, and then the OS sends an event to any applications that have requested those types of events.

But it's your OS, and you can set it up however you want. There are plenty of open source projects out there that you can use as a guide, or you can come up with your own solutions.

Let us know if there is anything we can do to help.
That works well to help keep everything in perspective.

Can you give me some pointers as to how you'd go about implementing such a thing? What I was thinking of was an arrangement where the interrupt handler in the OS in turn calls a function and passes it the data as an argument; for example, something like

Code: Select all

void (*process_handler)(char* data);
where the char pointer provides the location of the data, and the function pointer itself serves as a reference to the routine the handler should call. I thought about working this so that when a new process or task is spawned, the OS retargets process_handler by setting it to the address of the new process's service routine. It's shaky, but my gut says it should work.
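
In assembly terms, what I picture is something like the sketch below: the OS-level handler does the hardware work and then makes an indirect call through a variable that the scheduler rewrites whenever it switches tasks (all names are made up, the routine is assumed to live in the same segment, and DS is assumed to already point at the handler's data):

Code: Select all

; Sketch of the "retargetable handler" idea (names invented, no error
; handling).  The scheduler rewrites current_isr on every task switch.
current_isr     dw default_key_isr  ; offset of the active process's routine

os_key_handler:                     ; installed in the IVT slot for IRQ1
        push    ax
        in      al, 60h             ; AL = scan code, handed to the callee
        call    [current_isr]       ; near call into whatever is registered
        mov     al, 20h
        out     20h, al             ; End Of Interrupt to the master PIC
        pop     ax
        iret

default_key_isr:
        ret                         ; nothing registered yet, just drop the key

; On a task switch the scheduler would retarget the pointer, e.g.
;       mov     word [current_isr], new_process_key_routine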

First, can you tell me what you think about running the interrupt handling in this way, and second - can you explain, very generally, how task switching would work? It seems all the new docs out there tend to focus on newer generation processors which doesn't bode well for my '286 AT :x

Re: Interrupt handlers from an OS standpoint

Posted: Sun Nov 23, 2014 12:45 am
by SpyderTL
My gut tells me that your approach should work well for a single application, but may become an issue when you start trying to run two applications at once.

Let's focus on the multi-threading issue first. The type of processor you are using makes little or no difference to the way multiple threads work. One thing to keep in mind here is that one processor can only run one program at a time. (Modern processors actually can run more than one, but they must be configured and enabled by the OS, so you can safely ignore this functionality for now...)

So, keeping in mind that your CPU can only run one application (or thread) at a time, how does multi-threading work? Before we answer that, let's also assume that we want to support running multiple applications/threads with little or no impact on the application itself. In other words, a running program doesn't need to know that it is running in a multi-threaded environment. As far as it knows, it has complete control of the CPU from the time the application starts to the time that it finishes.

We also don't want to "trust" the application to share the CPU with the other applications, because most programs won't. So how do we pull this off with all of these limitations? The OS needs a way to "freeze" an application and "unfreeze" an application without it noticing anything at all.

As I mentioned above, by default, you will get roughly 18 interrupts a second from the PIT timer. Each time this interrupt is handled, the currently running application is "interrupted" (or "frozen"), and the address of the next instruction is stored on the stack. Then the CPU finds the address of the correct interrupt handler, and sets the IP register to that address, and continues running from there.

That interrupt handler is responsible for "handling" the interrupt, and then setting the IP register back to the original program's next instruction.

So, what if the interrupt handler loaded a different address into IP, instead? The CPU would simply "resume" (or "unfreeze") using that address, instead. So, let's say that we had two programs running. The first program would run just fine until the PIT timer interrupt fires. Then the IP register would be stored on the stack, and the interrupt handler address would be copied to the IP register, and the CPU would start executing the handler. The handler could replace the address on the stack with the "next" instruction address from the second program, and instruct the CPU to resume the application.

The second application would run until the next PIT timer interrupt, which would store the new "next" instruction address to the stack, and call the interrupt handler again. The handler could swap the "next" instruction address for the first application on the stack, and then resume the application again.

Using this flip/flop approach, each application would be paused and resumed around 9 times a second. This would probably be pretty noticeable to the user, so you probably don't want to use the default PIT rate of roughly 18 interrupts a second. You can actually reprogram the PIT to interrupt the CPU much more often; its input clock is about 1.19 MHz, so with a divisor of 1 you would get over a million interrupts a second. However, at rates anywhere near that your applications will get almost no CPU time and your interrupt handler will get nearly 100% of it, so something in the 1,000-2,000 interrupts per second range is probably more appropriate.
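
If you do decide to speed it up, reprogramming PIT channel 0 is only a few port writes. A sketch for roughly 1,000 interrupts per second (the divisor is 1193182 / 1000, rounded):

Code: Select all

; Sketch: reprogram PIT channel 0 for roughly 1000 interrupts per second.
; Divisor = 1193182 / 1000 ~= 1193 = 04A9h.
        mov     al, 36h             ; channel 0, lobyte/hibyte, mode 3 (square wave)
        out     43h, al
        mov     al, 0A9h            ; low byte of the divisor
        out     40h, al
        mov     al, 04h             ; high byte of the divisor
        out     40h, al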

So the only problem left is that both running applications will be using the CPU and its registers. Simply pausing one application and resuming the other will mean that one application will be able to change the registers for the other application. In order to prevent one application from affecting another, the entire CPU state must be stored in memory when the application is "frozen" and reloaded before the application is "unfrozen". If done properly, the application will not even notice that it has been interrupted.

The actual details of how all of this is accomplished are up to you, but this should give you enough information to get you started. If you run into any problems, just let us know.

As for your Keyboard events, I would start out with just one application that checks the keyboard controller in a loop, and have it HLT if the keyboard buffer is empty. The keyboard event will cause your application to "wake up" after the HLT instruction, and you can check the keyboard buffer again. As I mentioned before, other events will also "wake up" your application, so you will need to check the keyboard buffer to see if it actually has data in it.

Later on, when you need support for multiple applications, I would then replace your interrupt handler with one that can check the keyboard buffer, and copy any data in it into memory. Then I would change the application(s) to check that memory address instead of the keyboard buffer. Eventually, you will need to get creative and figure out how to notify multiple applications of a single keyboard event, so each application may need its own event queue, which will be filled by the OS, and processed by the application the next time it is "awake".

Re: Interrupt handlers from an OS standpoint

Posted: Sun Nov 23, 2014 3:55 am
by blasthash
SpyderTL wrote:My gut tells me that your approach should work well for a single application, but may become an issue when you start trying to run two applications at once.

Let's focus on the multi-threading issue first. The type of processor you are using makes little or no difference to the way multiple threads work. One thing to keep in mind here is that one processor can only run one program at a time. (Modern processors actually can run more than one, but they must be configured and enabled by the OS, so you can safely ignore this functionality for now...)

So, keeping in mind that your CPU can only run one application (or thread) at a time, how does multi-threading work? Before we answer that, let's also assume that we want to support running multiple applications/threads with little or no impact on the application itself. In other words, a running program doesn't need to know that it is running in a multi-threaded environment. As far as it knows, it has complete control of the CPU from the time the application starts to the time that it finishes.

We also don't want to "trust" the application to share the CPU with the other applications, because most programs won't. So how do we pull this off with all of these limitations? The OS needs a way to "freeze" an application and "unfreeze" an application without it noticing anything at all.

As I mentioned above, by default, you will get roughly 18 interrupts a second from the PIT timer. Each time this interrupt is handled, the currently running application is "interrupted" (or "frozen"), and the address of the next instruction is stored on the stack. Then the CPU finds the address of the correct interrupt handler, and sets the IP register to that address, and continues running from there.

That interrupt handler is responsible for "handling" the interrupt, and then setting the IP register back to the original program's next instruction.

So, what if the interrupt handler loaded a different address into IP, instead? The CPU would simply "resume" (or "unfreeze") using that address, instead. So, let's say that we had two programs running. The first program would run just fine until the PIT timer interrupt fires. Then the IP register would be stored on the stack, and the interrupt handler address would be copied to the IP register, and the CPU would start executing the handler. The handler could replace the address on the stack with the "next" instruction address from the second program, and instruct the CPU to resume the application.

The second application would run until the next PIT timer interrupt, which would store the new "next" instruction address to the stack, and call the interrupt handler again. The handler could swap the "next" instruction address for the first application on the stack, and then resume the application again.

Using this flip/flop approach, each application would be paused and resumed around 9 times a second. This would probably be pretty noticeable to the user, so you probably don't want to use the default PIT rate of roughly 18 interrupts a second. You can actually reprogram the PIT to interrupt the CPU much more often; its input clock is about 1.19 MHz, so with a divisor of 1 you would get over a million interrupts a second. However, at rates anywhere near that your applications will get almost no CPU time and your interrupt handler will get nearly 100% of it, so something in the 1,000-2,000 interrupts per second range is probably more appropriate.

So the only problem left is that both running applications will be using the CPU and its registers. Simply pausing one application and resuming the other will mean that one application will be able to change the registers for the other application. In order to prevent one application from affecting another, the entire CPU state must be stored in memory when the application is "frozen" and reloaded before the application is "unfrozen". If done properly, the application will not even notice that it has been interrupted.

The actual details of how all of this is accomplished are up to you, but this should give you enough information to get you started. If you run into any problems, just let us know.

As for your Keyboard events, I would start out with just one application that checks the keyboard controller in a loop, and have it HLT if the keyboard buffer is empty. The keyboard event will cause your application to "wake up" after the HLT instruction, and you can check the keyboard buffer again. As I mentioned before, other events will also "wake up" your application, so you will need to check the keyboard buffer to see if it actually has data in it.

Later on, when you need support for multiple applications, I would then replace your interrupt handler with one that can check the keyboard buffer, and copy any data in it into memory. Then I would change the application(s) to check that memory address instead of the keyboard buffer. Eventually, you will need to get creative and figure out how to notify multiple applications of a single keyboard event, so each application may need its own event queue, which will be filled by the OS, and processed by the application the next time it is "awake".
Wow... forgive me if it takes a few rounds of communication for me to process it fully; that's a complex system.

What I'm given to understand is essentially that all the processes form a 'ring' or barrel structure if you will. This draws analogies to how some of the ST registers in x87 operate, but I won't pollute the discussion with that. The process scheduling core or routines rotate this barrel around on the tick of the timer, loading and unloading the CPU environment and registers from the stack.

Needless to say, this seems to involve a lot of PUSHAD/POPAD calls, at least in the case of i386/IA-32. There are two difficult scenarios I can think of:

1) Program entry or exit. Opening or closing a running process would require extra complexity on the part of the scheduler. (I'll use the term "barrels" in this analogy because it fares better with my mind at this early point; think of the system as a revolver.) If you number the barrels/processes 0-7 (8 total), with 0 executing first and rotating through 7 back to 0, the stack contains the minimum of program states during that 7-to-0 transition, so the optimum time to add a process would be when 7 rolls back to 0, and the optimum way to close a process out would be to simply delay that process's transition to the next one and clean its traces off the stack immediately. Thus, closing a process involves less latency than adding one in this method.

Of course, this (in the case of opening a new process) would hinge on delaying until the "barrel" reaches top dead center, the position where it started. The delayed approach would be easier to implement; the other scenario, asynchronous or real-time loading, would be more difficult.

2) How to manage the stack once the rotation wraps around. Once process 0 suspends and process 1 resumes, if I understand correctly, the entire CPU state (all registers) will be pushed onto the stack, meaning that process 1 will operate with process 0's state underneath it on the stack. Once process 7 suspends to let process 0 back in, all 8 processes' states will be on the stack. Ideally this is a simple fix: at first glance you could just roll the stack pointer back 18 bytes (for x86-16: 8 registers plus IP; 36 bytes for x86-32) and load from there.

The issue with this is that because it is occurring on an interval not controllable by the processes (completely beside the concept of the rings and privilege level), the individual processes wouldn't have a say in when they get switched out. The problem here is that if one of those processes commits a stack operation (pushf, for example) and switches out before it gets to reverse the operation, the stack will be misaligned and the entire process stack is corrupted.

In terms of my interpretation of such a system, is it on the right track? What would your ideas be on how to overcome these issues?

In terms of my priorities for my OS's development, first I want to prove I can build an operating system capable of being roughly "sentient"; that is, it can manage to spawn and run a program in a manner that wasn't specified by its coding; that it could pass the baton to a completely-independent program based on user input alone as opposed to just a linear string of massive subroutines.

Once I have that and if I'm able to make significant inroads in a workable multitasking/scheduling system I'll probably be convinced I have something worth talking about and I'll work on porting it out of the AT hardware and onto generic x86 systems. But that's many months away, even if I get the devil's own API.

I'm still drawing blanks on the whole interrupt system and handlers (as that's the problem I have right in front of me, at least in the short-term), but you have my interest piqued with the multitasking/task/process switching talks and so let's continue that.

Re: Interrupt handlers from an OS standpoint

Posted: Sun Nov 23, 2014 6:57 am
by Brendan
Hi,
blasthash wrote:Second of all, can anyone help me start to figure out how to implement such a thing, if it is indeed the correct way of doing things?
The very first thing you're going to need is some way for "things" to communicate with each other. This could be events, or messages, or pipes, or whatever you like (possibly including 6 completely different things intended for almost exactly the same purpose).

For most OSs, the interrupt handler sends something to something. What it sends might be a notification that an IRQ occurred (where the interrupt handler itself doesn't do any other processing); or it might be a scan-code (where the interrupt handler fetches a byte from hardware and sends that); or maybe the IRQ handler does even more processing and sends a "key code". It doesn't really matter much what is sent - the point is that something is sent.

The receiver might be a "bottom half interrupt handler" (typical for monolithic kernels), or a device driver process (typical for micro-kernels), or could be something else. Whatever the receiver is, it probably does more processing and the results of this additional processing are sent to something else (which does a little more and sends something somewhere else, which does a little more and sends something somewhere else, and so on).

For example (for a micro-kernel), you could have a chain like this:
  • Actual IRQ handler might send an "IRQ occurred" message to a normal process (a PS/2 controller driver)
  • The PS/2 controller driver was blocked waiting for a message, so (now that it's received one) the scheduler unblocks it, and (sooner or later) gives it some CPU time. The PS/2 controller driver handles the message it received by reading the byte from hardware, telling the OS to send the "End of Interrupt" back to the interrupt controller, then sending a message to something else (the keyboard driver) containing the byte it got from hardware. After this the PS/2 controller driver might block again (waiting for its next message)
  • The PS/2 keyboard driver was blocked waiting for a message, so the scheduler unblocks it, and gives it some CPU time. The PS/2 keyboard driver handles the message by processing the byte (that the PS/2 controller driver got from hardware) and deciding what to do with it. After some processing the PS/2 keyboard driver might send some sort of "key-press packet" to the GUI; and then the keyboard driver might block again (waiting for its next message).
  • The GUI was blocked waiting for a message, so the scheduler unblocks it, and gives it some CPU time. The GUI figures out what it should do with the "key-press packet". If it was "alt+tab" then maybe the GUI sends a "your window lost focus" message to one process and a "your window now has focus" message to another process; followed by redrawing the screen (more messages to the video driver?). If it was a normal key-press, then maybe the GUI just forwards the message to the process whose window currently has focus. In any case; after handling the message the GUI might block again (start waiting for its next message).
Typically (for keyboard) at some point it starts going back in the reverse direction - e.g. the application might handle the key-press by doing some processing and sending an "update my video" message back to the GUI, where the GUI does more processing and sends an "update the screen" message to the video driver.

Of course for a monolithic kernel; the PS/2 controller's device driver might be a "tasklet" or a Deferred Procedure Call instead of a separate process, but there's still some sort of event or something, and there's still some sort of "waiting for an event" involved. The rest might be using pipes (e.g. stdin/stdout), or sockets, or some other form of communication; but there's still some sort of event or something and some sort of "waiting for event" happening.

Basically; the communication (whatever it is) is a fundamental part of the OS that's used by almost everything in some way; and (excluding CPU bound tasks) the communication is likely to have a very strong influence over "what happens when" (e.g. affecting scheduling far more than any timer ever will).


Cheers,

Brendan

Re: Interrupt handlers from an OS standpoint

Posted: Sun Nov 23, 2014 10:08 am
by SpyderTL
I think the key to your multi-tasking solution is that you can, technically, have more than one stack.

Since your only real "link" to the stack is the SP register (and maybe BP register), you can, technically, only have one stack at any given time, just like you can only have one "active" process at any given time (since the only link to the running program is the IP register). However, you can swap out the "active" stack, just like you can swap out the "active" program, by swapping the SP register to the next process's "saved" SP value.

And, you can actually swap both out at the same time, since your "saved" IP value is sitting on the top of your stack, so simply swapping out SP with a new stack and then calling IRET will pull the "saved" IP address from the new process.

So, the simplest task switching logic (that I can think of) would be to have your PIT timer interrupt handler PUSHA, then save the SP register to memory, then replace the SP register with the "saved" SP value from the next process, call POPA, then call IRET.
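
As a rough sketch (not a drop-in implementation), that logic could look something like the code below. It assumes every task shares one SS and one data segment, ignores the segment registers entirely, and uses made-up names for the saved-SP array and the current-task index:

Code: Select all

; Minimal round-robin switcher in the spirit described above (16-bit).
; New tasks' stacks must be pre-filled with a fake PUSHA frame plus an
; IRET frame (IP, CS, FLAGS) so this code can "resume" them the first time.
MAX_TASKS   equ 2
saved_sp    times MAX_TASKS dw 0    ; one saved SP per task
current     dw 0                    ; index of the task on the CPU

pit_handler:
        pusha                       ; save general registers on this task's stack
        mov     bx, [current]
        shl     bx, 1
        mov     [saved_sp + bx], sp ; remember where this task's stack ended up

        mov     ax, [current]       ; pick the next task, round robin
        inc     ax
        cmp     ax, MAX_TASKS
        jb      .next
        xor     ax, ax
.next:  mov     [current], ax
        mov     bx, ax
        shl     bx, 1
        mov     sp, [saved_sp + bx] ; adopt the next task's stack

        mov     al, 20h             ; End Of Interrupt to the master PIC
        out     20h, al
        popa                        ; restore the next task's registers
        iret                        ; pops its IP/CS/FLAGS and resumes it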

It's up to you how to "save" your SP values for each running process (array, list, collection, ring, etc. -- all of them will work). My first (and current) solution was to save the "next" process SP and the "last" process SP value on the top of each stack, which "works", technically, but it is very difficult to make sure that all of these pointers are updated properly, so I wouldn't recommend going this route, and I'll probably change it the next time I'm in that code.

Eventually, you will want to be able to do things like prioritize one process over another, and put one process to sleep for a specific amount of time, which will prevent it from getting any CPU time until the sleep time has elapsed. All of this will require additional data to be stored for each running process, so you might want to keep this in mind as you are designing your task switching code.

Unfortunately, we can't really make these decisions for you, but once you have decided on an approach, we can help you make it work. :)

As for creating and destroying processes, all you really need to do is add an entry to, or remove an entry from, your list of running processes. The timing doesn't really matter too much, at this point, because you are only running a single CPU. Once you start adding support for multi-core or multi-processor, then it will be a problem. :)

Good luck.

Re: Interrupt handlers from an OS standpoint

Posted: Sun Nov 23, 2014 11:52 pm
by blasthash
Brendan wrote:Hi,
blasthash wrote:Second of all, can anyone help me start to figure out how to implement such a thing, if it is indeed the correct way of doing things?
The very first thing you're going to need is some way for "things" to communicate with each other. This could be events, or messages, or pipes, or whatever you like (possibly including 6 completely different things intended for almost exactly the same purpose).

For most OSs, the interrupt handler sends something to something. What it sends might be a notification that an IRQ occurred (where the interrupt handler itself doesn't do any other processing); or it might be a scan-code (where the interrupt handler fetches a byte from hardware and sends that); or maybe the IRQ handler does even more processing and sends a "key code". It doesn't really matter much what is sent - the point is that something is sent.

The receiver might be a "bottom half interrupt handler" (typical for monolithic kernels), or a device driver process (typical for micro-kernels), or could be something else. Whatever the receiver is, it probably does more processing and the results of this additional processing are sent to something else (which does a little more and sends something somewhere else, which does a little more and sends something somewhere else, and so on).

For example (for a micro-kernel), you could have a chain like this:
  • Actual IRQ handler might send an "IRQ occurred" message to a normal process (a PS/2 controller driver)
  • The PS/2 controller driver was blocked waiting for a message, so (now that it's received one) the scheduler unblocks it, and (sooner or later) gives it some CPU time. The PS/2 controller driver handles the message it received by reading the byte from hardware, telling the OS to send the "End of Interrupt" back to the interrupt controller, then sending a message to something else (the keyboard driver) containing the byte it got from hardware. After this the PS/2 controller driver might block again (waiting for its next message)
  • The PS/2 keyboard driver was blocked waiting for a message, so the scheduler unblocks it, and gives it some CPU time. The PS/2 keyboard driver handles the message by processing the byte (that the PS/2 controller driver got from hardware) and deciding what to do with it. After some processing the PS/2 keyboard driver might send some sort of "key-press packet" to the GUI; and then the keyboard driver might block again (waiting for its next message).
  • The GUI was blocked waiting for a message, so the scheduler unblocks it, and gives it some CPU time. The GUI figures out what it should do with the "key-press packet". If it was "alt+tab" then maybe the GUI sends a "your window lost focus" message to one process and a "your window now has focus" message to another process; followed by redrawing the screen (more messages to the video driver?). If it was a normal key-press, then maybe the GUI just forwards the message to the process whose window currently has focus. In any case; after handling the message the GUI might block again (start waiting for its next message).
Typically (for keyboard) at some point it starts going back in the reverse direction - e.g. the application might handle the key-press by doing some processing and sending an "update my video" message back to the GUI, where the GUI does more processing and sends an "update the screen" message to the video driver.

Of course for a monolithic kernel; the PS/2 controller's device driver might be a "tasklet" or a Deferred Procedure Call instead of a separate process, but there's still some sort of event or something, and there's still some sort of "waiting for an event" involved. The rest might be using pipes (e.g. stdin/stdout), or sockets, or some other form of communication; but there's still some sort of event or something and some sort of "waiting for event" happening.

Basically; the communication (whatever it is) is a fundamental part of the OS that's used by almost everything in some way; and (excluding CPU bound tasks) the communication is likely to have a very strong influence over "what happens when" (e.g. affecting scheduling far more than any timer ever will).


Cheers,

Brendan
How would a bottom-half interrupt handler communicate with the process? If I decide to use pipes (stdin/out) to communicate between processes, wouldn't I need some sort of buffer between the sending process and the receiving process? Essentially one process dumps into the buffer and another reads out of it; this avoids the problem of requiring absolutely synchronous operation on both processes' parts.
SpyderTL wrote:I think the key to your multi-tasking solution is that you can, technically, have more than one stack.

Since your only real "link" to the stack is the SP register (and maybe BP register), you can, technically, only have one stack at any given time, just like you can only have one "active" process at any given time (since the only link to the running program is the IP register). However, you can swap out the "active" stack, just like you can swap out the "active" program, by swapping the SP register to the next process's "saved" SP value.

And, you can actually swap both out at the same time, since your "saved" IP value is sitting on the top of your stack, so simply swapping out SP with a new stack and then calling IRET will pull the "saved" IP address from the new process.

So, the simplest task switching logic (that I can think of) would be to have your PIT timer interrupt handler PUSHA, then save the SP register to memory, then replace the SP register with the "saved" SP value from the next process, call POPA, then call IRET.

It's up to you how to "save" your SP values for each running process (array, list, collection, ring, etc. -- all of them will work). My first (and current) solution was to save the "next" process SP and the "last" process SP value on the top of each stack, which "works", technically, but it is very difficult to make sure that all of these pointers are updated properly, so I wouldn't recommend going this route, and I'll probably change it the next time I'm in that code.

Eventually, you will want to be able to do things like prioritize one process over another, and put one process to sleep for a specific amount of time, which will prevent it from getting any CPU time until the sleep time has elapsed. All of this will require additional data to be stored for each running process, so you might want to keep this in mind as you are designing your task switching code.

Unfortunately, we can't really make these decisions for you, but once you have decided on an approach, we can help you make it work. :)

As for creating and destroying processes, all you really need to do is add an entry to, or remove an entry from, your list of running processes. The timing doesn't really matter too much, at this point, because you are only running a single CPU. Once you start adding support for multi-core or multi-processor, then it will be a problem. :)

Good luck.
Ah, I see now. You simply delete the entry for where that process's stack would be, or revise a table entry to mark that set of positions as free.

Can you give me a quick run-through on how task switching works on the hardware level? That is, what needs to be set up to allow that to occur, from the early x86 perspective ('286)? Now, I understand how it would be done in terms of design methodology, but for someone who is just getting his sea legs with respect to OS development, a lot of the wiki entries on descriptor tables are a bit hard to stomach.

I think I'll start writing again, if nothing but to just start experimenting with ideas. One of the reasons my previous attempt at writing an OS didn't fare too well is I still have a great deal of my youthful headstrong nature (byproduct of being 21 and writing operating systems, eh) so I bit off more than I could chew and tried to hone each part too much. Lost sight of the forest for the trees.

This time I think I'll go by my gut impressions more; even if the interrupt system I described before has flaws or doesn't quite work out right I'd much rather have a dumb system that I can make smarter over time than have a bunch of plans for a complex hotshot system that will never be coded.

Re: Interrupt handlers from an OS standpoint

Posted: Mon Nov 24, 2014 3:01 am
by Brendan
Hi,
blasthash wrote:How would a bottom-half interrupt handler communicate with the process? If I decide to use pipes (stdin/out) to communicate between processes, wouldn't I need some sort of buffer between the sending process and the receiving process? Essentially one process dumps into the buffer and another reads out of it; this avoids the problem of requiring absolutely synchronous operation on both processes' parts.
Typically there's a buffer involved in kernel-space. The code (in the kernel) used to send the data adds the data to a buffer and also tells the scheduler to unblock the receiver (if it was blocked/waiting for data). Sooner or later (or immediately), the process that received it gets the data from the buffer.

Note: Pipes mostly suck. It's literally a stream of bytes; so (in general) if you send a "multiple byte thing" it's difficult for the receiver to determine where that "multiple byte thing" begins or ends within the stream. Don't forget that for something like keyboard you really want to send a packet containing some sort of key identifier, a pressed/released/repeated flag (and maybe other flags, like whether control or alt keys were pressed at the time), and a Unicode code point. It's not a stream of bytes at all.
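
For example (everything here is invented just to show the shape of it), a fixed-size key event packet might be laid out like:

Code: Select all

; One possible fixed-size "key event" packet (8 bytes; the fields and
; flag bits are illustrative, not a standard layout).
struc keyevent
    .keycode:   resw 1              ; OS-defined key identifier
    .flags:     resb 1              ; bit 0 = released, bit 1 = repeat,
                                    ; bit 2 = ctrl, bit 3 = alt, bit 4 = shift
    .reserved:  resb 1              ; padding to keep the packet word aligned
    .codepoint: resd 1              ; Unicode code point, 0 if none
endstruc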


Cheers,

Brendan

Re: Interrupt handlers from an OS standpoint

Posted: Mon Nov 24, 2014 8:30 pm
by SpyderTL
Can you give me a quick run-through on how task switching works on the hardware level? That is, what needs to be set up to allow that to occur, from the early x86 perspective ('286)?
The 386 does have support for hardware tasks, but it's slower, and more complicated than just swapping SP to a new stack. For this reason, it is rarely used. I would recommend ignoring it. I'm not sure if the 286 has hardware task support or not. But I wouldn't use it if it did.

EDIT: 4 stars!

EDIT: You can read about hardware task switching here: Context Switching

Re: Interrupt handlers from an OS standpoint

Posted: Fri Nov 28, 2014 4:18 am
by blasthash
SpyderTL wrote:
Can you give me a quick run-through on how task switching works on the hardware level? That is, what needs to be set up to allow that to occur, from the early x86 perspective ('286)?
The 386 does have support for hardware tasks, but it's slower, and more complicated than just swapping SP to a new stack. For this reason, it is rarely used. I would recommend ignoring it. I'm not sure if the 286 has hardware task support or not. But I wouldn't use it if it did.

EDIT: 4 stars!

EDIT: You can read about hardware task switching here: Context Switching
Sorry for the absence for a little bit; took a few days to start writing a CPUID code segment and read up more on assembly. Figured giving myself some breathing space would help with the whole task-switching/interrupt dilemma.

I decided to scrap the idea of building a '286-based system in favor of the '386, for a few reasons.

1) 32-bit computing. With the same segment + offset scheme in the 286 it sort of felt like protected mode with all of the disadvantages, but none of the advantages.
2) Floating-point issues. Basically, there is a discrepancy between the 287 featured on the IBM PC/AT schematics that I was attempting to clone and the 287 documents, and after talking to some people it came down to the fact that Intel revised the dies after the 287 was issued and ended up burning some pins; not knowing whether that hardware would rely on me having an old-revision 287 versus a new one, I decided I might as well move one step more modern.

A quick perusal of that wiki page seems to suggest that the HW-mediated task switching incurs a heavy penalty. What I may very well do is brew my own method and then consult the official manuals later. It may work better for me to come up with my own way of doing it and then tailor it to the exact hardware mechanisms later, rather than beating my head over how I would implement the hardware system from scratch.

So here's what I'm thinking so far, and forgive me if some of the details are sort of fuzzy; this will be a "pseudo-codish" representation.

In the process switching segment/process/program I build into the TSS and define in memory a big table (large enough to hold GPRs + segment regs + process ID for say, 512 processes). At the top of that table I have a header that describes how many processes are in play and other configuration information.

When the PIT IRQ hits, the processor saves the current process's data in its slot in the table, increments the process index (process ID * slot size = base of the new process's table slot), and loads that process's data.

When a program is added, its state is added to the table in a stack-like fashion, and the process number value in that jimmy'd header is incremented. When a program is removed, the process switcher pauses all processes and realigns the stack. Granted, that's a massive performance penalty, but it's simplistic.
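
Roughly, the layout I picture is something like this sketch (every name and field size here is just something I'm inventing for illustration):

Code: Select all

; Sketch of the table described above: a small header, then MAX_PROCS
; fixed-size slots, one per process.  Names and sizes are invented.
MAX_PROCS       equ 512

struc proc_slot
    .pid        resd 1
    .eip        resd 1
    .eflags     resd 1
    .gpr        resd 8              ; EAX..EDI, e.g. in PUSHAD order
    .segs       resw 6              ; ES, CS, SS, DS, FS, GS
endstruc                            ; NASM defines proc_slot_size for us

proc_hdr:
    .count      dw 0                ; processes currently in play
    .current    dw 0                ; index of the process on the CPU
proc_slots:
    times MAX_PROCS * proc_slot_size db 0

; On the PIT IRQ the switcher would use something like
;       slot = proc_slots + current * proc_slot_size
; to find where to save the outgoing state and load the incoming one.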

My gut impression about the above system is that it's a really f***ing dumb task switching system; but I want to hear you guys' thoughts on it for a couple reasons:

First, it's a dumb system but it's easy to implement and won't require months of headache to work by itself once I have the core systems in my OS done.
Second, it's simplistic, but it's a system which is more than open to optimization by way of adding hardware-specific mechanisms, and hell, just learning more. But I think it will be both a fun thing to implement and something that I can intuitively fathom (I understand how CPUs operate in terms of core functionality, but how the systems for protection and memory management work leaves me clueless).

But as I said above, I'd be interested to see what you think about it.

Re: Interrupt handlers from an OS standpoint

Posted: Fri Nov 28, 2014 6:31 pm
by SpyderTL
That should work. As I said before, the details of "how" you store the process information aren't terribly important, as long as the task-switching code understands it.
When a program is removed, the process switcher pauses all processes and realigns the stack. Granted, that's a massive performance penalty, but it's simplistic.
Using a linked list instead of a single table (or array) would solve this problem, but again, it's up to you how to store the process information. Also, you don't have to "pause" processes to remove them. If you are in your task-switching code (or task-removing code), your process has already been "paused" by the CPU. :)
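
For what it's worth, the linked-list version could be as simple as a record like this sketch (names invented, 32-bit since you're on a 386 now), where removing a process is just pointing its predecessor past it:

Code: Select all

; Sketch of a linked process record; removal is just unlinking, so no
; table compaction is ever needed.  Names and fields are invented.
struc proc_rec
    .next       resd 1              ; pointer to the next record in the ring
    .saved_esp  resd 1              ; ESP value saved at the last switch
    .pid        resd 1
endstruc

; Unlinking "victim" (EBX) whose predecessor is in ESI:
;       mov     eax, [ebx + proc_rec.next]
;       mov     [esi + proc_rec.next], eax      ; predecessor now skips it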

Re: Interrupt handlers from an OS standpoint

Posted: Fri Nov 28, 2014 11:59 pm
by Brendan
Hi,
blasthash wrote:I decided to scrap the idea of building a '286-based system in favor of the '386, for a few reasons.

1) 32-bit computing. With the same segment + offset scheme in the 286 it sort of felt like protected mode with all of the disadvantages, but none of the advantages.
2) Floating-point issues. Basically, there is a discrepancy between the 287 featured on the IBM PC/AT schematics that I was attempting to clone and the 287 documents, and after talking to some people it came down to the fact that Intel revised the dies after the 287 was issued and ended up burning some pins; not knowing whether that hardware would rely on me having an old-revision 287 versus a new one, I decided I might as well move one step more modern.
That's a decent start. However:

a) It's possible to have an 80386 with an older 80287 FPU, so if you support 80386 you still need to worry about the same floating point issues.

b) For 80386 the FPU's error signal has to get routed via the PIC. This is racy and slow. For 80486 and later, FPU errors can be treated as exceptions without the slow/racy legacy mess.

c) 80386 was effectively "no caches" (and had no cache management instructions). 80486 added several cache management instructions. One of these is INVLPG (to invalidate TLB entries). For decent performance on 80486 and later systems you mostly must use INVLPG (to avoid invalidating all TLBs when you only need to invalidate one).

d) For 80386, a lot of systems still used the older "expanded memory" scheme (that was originally designed to work around the limitations of real mode). This means that for a lot of these systems it's a massive pain in the neck trying to access RAM (compared to "extended memory" where you can just access it without silly bank switching nonsense).

For all of these reasons, I'd forget about 80386 - there's just too many differences between 80386 and 80486 to make it worth supporting 80386.

Note: the differences between Pentium and 80486 are mostly minor - an OS can ignore them without affecting much.


Cheers,

Brendan

Re: Interrupt handlers from an OS standpoint

Posted: Wed Dec 03, 2014 2:15 am
by blasthash
Brendan wrote:Hi,
blasthash wrote:I decided to scrap the idea of building a '286-based system in favor of the '386, for a few reasons.

1) 32-bit computing. With the same segment + offset scheme in the 286 it sort of felt like protected mode with all of the disadvantages, but none of the advantages.
2) Floating-point issues. Basically, there is a discrepancy between the 287 featured on the IBM PC/AT schematics that I was attempting to clone and the 287 documents, and after talking to some people it came down to the fact that Intel revised the dies after the 287 was issued and ended up burning some pins; not knowing whether that hardware would rely on me having an old-revision 287 versus a new one, I decided I might as well move one step more modern.
That's a decent start. However:

a) It's possible to have an 80386 with an older 80287 FPU, so if you support 80386 you still need to worry about the same floating point issues.

b) For 80386 the FPU's error signal has to get routed via the PIC. This is racy and slow. For 80486 and later, FPU errors can be treated as exceptions without the slow/racy legacy mess.

c) 80386 was effectively "no caches" (and had no cache management instructions). 80486 added several cache management instructions. One of these is INVLPG (to invalidate TLB entries). For decent performance on 80486 and later systems you mostly must use INVLPG (to avoid invalidating all TLBs when you only need to invalidate one).

d) For 80386, a lot of systems still used the older "expanded memory" scheme (that was originally designed to work around the limitations of real mode). This means that for a lot of these systems it's a massive pain in the neck trying to access RAM (compared to "extended memory" where you can just access it without silly bank switching nonsense).

For all of these reasons, I'd forget about 80386 - there's just too many differences between 80386 and 80486 to make it worth supporting 80386.

Note: the differences between Pentium and 80486 are mostly minor - an OS can ignore them without affecting much.


Cheers,

Brendan
Took a few days off the hardware front to figure things out.

Abandoning the 286/386 base of this project of mine would completely change the game; the reason I originally wanted to use the 80286, and then the 386, is that 286-based systems (and '386 systems, to an extent) are very-low-integration systems; bare CMOS logic still found a use on these boards. This meant that I could understand how the hardware works natively and how communication works. The reason I then figured I'd move up to a 386 is that it is the last x86 CPU to truly support ISA hardware (the 486 had significant problems with bus instabilities at the increased speeds).

The 486 and later chips have very high integration on their boards and that's where things stop being intuitive. Given that I don't have any 486 or early Pentium systems lying around, I might as well go ahead and start developing on an old XP-era DDR motherboard (x86-32 Athlon XP, just over 2.0 GHz). The problem with this is that now I have ACPI, PCI and USB functionality to worry about.

I started working on the ACPI tables, and I have search functions and utilities for returning the pointer, reading the version, and reading the write table, but now I'm utterly lost on how to write the AML interpreter. I know I could simply port in ACPICA, but that would be hell in a handbasket because it requires that I integrate the OS Services Layer, and right now there isn't an operating system to work with it. It'd be one thing if I had the low-level interfaces installed and could simply wire in the ACPICA OSL, but I don't understand many of the operators or how they're supposed to interoperate. I understand how to find the tables and parse their AML to get things like device port and configuration addresses that are then fed to the drivers for initialization/servicing, but the whole "context" component that is very visible, and the threading and what-not, are completely beyond me in my beginner state.

Re: Interrupt handlers from an OS standpoint

Posted: Wed Dec 03, 2014 7:51 am
by Brendan
Hi,
blasthash wrote:The 486 and later chips have very high integration on their boards and that's where things stop being intuitive. Given that I don't have any 486 or early Pentium systems lying around, I might as well go ahead and start developing on an old XP-era DDR motherboard (x86-32 Athlon XP, just over 2.0 GHz). The problem with this is that now I have ACPI, PCI and USB functionality to worry about.
I don't really see much difference between an OS that doesn't support things like USB and ACPI (that was designed for newer machines) and an OS that doesn't support things like USB and ACPI (that was designed for older machines).

For ACPI, it is an ugly mess (from an OS developer's perspective). However; you can ignore it completely and the system will be just like an older computer. For example, when the user presses the power button, your OS won't be given a chance to make sure everything is in a sane state (and avoid things like file system corruption or lost data) - just like on old computers, the power just unexpectedly disappears.

PCI is probably easier to support than ISA (especially for things like figuring out which devices are present and which IO-ports, IRQs and memory areas they use); and if you want to avoid complexity I'd seriously consider not supporting ISA.
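
To give a sense of why discovery is straightforward: with the legacy "configuration mechanism #1" ports, reading a device's vendor/device ID is only a couple of port accesses. A sketch (bus 0 and function 0 only, no error handling, invented label):

Code: Select all

; Sketch: PCI configuration mechanism #1 read of the vendor/device ID
; dword for bus 0, device number in BL, function 0 (32-bit code).
pci_read_id:
        movzx   eax, bl
        shl     eax, 11             ; device number goes in bits 15..11
        or      eax, 80000000h      ; enable bit; bus 0, function 0, register 0
        mov     dx, 0CF8h
        out     dx, eax             ; select the config address
        mov     dx, 0CFCh
        in      eax, dx             ; low word = vendor ID, high word = device ID
        ret                         ; EAX = FFFFFFFFh means no device present
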
blasthash wrote:I started working on the ACPI tables, and I have search functions and utilities for returning the pointer, reading the version, and reading the write table, but now I'm utterly lost on how to write the AML interpreter. I know I could simply port in ACPICA, but that would be hell in a handbasket because it requires that I integrate the OS Services Layer, and right now there isn't an operating system to work with it. It'd be one thing if I had the low-level interfaces installed and could simply wire in the ACPICA OSL, but I don't understand many of the operators or how they're supposed to interoperate. I understand how to find the tables and parse their AML to get things like device port and configuration addresses that are then fed to the drivers for initialization/servicing, but the whole "context" component that is very visible, and the threading and what-not, are completely beyond me in my beginner state.
I'd (mostly) forget about ACPI's AML until after the OS works.


Cheers,

Brendan