
Re: Multitasking in real mode?

Posted: Wed Mar 27, 2013 4:44 pm
by bluemoon
Prochamber wrote:If I copy the kernel segment and BIOS segment to another part of the memory and just ...
No - the BIOS may still access the BIOS Data Area directly.

I'm sure it's possible if you insist on doing it by any means necessary, but that requires huge effort, hacks and testing - just to make sure these relatively non-standard tricks work on other computers. The complexity may then go beyond that of writing a simple protected mode OS from scratch :shock:

Re: Multitasking in real mode?

Posted: Wed Mar 27, 2013 4:56 pm
by Brendan
Hi,
Prochamber wrote:
Brendan wrote:It's impossible to understand my points but not agree with them.
Wow. How arrogant are you? I see you chose to ignore my request to stop ranting about BIOS.
Oh, I'm quite arrogant; mostly because I wasted time on a real mode OS when I first started and have spent 15 years watching other people make the same mistake. This isn't important though. The important thing is that you are refusing to refute my remarks because deep down you know I'm right.
Prochamber wrote:Here's another: Huge Unreal Mode.
As I understand it, Huge Unreal Mode allows you to extend the code segment into the 32-bit address space.
Huge Unreal Mode gets rid of the code segment limit; but the code segment must still start in the bottom 1 MiB. This is bizarre at best and only helps if you want to modify your applications to take advantage of "larger than 64 KiB" code segments.
Prochamber wrote:If I copy the kernel segment and BIOS segment to another part of the memory and just change the segments when the task needs to execute, it would mean that each task will be running a separate copy of the kernel and BIOS, so they won't get corrupted.
The BIOS's code is in ROM and so it can't be corrupted to begin with (actually it can be - nothing prevents a real mode application from messing with the chipset, but that's a different matter). The BIOS Data Area could be copied somewhere else and then restored before the BIOS accesses it (which would protect the BIOS Data Area from being trashed), but huge unreal mode doesn't help this. Having multiple copies of the BIOS Data Area will break the BIOS (prevents the BIOS from correctly tracking the current state of various pieces of hardware).

Also don't forget that there's an EBDA (Extended BIOS Data Area) that is just as trashable as the BIOS Data Area, and that the firmware's SMM handler may rely on data stored in the EBDA (and the CPU can enter SMM at any time, even if IRQs are disabled).
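For illustration, the save half of such a scheme is only a few instructions - a minimal, untested sketch, where SAVE_SEG is an assumed free scratch segment (the restore routine is the same with source and destination swapped):
Code:
; Untested sketch: copy the IVT (0000h-03FFh) and BDA (0400h-04FFh)
; to SAVE_SEG:0000.
save_bios_low_area:
    push ds
    push es
    xor ax, ax
    mov ds, ax            ; DS:SI = 0000:0000 (start of the IVT)
    xor si, si
    mov ax, SAVE_SEG      ; assumed scratch segment
    mov es, ax
    xor di, di
    mov cx, 500h / 2      ; 0x500 bytes = IVT + BDA, copied as words
    cld
    rep movsw
    pop es
    pop ds
    ret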
Prochamber wrote:I understand that this will cause my EIP to mess up when calling interrupts, but I could just write wrappers for my interrupts to call the original routines and preserve the return address.
Yes - to allow applications to take advantage of "larger than 64 KiB" code segments you'd need to intercept all BIOS IRQs and all BIOS functions.
Prochamber wrote:What do you think of my idea? Is it plausible?
It doesn't help you achieve any of your goals. You're clutching at straws.


Cheers,

Brendan

Re: Multitasking in real mode?

Posted: Wed Mar 27, 2013 5:43 pm
by DavidCooper
Realistically, it looks as if you're already very close to the practical limits of what real mode can offer, and to take things much further is going to require a multitude of uncomfortable contortions which are likely to be so restrictive that you'd be better off switching to protected or long mode to use the machine the way it's designed to be used. Trying to add security looks as if it's a step too far (unless you run code indirectly). No one's going to write a virus to attack your OS, and the apps can all be debugged properly to make sure they don't mess each other up (by initially running them indirectly to catch all the points where they misbehave). Multitasking can easily be done in real mode and it can maybe be done really powerfully too, but it looks as if you're asking for just a little too much by lusting after too many features of the processor which aren't there for real mode.

It looks as if any fix is going to require new apps to work with it, except for the case where you keep copying things up and down between low and extended memory. That looks like the place where your project should reach its natural end: you can try it out and see how much task switching it actually allows without taking up 100% of processor time just copying stuff around. If you're lucky, it may work reasonably well and allow you to run a dozen active processes at once. That will be a fine tribute to real mode, and you'll then be free to move on to a new project.

Re: Multitasking in real mode?

Posted: Thu Mar 28, 2013 1:03 am
by bubach
Sounds like all those feuds with RDOS have gotten to you Brendan; I detect a bitter tone in your posts. :lol:

I understand you feel the need to help people in what very well might be the only right direction for any serious OS, but on the other hand... Most people do this to learn about computers, not to accomplish anything like the next desktop OS. And for learning, you kind of need to make mistakes, including doing real mode.

Many of the people doing real mode OS's choose to do it just because it's outdated. It's simpler to learn from something old and so extremely well-known.

They might never get past the hello world stage before losing interest, and in that case real mode vs protected or long mode doesn't really matter. In the off chance that they do go on and actually want to extend or rewrite it as a serious modern OS, they'll need to redesign almost everything anyway, based on the new knowledge and ideas they had time to think about while making the real mode "mistake".

Bottom line. Since when do people ever listen to good advice? :wink:

Re: Multitasking in real mode?

Posted: Thu Mar 28, 2013 1:17 am
by Prochamber
@DavidCooper I don't care about security; I just want to isolate processes in different memory areas so that they don't mess with each other.

Also, I don't think my segment copying is going to drastically ruin performance. As I write this post, the operating system I'm using is copying data to the video buffer in volumes hundreds of times larger than I'd be copying in my own OS, and the CPU is reading hundreds of megabytes of memory every second, yet it still runs seamlessly.

For an example of this, in 1995 you could get a game that ran at 800x600 screen resolution at 20 FPS. That's 9,600,000 bytes of memory per second (at one byte per pixel) on an old CPU with old memory, while still running the game's complex systems. This is nearly eight times what I require.
Brendan wrote:Oh, I'm quite arrogant; mostly because I wasted time on a real mode OS when I first started and have spent 15 years watching other people make the same mistake. This isn't important though. The important thing is that you are refusing to refute my remarks because deep down you know I'm right.
Haha! Don't count on it.
Don't worry about me; I know what I'm doing, and I'll be able to do it even better with the help you've provided me, in this post at least.
Brendan wrote:The BIOS's code is in ROM and so it can't be corrupted to begin with (actually it can be - nothing prevents a real mode application from messing with the chipset, but that's a different matter). The BIOS Data Area could be copied somewhere else and then restored before the BIOS accesses it (which would protect the BIOS Data Area from being trashed), but huge unreal mode doesn't help this. Having multiple copies of the BIOS Data Area will break the BIOS (prevents the BIOS from correctly tracking the current state of various pieces of hardware).

Also don't forget that there's an EBDA (Extended BIOS Data Area) that is just as trashable as the BIOS Data Area, and that the firmware's SMM handler may rely on data stored in the EBDA (and the CPU can enter SMM at any time, even if IRQs are disabled).
Uh, yeah. I meant the BIOS Data Area and this scheme won't affect the EBDA.
I'll copy and restore:
- The interrupt vector table
- The BIOS data area
- Kernel Space
- Program Space

I've decided the system of copying data with Big (not huge) Unreal Mode will be the simplest idea to implement.
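For reference, entering big unreal mode should only take something like this (an untested sketch; it assumes A20 is already enabled and that the code runs in a segment whose base is 0, so the GDT's offset equals its linear address):
Code:
; Untested sketch: cache a 4 GiB limit into DS/ES ("big" unreal mode).
enable_unreal:
    cli
    push ds
    push es
    lgdt [gdt_desc]
    mov eax, cr0
    or al, 1              ; set PE - briefly enter protected mode
    mov cr0, eax
    jmp short .pm         ; flush the prefetch queue on old CPUs
.pm:
    mov bx, 08h           ; selector of the flat 4 GiB data descriptor
    mov ds, bx            ; loading it caches the 4 GiB limit
    mov es, bx
    and al, 0FEh          ; clear PE - back to real mode
    mov cr0, eax
    pop es                ; real-mode values; cached 4 GiB limits remain
    pop ds
    sti
    ret

gdt:      dq 0                                  ; null descriptor
          db 0FFh, 0FFh, 0, 0, 0, 92h, 0CFh, 0  ; flat 4 GiB data segment
gdt_desc: dw $ - gdt - 1                        ; GDT limit
          dd gdt                                ; GDT base (assumes segment base 0)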
You mentioned that there could be some problems with having many separate copies of this data, so I've looked at the BIOS data area specifications and resolved a few paradoxes.
VideoBIOS - Screen settings are the only thing that is saved in the BDA. Programs won't need to change these.
Disk Services - I'll make a wrapper to disable and re-enable multitasking (see the sketch below).
Serial Services - Unsure, maybe I'll lock it to one process.
Keyboard Services - This will actually work to my advantage, because each process will have its own queue and won't be able to 'steal' keypresses from others. I might write my own handler with a global API.
RTC Services - Copy the time to the next process upon exiting (if it is sane); this will work for PIT timer ticks as well.
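Something like this is what I have in mind for the disk wrapper (untested sketch; multitasking_enabled is a flag my scheduler would check, and bios_int13_vec is the original vector saved at boot - all names are placeholders):
Code:
; Untested sketch: INT 13h wrapper that stops task switches around the
; BIOS call.
int13_wrapper:
    mov byte [cs:multitasking_enabled], 0
    pushf
    call far [cs:bios_int13_vec]  ; run the real BIOS disk service
    mov byte [cs:multitasking_enabled], 1
    retf 2                        ; far return, keeping the BIOS's flags (CF)

multitasking_enabled: db 1
bios_int13_vec:       dw 0, 0     ; offset:segment saved at boot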

Can anyone see any other problems that I might face?

Re: Multitasking in real mode?

Posted: Thu Mar 28, 2013 1:29 am
by bluemoon
bubach wrote:Many of the people doing real mode OS's choose to do it just because it's outdated. It's simpler to learn from something old and so extremely well-known.
This is where I disagree. Real mode does not necessarily mean easier, nor does it have richer reference material - that does not hold true for anything more complex than a hello world.
On the other hand, there are plenty of reference materials and tutorials for all CPU modes (including long mode) on the internet.

Re: Multitasking in real mode?

Posted: Thu Mar 28, 2013 2:21 am
by rdos
I considered real mode obsolete already in 1988, so I never wrote a real-mode OS. In the 80s I tried multitasking in DOS, and it turned out to be a disaster, both in terms of too little memory and because of how DOS was written.

So, what I did instead was to design a protected mode OS that used paging to isolate multiple applications written for real mode (DOS). The applications run in V86 mode. Because the BDA was not page aligned, I also had to write an x86 instruction emulator in order to emulate a single BDA across multiple real mode processes. Support for various BIOS and DOS services was written as emulations in protected mode. That way, RDOS could move away from DOS, and doesn't have any of the limitations of real mode. At the current stage, both the BIOS and DOS emulations are obsolete, but V86 mode is still used by the VBE mode-switch code, which also includes the instruction emulator that is needed for older CPUs that don't have VME.

I don't think it's always a bad idea to start with real mode, but building an OS API around the limitations of real mode seems like a really bad idea. Already in version 1, the RDOS API was multi-mode, and could handle 16-bit and 32-bit protected mode targets as well as real mode targets. Support for real mode was done by mapping real mode segments to protected mode selectors, so the real mode API was much slower than the protected mode API, but it worked.

Edit: A minor correction. I did write a mini-kernel in the 90s that was statically linked to an embedded application for real mode. This mini-kernel was designed to be somewhat compatible with RDOS, and had preemptive multitasking. However, even in this configuration, these terminals ran out of memory many years ago, and thus we cannot add new functionality to them without removing something else.

Re: Multitasking in real mode?

Posted: Thu Mar 28, 2013 3:16 am
by rdos
Prochamber wrote:@DavidCooper I don't care about security; I just want to isolate processes in different memory areas so that they don't mess with each other.
Why don't you use paging and V86 mode instead? After all, that would solve the issue in a clean way with no copying.
Prochamber wrote:Uh, yeah. I meant the BIOS Data Area and this scheme won't affect the EBDA.
I'll copy and restore:
- The interrupt vector table
- The BIOS data area
- Kernel Space
- Program Space

I've decided the system of copying data with Big (not huge) Unreal Mode will be the simplest idea to implement.
You mentioned that there could be some problems with having many separate copies of this data, so I've looked at the BIOS data area specifications and resolved a few paradoxes.
VideoBIOS - Screen settings are the only thing that is saved in the BDA. Programs won't need to change these.
Disk Services - I'll make a wrapper to disable and re-enable multitasking.
Serial Services - Unsure, maybe I'll lock it to one process.
Keyboard Services - This will actually work to my advantage, because each process will have its own queue and won't be able to 'steal' keypresses from others. I might write my own handler with a global API.
RTC Services - Copy the time to the next process upon exiting (if it is sane); this will work for PIT timer ticks as well.

Can anyone see any other problems that I might face?
The problem with the BDA, in a multi-process environment, is that it contains some settings that are global, and some that are local. It is also not page-aligned, but instead sits within the first page, along with the IVT. That gives a range of consistency problems that I doubt could be solved effectively by any means other than paging: don't map the first page in each process, and instead emulate all instructions that access it. When you emulate the instructions, you can decide that one BDA setting is global and use a global copy, and that another is local and use a per-process copy. You could even emulate the contents dynamically, something that will be needed for the tick count and similar.
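As a very rough sketch of what that dispatch looks like (untested; protected mode with flat segments assumed, and both helper routines are hypothetical):
Code:
; Untested sketch: a page fault handler that treats faults on linear
; page 0 as BDA/IVT accesses to be emulated.
pf_handler:
    push eax
    mov eax, cr2              ; faulting linear address
    cmp eax, 1000h
    jae .ordinary_fault       ; outside page 0: not BDA/IVT emulation
    call emulate_low_access   ; hypothetical: decode the instruction and
                              ; use a global or per-process copy of the field
    pop eax
    add esp, 4                ; discard the CPU-pushed error code
    iretd
.ordinary_fault:
    pop eax
    jmp handle_page_fault     ; hypothetical: the normal #PF path
                              ; (error code still on the stack)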

Re: Multitasking in real mode?

Posted: Thu Mar 28, 2013 6:41 am
by Brendan
Hi,
Prochamber wrote:Also, I don't think my segment copying is going to drastically ruin performance. As I write this post, the operating system I'm using is copying data to the video buffer in volumes hundreds of times larger than I'd be copying in my own OS, and the CPU is reading hundreds of megabytes of memory every second, yet it still runs seamlessly.
Here I agree - the impact on performance caused by saving (up to) 640 KiB of RAM and then restoring (up to) 640 KiB of RAM during a task switch is likely to be negligible compared to the performance impact of "locking up" the CPU just so the BIOS can do nothing while waiting for hardware to do DMA/UDMA disk IO transfers.

Note that your video buffer example is probably not a good example. I don't know which OS you're currently using, but I'd still expect most of the data it needs is already in the video card's RAM (e.g. using hardware accelerated "video RAM to video RAM" bit blits to render windows), and anything that isn't already in the video card's RAM is being transferred via bus mastering without consuming CPU time.
Prochamber wrote:
Brendan wrote:The BIOS's code is in ROM and so it can't be corrupted to begin with (actually it can be - nothing prevents a real mode application from messing with the chipset, but that's a different matter). The BIOS Data Area could be copied somewhere else and then restored before the BIOS accesses it (which would protect the BIOS Data Area from being trashed), but huge unreal mode doesn't help this. Having multiple copies of the BIOS Data Area will break the BIOS (prevents the BIOS from correctly tracking the current state of various pieces of hardware).

Also don't forget that there's an EBDA (Extended BIOS Data Area) that is just as trashable as the BIOS Data Area, and that the firmware's SMM handler may rely on data stored in the EBDA (and the CPU can enter SMM at any time, even if IRQs are disabled).
Uh, yeah. I meant the BIOS Data Area and this scheme won't affect the EBDA.
I'll copy and restore:
- The interrupt vector table
- The BIOS data area
- Kernel Space
- Program Space

I've decided the system of copying data with Big (not huge) Unreal Mode will be the simplest idea to implement.
Ok, let's consider a simple scenario - an application is running and some IRQ occurs. The CPU uses your IVT to figure out how to transfer control to your interrupt handler; and your interrupt handler copies the BIOS' IVT and BDA from somewhere safe back to where it needs to be, then passes control to the BIOS's interrupt handler. When the BIOS interrupt handler has finished it returns to your interrupt handler which transfers the BIOS's IVT and BDA somewhere safe again. Then your interrupt handler returns to the application.

Now; the CPU has to be able to access your IVT and has to be able to execute your interrupt handler. If the CPU can access it then real mode software can access it. This means that instead of applications being able to trash the BIOS's IVT and BDA, the applications can just trash your IVT and your interrupt handlers. This doesn't sound like an improvement to me (roughly the same amount of "trash-able area" with nothing gained).
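In other words, every hooked vector ends up looking something like this (untested sketch; the three helper routines and the saved vector are assumed names, not an existing API):
Code:
; Untested sketch of the trampoline described above, for IRQ 0.
irq0_trampoline:
    pusha
    push ds
    push es
    call restore_bios_low_area   ; BIOS IVT + BDA back at 0000:0000
    pushf
    call far [cs:bios_irq0_vec]  ; fake an INT into the BIOS handler
    call save_bios_low_area      ; stash the updated BIOS IVT + BDA again
    call restore_task_low_area   ; put the current task's IVT back
    pop es
    pop ds
    popa
    iret

bios_irq0_vec: dw 0, 0           ; offset:segment captured at boot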
Prochamber wrote:You mentioned that there could be some problems with having many separate copies of this data, so I've looked at the BIOS data area specifications and resolved a few paradoxes.
VideoBIOS - Screen settings are the only thing that is saved in the BDA. Programs won't need to change these.
You've got 123 different applications all writing to video display memory at the same time and screwing each other up and you're worried about the BDA? Obviously you're going to need some sort of abstraction (e.g. where applications draw their window's contents in buffers in RAM and a GUI copies the application's buffers to display memory). Once you realise this you'll also realise that the only thing that should be touching the Video BIOS is the GUI, and there's no point having multiple copies of the BDA for video.
Prochamber wrote:Disk Services - I'll make a wrapper to disable and re-enable multitasking.
Yes. Also add "write-through" disk IO caches to avoid painfully slow BIOS functions where possible. Sadly, the BIOS won't tell you when removable media has been removed and/or inserted, so for removable media you won't be able to keep your caches synchronised and it'd only be possible to (reliably) cache non-removable disks (e.g. hard drives).
Prochamber wrote:Serial Services - Unsure, maybe I'll lock it to one process.
Don't bother - nobody has ever used the BIOS's serial services because they need constant polling to avoid data loss. Serial ports (if they exist) are so easy that your applications will just use the IO ports directly, just like DOS applications did. Of course DOS was single-tasking so (excluding TSRs) the chance of 2 pieces of software trying to use the serial ports at the same time was zero. To avoid conflicts (and data loss) you'd probably want to provide your own serial port drivers and make sure only one application can use a serial port at a time. However you'd have to wonder why applications need to talk to devices attached to serial ports themselves (e.g. why the OS doesn't have drivers for devices attached via serial).
Prochamber wrote:Keyboard Services - This will actually work to my advantage, because each process will have its own queue and won't be able to 'steal' keypresses from others. I might write my own handler with a global API.
It won't work to your advantage. The user pushes down the shift key and the BIOS sets a flag in the BDA, the OS does a task switch and changes the BDA, the user presses the 'a' key (while the shift key is still held down) and the BIOS checks the BDA and sees that the flag is clear so the application gets a lower-case 'a' instead of an upper-case 'A'. Then the user releases the shift key and the BIOS clears a copy of the flag that was already clear anyway. Then the OS does a task switch and copies the first BDA back, the user presses the 'b' key and the BIOS checks the flag and sees that it's set (because the wrong copy of the flag was cleared), so it gives the application an upper case 'B' instead of a lower case 'b'. See how this is completely borked?

Also note that the application that should receive the keypress probably won't be the application that's currently running.

Your best option here might be for your "IRQ 1" handler to pass control to the BIOS's IRQ 1 handler, then when the BIOS's IRQ handler returns to your IRQ handler you can check the keyboard's queue and transfer any keypress to the correct application's queue yourself. However, in a well designed system it'd be the GUI's job to determine which application should receive the keypress (unless it's a global keypress like "alt+tab" that the GUI itself handles); and the kernel should just send all keypresses to the GUI to sort out.
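A sketch of that IRQ 1 handler might look like this (untested; enqueue_for_focused_task is a hypothetical routine that the GUI or kernel would provide, and the BIOS handler is assumed to have sent the EOI before returning):
Code:
; Untested sketch: chain to the BIOS IRQ 1 handler (which translates the
; scancode), then move any resulting keystroke to the right queue.
irq1_handler:
    pusha
    pushf
    call far [cs:bios_irq1_vec]   ; let the BIOS handle the scancode
.drain:
    mov ah, 01h
    int 16h                       ; check the BIOS keyboard buffer
    jz .done                      ; ZF set = buffer empty
    mov ah, 00h
    int 16h                       ; AX = keystroke, removed from the buffer
    call enqueue_for_focused_task ; hypothetical: route to the right task
    jmp .drain
.done:
    popa
    iret

bios_irq1_vec: dw 0, 0            ; offset:segment captured at boot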
Prochamber wrote:RTC Services - Copy the time to the next process upon exiting (if it is sane); this will work for PIT timer ticks as well.
RTC services are mostly broken (hint: there's no way to really tell if the RTC has been set to local time or UTC, or to handle daylight savings time properly). Even when they are made to work they're slow (the BIOS uses lots of IO port reads/writes to get/set the time instead of caching anything in the BDA). Basically, it's better to only use the RTC during boot and to keep track of time yourself after that.
Prochamber wrote:Can anyone see any other problems that I might face?
What I can't see is a sane reason to bother having multiple copies of the BDA.


Cheers,

Brendan

Re: Multitasking in real mode?

Posted: Thu Mar 28, 2013 9:14 am
by Prochamber
Brendan wrote:Hi,
Prochamber wrote:Also, I don't think my segment copying is going to drastically ruin performance. As I write this post, the operating system I'm using is copying data to the video buffer in volumes hundreds of times larger than I'd be copying in my own OS, and the CPU is reading hundreds of megabytes of memory every second, yet it still runs seamlessly.
Here I agree - the impact on performance caused by saving (up to) 640 KiB of RAM and then restoring (up to) 640 KiB of RAM during a task switch is likely to be negligible compared to the performance impact of "locking up" the CPU just so the BIOS can do nothing while waiting for hardware to do DMA/UDMA disk IO transfers.

Note that your video buffer example is probably not a good example. I don't know which OS you're currently using, but I'd still expect most of the data it needs is already in the video card's RAM (e.g. using hardware accelerated "video RAM to video RAM" bit blits to render windows), and anything that isn't already in the video card's RAM is being transferred via bus mastering without consuming CPU time.
Huh, so modern operating systems use video memory intelligently. I guess I'm just thinking of a game I wrote with double buffering.
And yes, I will "stop the world" when a disk transfer is happening. Since I don't have page swapping or anything complex like that, disk transfers don't happen that often. If something is already happening, they can wait in line. I'm unsure what will happen if they are interrupted, but I'm guessing it won't be pretty.
Brendan wrote: Ok, let's consider a simple scenario - an application is running and some IRQ occurs. The CPU uses your IVT to figure out how to transfer control to your interrupt handler; and your interrupt handler copies the BIOS' IVT and BDA from somewhere safe back to where it needs to be, then passes control to the BIOS's interrupt handler. When the BIOS interrupt handler has finished it returns to your interrupt handler which transfers the BIOS's IVT and BDA somewhere safe again. Then your interrupt handler returns to the application.

Now; the CPU has to be able to access your IVT and has to be able to execute your interrupt handler. If the CPU can access it then real mode software can access it. This means that instead of applications being able to trash the BIOS's IVT and BDA, the applications can just trash your IVT and your interrupt handlers. This doesn't sound like an improvement to me (roughly the same amount of "trash-able area" with nothing gained).
It is possible for an application to trash the IVT; if it makes one bad call and locks itself up, this won't be so bad. The timer will still kick in and copy someone else's IVT into the area. If it trashes the whole table or the shared stack space, it will take the operating system down with it. I suppose it is a risk I'll have to live with. Since programs usually run within their own segment, the chance of this happening is low. They are more likely to just mess up code in their own process.
Brendan wrote:
Prochamber wrote:You mentioned that there could be some problems with having many separate copies of this data, so I've looked at the BIOS data area specifications and resolved a few paradoxes.
VideoBIOS - Screen settings are the only thing that is saved in the BDA. Programs won't need to change these.
You've got 123 different applications all writing to video display memory at the same time and screwing each other up and you're worried about the BDA? Obviously you're going to need some sort of abstraction (e.g. where applications draw their window's contents in buffers in RAM and a GUI copies the application's buffers to display memory). Once you realise this you'll also realise that the only thing that should be touching the Video BIOS is the GUI, and there's no point having multiple copies of the BDA for video.
Actually I only support fifteen tasks ATM, but I get your point. My plan was to have a wrapper for the VideoBIOS that would redirect simple screen functions to a graphics buffer for everything except the Graphics Process, which is basically just a task that copies all the graphics buffers to the screen buffer at their sizes and positions. It would still seem like the tasks are writing to the screen, but they would actually be writing to a buffer. If I can do this at a low level, all my current graphics API will be able to use it seamlessly.
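For a full-screen 320x200 task in mode 13h, the Graphics Process's copy is trivial (untested sketch; windowed tasks would need a row-by-row copy instead):
Code:
; Untested sketch: blit one task's 320x200 byte buffer (at DS:SI) to
; VGA mode 13h display memory at A000:0000.
blit_buffer:
    push es
    mov ax, 0A000h
    mov es, ax
    xor di, di
    mov cx, (320 * 200) / 2   ; 64000 bytes, copied as words
    cld
    rep movsw
    pop es
    ret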
Brendan wrote:
Prochamber wrote:Disk Services - I'll make a wrapper to disable and re-enable multitasking.
Yes. Also add "write-through" disk IO caches to avoid painfully slow BIOS functions where possible. Sadly, the BIOS won't tell you when removable media has been removed and/or inserted, so for removable media you won't be able to keep your caches synchronised and it'd only be possible to (reliably) cache non-removable disks (e.g. hard drives).
That sounds like a lot of work. I have no plans to add drivers for any other media. I think it would be better to just "stop the world".
Brendan wrote:
Prochamber wrote:Serial Services - Unsure, maybe I'll lock it to one process.
Don't bother - nobody has ever used the BIOS's serial services because they need constant polling to avoid data loss. Serial ports (if they exist) are so easy that your applications will just use the IO ports directly, just like DOS applications did. Of course DOS was single-tasking so (excluding TSRs) the chance of 2 pieces of software trying to use the serial ports at the same time was zero. To avoid conflicts (and data loss) you'd probably want to provide your own serial port drivers and make sure only one application can use a serial port at a time. However you'd have to wonder why applications need to talk to devices attached to serial ports themselves (e.g. why the OS doesn't have drivers for devices attached via serial).
Really? I've used them many times without data loss, but the latency of a time slice won't be good for them. They're used in very few of my operating system's applications, so I'll leave them for now.
Brendan wrote:
Prochamber wrote:Keyboard Services - This will actually work to my advantage, because each process will have its own queue and won't be able to 'steal' keypresses from others. I might write my own handler with a global API.
It won't work to your advantage. The user pushes down the shift key and the BIOS sets a flag in the BDA, the OS does a task switch and changes the BDA, the user presses the 'a' key (while the shift key is still held down) and the BIOS checks the BDA and sees that the flag is clear so the application gets a lower-case 'a' instead of an upper-case 'A'. Then the user releases the shift key and the BIOS clears a copy of the flag that was already clear anyway. Then the OS does a task switch and copies the first BDA back, the user presses the 'b' key and the BIOS checks the flag and sees that it's set (because the wrong copy of the flag was cleared), so it gives the application an upper case 'B' instead of a lower case 'b'. See how this is completely borked?

Also note that the application that should receive the keypress probably won't be the application that's currently running.

Your best option here might be for your "IRQ 1" handler to pass control to the BIOS's IRQ 1 handler, then when the BIOS's IRQ handler returns to your IRQ handler you can check the keyboard's queue and transfer any keypress to the correct application's queue yourself. However, in a well designed system it'd be the GUI's job to determine which application should receive the keypress (unless it's a global keypress like "alt+tab" that the GUI itself handles); and the kernel should just send all keypresses to the GUI to sort out.
Interesting, I didn't think of that. I plan to use the same idea as with the VideoBIOS and manipulate the output. I will make my own keyboard driver that directly handles keyboard interrupts and replaces the BIOS keyboard handler across all the IVTs. It will check for keystrokes and insert them into the BIOS buffer of whoever asks for them. I'll need to find some information regarding a standard keymap. Again, this will fit in seamlessly with the existing programs and API functions.
Brendan wrote:
Prochamber wrote:RTC Services - Copy the time to the next process upon exiting (if it is sane); this will work for PIT timer ticks as well.
RTC services are mostly broken (hint: there's no way to really tell if the RTC has been set to local time or UTC, or to handle daylight savings time properly). Even when they are made to work they're slow (the BIOS uses lots of IO port reads/writes to get/set the time instead of caching anything in the BDA). Basically, it's better to only use the RTC during boot and to keep track of time yourself after that.
Well, it seems perfectly fine in my experience. It may occasionally drift a few seconds out, but that doesn't really matter; it's used more to measure time delays for the API than as a super accurate clock.
Brendan wrote:
Prochamber wrote:Can anyone see any other problems that I might face?
What I can't see is a sane reason to bother having multiple copies of the BDA.
What else would you suggest?
I want to integrate multitasking in a simple way, so that tasks can go about their business without worrying about what everyone else is doing or about the mechanics of multitasking. I don't want a complex GUI with a hundred windows, and I don't want to change the whole structure of my operating system in doing it. I will probably not even have a desktop, just a main menu like I have now to start applications. I have to be realistic about what I can accomplish. If you can think of a better way to accomplish my goals, let me know.

Thanks for your help.

Re: Multitasking in real mode?

Posted: Thu Mar 28, 2013 9:46 am
by Griwes
It is possible for an application to trash the IVT; if it makes one bad call and locks itself up, this won't be so bad. The timer will still kick in and copy someone else's IVT into the area. If it trashes the whole table or the shared stack space, it will take the operating system down with it. I suppose it is a risk I'll have to live with. Since programs usually run within their own segment, the chance of this happening is low. They are more likely to just mess up code in their own process.
Tip: how does the CPU know what to call when a timer interrupt happens?
What else would you suggest?
You should just move to any other mode that was designed with protection and multitasking in mind.

Re: Multitasking in real mode?

Posted: Thu Mar 28, 2013 4:09 pm
by Brendan
Hi,
Prochamber wrote:Huh, so modern operating systems use video memory intelligently. I guess I'm just thinking of a game I wrote with double buffering.
And yes, I will "stop the world" when a disk transfer is happening. Since I don't have page swapping or anything complex like that, disk transfers don't happen that often. If something is already happening, they can wait in line. I'm unsure what will happen if they are interrupted, but I'm guessing it won't be pretty.
The only reason disk transfers won't happen often is that applications running on the OS won't be able to do anything interesting. Imagine an FTP or HTTP server running, while a C++ compiler is trying to build the boost library, while a text editor is doing spell checking and "auto-save" backups, while the user plays Doom.
Prochamber wrote:It is possible for an application to trash the IVT; if it makes one bad call and locks itself up, this won't be so bad. The timer will still kick in and copy someone else's IVT into the area. If it trashes the whole table or the shared stack space, it will take the operating system down with it. I suppose it is a risk I'll have to live with. Since programs usually run within their own segment, the chance of this happening is low. They are more likely to just mess up code in their own process.
Applications will only be allowed to have one tiny little 64 KiB segment? If you're very lucky, 64 KiB might be enough for the application's keyboard buffer (but "cut & paste" will probably still cause keyboard buffer overflows). For 800 * 600 with 8 bits per pixel (which is pathetic compared to modern resolutions) the data from a boring screen shot would be more than 7 times larger than an "application" (with no code and no stack) can handle. Even DOS was less retarded than that and let applications use about 600 KiB of RAM (and DOS was so retarded that most applications used "DOS extenders" to run in protected mode just so they could access more RAM).
Prochamber wrote:
Brendan wrote:Yes. Also add "write-through" disk IO caches to avoid painfully slow BIOS functions where possible. Sadly, the BIOS won't tell you when removable media has been removed and/or inserted, so for removable media you won't be able to keep your caches synchronised and it'd only be possible to (reliably) cache non-removable disks (e.g. hard drives).
That sounds like a lot of work. I have no plans to add drivers for any other media. I think it would be better to just "stop the world".
A "write-through" disk cache isn't very much work at all. When an application reads from a (non-removable) disk you check if the data is in your disk cache; if the data isn't in your disk cache you use the BIOS to load the data from disk into your cache; and then you give the application data from your disk cache. The only thing that's slightly tricky is the cache eviction code (e.g. if the cache is full but you need space to store more data, find the data that hasn't been used for the longest amount of time and recycle it). There's no need to write any device drivers to implement the disk cache (unless you want to make it work for removable media rather than just non-removable media).

For a good OS the disk cache should be more intelligent; and do things like detect when a process is reading sequential sectors and pre-fetch data from disk in the background (so that data is in RAM before a process actually asks for it, and time spent waiting for disk IO is reduced even more). Other common tricks are to buffer/postpone writes, and re-order disk accesses to minimise seek times, and to have some sort of IO priority scheme (so important things that the user is waiting for can be done before unimportant things that the user doesn't care about).

Of course if you write your own disk drivers you could ask the hardware to load the data anywhere you want and let you know when it's finished; which would prevent the "stop the world" problem and avoid the need to load the data into the bottom 640 KiB and then copy it elsewhere, and would allow you to implement an IO priority scheme that lets you prefetch things in the background (without hurting performance for more important things), and would let you cache data for removable media. This would all make a massive difference to performance (especially when you've got many tasks trying to do disk IO at the same time); and given that even a simple/retarded OS will take years to write the extra time it'd take to write (e.g.) a native SATA driver would be minor.
Prochamber wrote:
Brendan wrote:Don't bother - nobody has ever used the BIOS's serial services because they need constant polling to avoid data loss. Serial ports (if they exist) are so easy that your applications will just use the IO ports directly, just like DOS applications did. Of course DOS was single-tasking so (excluding TSRs) the chance of 2 pieces of software trying to use the serial ports at the same time was zero. To avoid conflicts (and data loss) you'd probably want to provide your own serial port drivers and make sure only one application can use a serial port at a time. However you'd have to wonder why applications need to talk to devices attached to serial ports themselves (e.g. why the OS doesn't have drivers for devices attached via serial).
Really? I've used them many times without data loss, but the latency of a time slice won't be good for them. They're used in very few of my operating system's applications, so I'll leave them for now.
The maths is easy - at a baud rate of 115200 bits per second setup for "no parity, 1 start bit, 8 data bits, 1 stop bit" you get 11520 bytes per second. 11520 bytes per second means that you need to poll the serial port every 86 microseconds (or faster) to avoid losing data (unless you use the serial port's IRQ to avoid polling and end up with 11520 IRQs per second, but the BIOS is too stupid for that).

I think the reason you haven't had problems is that your code doesn't do anything interesting with the data it receives (e.g. imagine trying to write the data you're receiving to disk using the BIOS disk services), and that the BIOS is too lame to bother supporting a baud rate of 115200 bits per second (even though all serial ports support it and it's the most common speed for things like transferring data between computers over serial).
Prochamber wrote:
Brendan wrote:RTC services are mostly broken (hint: there's no way to really tell if the RTC has been set to local time or UTC, or to handle daylight savings time properly). Even when they are made to work they're slow (the BIOS uses lots of IO port reads/writes to get/set the time instead of caching anything in the BDA). Basically, it's better to only use the RTC during boot and to keep track of time yourself after that.
Well, it seems perfectly fine in my experience. It may occasionally drift a few seconds out, but that doesn't really matter; it's used more to measure time delays for the API than as a super accurate clock.
It's extremely unlikely that your definition of "perfectly fine" is the same as mine. In general, for timing there's 3 things to consider - precision (the shortest length of time you can measure), accuracy (how correct a measurement is) and overhead (how long it takes to read the time). Modern OSs are using the CPU's Time Stamp Counter to get nano-second precision with about 25 nanoseconds of overhead, and then using things like NTP in the background and very fine drift adjustment to get millisecond accuracy. The BIOS' RTC code gets you one second precision, lots of overhead (something like 20 microseconds each time you read) and about 1 second of drift per day. The BIOS's PIT code is better - it gets you about 55 ms precision and less overhead than RTC (but the same bad drift).

Note that you could increase the frequency of the PIT. For example, instead of running the PIT chip at 18.2 Hz and running the BIOS's IRQ 0 handler every time IRQ 0 occurs; you could set the PIT chip at 291.2 Hz and run the BIOS's IRQ 0 handler every sixteenth time IRQ 0 occurs. This would allow you to do "current_time = current_time +1" in your own IRQ handler and end up with about 3.5 ms precision (with 16 times as much overhead and the same bad drift). Of course this is easier in protected mode - you can set the PIT to anything you like without caring about screwing up the BIOS's timing.
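Concretely, something like this (untested sketch; bios_irq0_vec is the BIOS's original IRQ 0 vector, assumed to have been saved before repointing the IVT entry at irq0_handler):
Code:
; Untested sketch of the 16x PIT trick described above.
init_pit:
    mov al, 36h              ; channel 0, lobyte/hibyte, square wave
    out 43h, al
    mov ax, 65536 / 16       ; divisor 4096 -> ~291.3 Hz instead of ~18.2 Hz
    out 40h, al
    mov al, ah
    out 40h, al
    ret

irq0_handler:
    inc word [cs:tick_count]     ; ~3.4 ms per tick for your own timing
    test word [cs:tick_count], 0Fh
    jz .chain                    ; every 16th tick, run the BIOS handler
    push ax
    mov al, 20h
    out 20h, al                  ; otherwise send the EOI ourselves
    pop ax
    iret
.chain:
    jmp far [cs:bios_irq0_vec]   ; BIOS updates its tick count and EOIs

tick_count:    dw 0
bios_irq0_vec: dw 0, 0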
Prochamber wrote:
Brendan wrote:
Prochamber wrote:Can anyone see any other problems that I might face?
What I can't see is a sane reason to bother having multiple copies of the BDA.
What else would you suggest?
I'd suggest that you spend a few weeks finding out how "modern" operating systems actually work, and doing some research into how ancient (almost 30 years old) CPU features (like protected mode, paging, etc) can be used.
Prochamber wrote:I want to integrate multitasking in a simple way, so that tasks can go about their business without worrying about what everyone else is doing or about the mechanics of multitasking. I don't want a complex GUI with a hundred windows, and I don't want to change the whole structure of my operating system in doing it. I will probably not even have a desktop, just a main menu like I have now to start applications. I have to be realistic about what I can accomplish. If you can think of a better way to accomplish my goals, let me know.
For those specific goals; the best way to accomplish them is to give up and avoid wasting several years "polishing a turd". :roll:


Cheers,

Brendan

Re: Multitasking in real mode?

Posted: Thu Mar 28, 2013 6:35 pm
by Prochamber
Brendan wrote:The only reason disk transfers won't happen often is that applications running on the OS won't be able to do anything interesting. Imagine an FTP or HTTP server running, while a C++ compiler is trying to build the boost library, while a text editor is doing spell checking and "auto-save" backups, while the user plays Doom.
You seem to have forgotten that I'm designing a simple hobby operating system. I'm not going to have Doom clones or web servers or C++ compilers. If there is an application that really needs it I can implement a disk cache at a later date. For now I just want to get everything off the ground.
Brendan wrote:Applications will only be allowed to have one tiny little 64 KiB segment? If you're very lucky, 64 KiB might be enough for the application's keyboard buffer (but "cut & paste" will probably still cause keyboard buffer overflows). For 800 * 600 with 8 bits per pixel (which is pathetic compared to modern resolutions) the data from a boring screen shot would be more than 7 times larger than an "application" (with no code and no stack) can handle. Even DOS was less retarded than that and let applications use about 600 KiB of RAM (and DOS was so retarded that most applications used "DOS extenders" to run in protected mode just so they could access more RAM).
Sixty-four kilobytes for a keyboard buffer? That's incredibly wasteful. The size will be more like sixty-four bytes.
When I want to create a copy buffer I will use a system call to put it into a memory handle that is remembered by the kernel, i.e. os_copybuffer_write to insert data and os_copybuffer_read to read it. I will not insert it into the keyboard buffer. I was planning to use my current standard graphics mode of 320x200 (= 64,000 bytes), which will be easy to integrate with my existing graphics API. It's enough for all the standard interfaces I want to create.
Brendan wrote:A "write-through" disk cache isn't very much work at all. When an application reads from a (non-removable) disk you check if the data is in your disk cache; if the data isn't in your disk cache you use the BIOS to load the data from disk into your cache; and then you give the application data from your disk cache. The only thing that's slightly tricky is the cache eviction code (e.g. if the cache is full but you need space to store more data, find the data that hasn't been used for the longest amount of time and recycle it). There's no need to write any device drivers to implement the disk cache (unless you want to make it work for removable media rather than just non-removable media).

For a good OS the disk cache should be more intelligent; and do things like detect when a process is reading sequential sectors and pre-fetch data from disk in the background (so that data is in RAM before a process actually asks for it, and time spent waiting for disk IO is reduced even more). Other common tricks are to buffer/postpone writes, and re-order disk accesses to minimise seek times, and to have some sort of IO priority scheme (so important things that the user is waiting for can be done before unimportant things that the user doesn't care about).

Of course if you write your own disk drivers you could ask the hardware to load the data anywhere you want and let you know when it's finished; which would prevent the "stop the world" problem and avoid the need to load the data into the bottom 640 KiB and then copy it elsewhere, and would allow you to implement an IO priority scheme that lets you prefetch things in the background (without hurting performance for more important things), and would let you cache data for removable media. This would all make a massive difference to performance (especially when you've got many tasks trying to do disk IO at the same time); and given that even a simple/retarded OS will take years to write the extra time it'd take to write (e.g.) a native SATA driver would be minor.
It's not as simple as it seems.
Currently the kernel has no way to know the actual address at which task memory is located; it just knows which memory handle the task uses. It is up to the memory API to figure that out for it; the kernel does not really care where the memory is, only that it has memory that can be read or written by copying it to and from local memory. Also, memory is arranged through a bitmap rather than consecutively, so that the arrangement is as efficient as possible and fragmentation is not an issue.
Even if the kernel did know an exact memory address, tasks being moved in and out of memory would probably mess up the disk write. I would still have to pause any tasks waiting on disk I/O.
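For what it's worth, allocation from such a bitmap is short (untested sketch; one bit per block, and the names and sizes are placeholders, not my actual code):
Code:
; Untested sketch: find and claim a free block in the bitmap.
; Returns the block number in AX, or CF set if memory is full.
BITMAP_BYTES equ 32               ; 32 bytes = 256 blocks (placeholder size)

alloc_block:
    xor bx, bx
.scan:
    mov al, [bitmap + bx]
    cmp al, 0FFh                  ; all 8 blocks in this byte taken?
    jne .found
    inc bx
    cmp bx, BITMAP_BYTES
    jb .scan
    stc
    ret
.found:
    not al                        ; lowest set bit = lowest free block
    mov ah, 0
    bsf cx, ax                    ; CX = bit index (386+)
    mov al, 1
    shl al, cl
    or [bitmap + bx], al          ; mark the block as used
    mov ax, bx
    shl ax, 3                     ; block number = byte * 8 + bit
    add ax, cx
    clc
    ret

bitmap: times BITMAP_BYTES db 0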
Brendan wrote:Note that you could increase the frequency of the PIT. For example, instead of running the PIT chip at 18.2 Hz and running the BIOS's IRQ 0 handler every time IRQ 0 occurs; you could set the PIT chip at 291.2 Hz and run the BIOS's IRQ 0 handler every sixteenth time IRQ 0 occurs. This would allow you to do "current_time = current_time +1" in your own IRQ handler and end up with about 3.5 ms precision (with 16 times as much overhead and the same bad drift). Of course this is easier in protected mode - you can set the PIT to anything you like without caring about screwing up the BIOS's timing.
That is actually a great idea. I could use a simple wrapper that triggers my task switcher and occasionally calls the BIOS timing code, rather than using INT 0x1C. All the BIOS and kernel timing components would still work perfectly fine. I thought I would have to rewrite the time delay functions and live with a messed-up clock.
Brendan wrote:I'd suggest that you spend a few weeks finding out how "modern" operating systems actually work, and doing some research into how ancient (almost 30 years old) CPU features (like protected mode, paging, etc) can be used.
These features aren't relevant to my operating system because it uses real mode. Maybe if I ever want to write another operating system.
Brendan wrote:For those specific goals; the best way to accomplish them is to give up and avoid wasting several years "polishing a turd". :roll:
Ah, I thought so. :D

Right now I think I need to start getting some code down. I've spent a lot of time talking about it already.

Re: Multitasking in real mode?

Posted: Thu Mar 28, 2013 7:59 pm
by Brendan
Hi,
Prochamber wrote:
Brendan wrote:A "write-through" disk cache isn't very much work at all.
It's not as simple as it seems.
Currently the kernel has no way to know the actual address at which task memory is located; it just knows which memory handle the task uses.
The kernel doesn't need to know the actual address in which task memory is located - it only needs to know which address the task is reading the data to (or writing the data from), and the kernel can get this information from ES:BX when the task uses "int 0x13".
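For example (untested sketch; cache_lookup_and_copy is a hypothetical routine, and bios_int13_vec is the original vector assumed to be saved at boot):
Code:
; Untested sketch: an INT 13h intercept that inspects the caller's
; registers. For AH=02h (read sectors), ES:BX is the caller's buffer.
int13_cache_hook:
    cmp ah, 02h
    jne .passthru
    call cache_lookup_and_copy    ; hypothetical: CF clear if the request
    jnc .done                     ; was served straight from the cache
.passthru:
    pushf
    call far [cs:bios_int13_vec]  ; miss: let the BIOS do the transfer
    ; (a real cache would also copy the fresh data from ES:BX here)
.done:
    retf 2                        ; propagate the final flags to the caller

bios_int13_vec: dw 0, 0           ; offset:segment saved at boot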
Prochamber wrote:
Brendan wrote:For those specific goals; the best way to accomplish them is to give up and avoid wasting several years "polishing a turd". :roll:
Ah, I thought so. :D

Right now I think I need to start getting some code down. I've spent a lot of time talking about it already.
I've got some experimental crud I wrote a few weeks ago. It's a wrapper around the BIOS that lets protected mode software use BIOS services directly (without messing up the BIOS's IRQ handling, etc). I was mostly just playing with the idea to make it easier to write boot loaders; but you'd be able to do something like this too, and run all of your existing code as 16-bit protected mode code with almost no changes at all. It would allow you to fill the first 3 GiB of RAM with about 40000 of your little 64 KiB applications and run them where they are (without copying anything during task switches). I could post the source code for my experimental crud if you want to see how easy this is to do.

[EDIT:] I posted my BIOS wrapper thing on the "announcements" forum. Take a look.. ;)[/EDIT]


Cheers,

Brendan

Re: Multitasking in real mode?

Posted: Fri Mar 29, 2013 5:09 pm
by Mikemk
iansjack wrote:@m12 I'm not going to give a detailed response to your last post because it doesn't deserve it. I'm afraid that, IMO, it is arrant nonsense that shows a complete misunderstanding of the operating modes of x86 processors.
I never said it was a good idea - I said it was possible, and that if somebody finds a way to do it better than they could in pmode, then it would be a good idea.