Conceptual problem

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
Synon
Member
Posts: 169
Joined: Sun Sep 06, 2009 3:54 am
Location: Brighton, United Kingdom

Conceptual problem

Post by Synon »

Hello :)
Before I start asking my question, I just want to say a few things.

I have no intention of starting this project for at least two years, the reason being that I'm 15 years of age and have less than a year's experience in C and ASM. As such, all I'm planning to do is learn as much theory as I can over the next couple of years, in the hope that when I do try to actually write some code, I'll know what I'm doing. My apologies if this has already been asked and answered, or if I'm being ignorant. I did at one point copy out the "bare bones" example and play around with it, making some changes (a printing function among other things) and adding more stuff, and I quite enjoyed that. But I want to wait until I have a lot more experience to start actually writing an OS. It's a shame, though: at the time when I have the most free time to program, I can't actually do anything in terms of OSdev.

Anyway, my problem is that I'm unable to comprehend (due to age? lack of experience?) how this all fits together. How can the kernel and the programs it's running work at the same time? In my (limited) understanding, the kernel loads the program's file into memory, parses the header and then jumps to the first instruction. But how, then, can the kernel do anything else at the same time? I also understand that the kernel allocates time to each process and then stores information, such as register contents (probably in a data structure, if the language has them). How does the program call kernel functions? How does the program print to the screen using kernel services? How does this work?

Thank you for your time...

Edit: sorry about the melodramatic name, it was kind of embarrassing so I changed it.
Last edited by Synon on Sun Feb 07, 2010 9:23 am, edited 1 time in total.
piranha
Member
Posts: 1391
Joined: Thu Dec 21, 2006 7:42 pm
Location: Unknown. Momentum is pretty certain, however.

Re: Huge conceptual problem :(

Post by piranha »

Well, about doing things at the same time: it doesn't. It just switches between different tasks quickly.

No, it's not age. It's just research. You need to read every bit of material you can get your hands on for a subject. This is not an easy field, and it requires many many hours of research per subject. So, order the Intel manuals, read the whole wiki, google stuff, and repeat until you get it.

But yes, you really do need to have more C experience than one year. I tried to write an OS with little experience in C and it did not turn out well. I didn't get it at all. However, if you research the things you don't get, and look through tutorials, and repeat, you will get it.

-JL
SeaOS: Adding VT-x, networking, and ARM support
dbittman on IRC, @danielbittman on twitter
https://dbittman.github.io
Gigasoft
Member
Posts: 856
Joined: Sat Nov 21, 2009 5:11 pm

Re: Huge conceptual problem :(

Post by Gigasoft »

When interrupts happen, the OS gets control again and can switch to other threads. Usually, a timer interrupt is used to determine when it's time to switch threads. Programs call the kernel using the INT or SYSENTER instructions. System services may also pause the current thread and switch to another thread.

The INT instruction issues a specified interrupt number, provided that its entry in the IDT has a DPL greater than or equal to the current CPL. The IDT (Interrupt Descriptor Table) contains interrupt descriptors specifying the addresses of interrupt handlers. Some of the entries correspond to exceptions such as divide overflow and page fault. Others are for interrupt requests coming from a hardware component. Both of these usually have a DPL of 0 so that user mode programs, which have a CPL of 3, can't call them. Software interrupt descriptors have a DPL of 3 and can be called from programs. When an interrupt happens, the processor loads ESP and SS from the current TSS and pushes the previous SS and ESP registers if changing CPL. It then pushes the previous EFLAGS, CS and EIP and jumps to the interrupt handler.
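
To make that less abstract, here is a minimal C sketch of a 32-bit interrupt gate. The field layout and the meaning of the P/DPL/type bits come from the Intel manuals; the names (idt_entry, idt_set_gate) are just made up for illustration.

Code: Select all

#include <stdint.h>

/* One 8-byte entry in the 32-bit IDT (layout per the Intel manuals). */
struct idt_entry {
    uint16_t offset_low;   /* handler address, bits 0..15  */
    uint16_t selector;     /* kernel code segment selector */
    uint8_t  zero;         /* always 0                     */
    uint8_t  type_attr;    /* P, DPL and gate-type bits    */
    uint16_t offset_high;  /* handler address, bits 16..31 */
} __attribute__((packed));

static struct idt_entry idt[256];

/* dpl = 0 for exceptions/IRQs; dpl = 3 for gates user mode may INT to. */
static void idt_set_gate(int n, uint32_t handler, uint16_t sel, uint8_t dpl)
{
    idt[n].offset_low  = handler & 0xFFFF;
    idt[n].selector    = sel;
    idt[n].zero        = 0;
    idt[n].type_attr   = 0x8E | (dpl << 5); /* present, 32-bit interrupt gate */
    idt[n].offset_high = (handler >> 16) & 0xFFFF;
}

A system call vector (0x80 is the traditional choice) would be installed with dpl = 3 so user programs can INT to it; the exception and IRQ vectors stay at dpl = 0.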

The SYSENTER instruction is another way in which programs can call the OS, and it uses MSRs to decide where to jump.
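
Setting SYSENTER up amounts to three WRMSR writes. The MSR numbers below are architectural (IA32_SYSENTER_CS/ESP/EIP); the wrapper function and its arguments are invented for this sketch.

Code: Select all

#include <stdint.h>

#define IA32_SYSENTER_CS  0x174  /* kernel code selector (SS is derived from it) */
#define IA32_SYSENTER_ESP 0x175  /* kernel stack pointer loaded on SYSENTER      */
#define IA32_SYSENTER_EIP 0x176  /* kernel entry point loaded on SYSENTER        */

static inline void wrmsr(uint32_t msr, uint64_t value)
{
    __asm__ volatile("wrmsr" :: "c"(msr),
                     "a"((uint32_t)value), "d"((uint32_t)(value >> 32)));
}

void sysenter_init(uint32_t kernel_cs, uint32_t kernel_stack, uint32_t entry_point)
{
    wrmsr(IA32_SYSENTER_CS,  kernel_cs);
    wrmsr(IA32_SYSENTER_ESP, kernel_stack);
    wrmsr(IA32_SYSENTER_EIP, entry_point);
}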
Love4Boobies
Member
Posts: 2111
Joined: Fri Mar 07, 2008 5:36 pm
Location: Bucharest, Romania

Re: Huge conceptual problem :(

Post by Love4Boobies »

Sorry about Gigasoft; some of our forum members think they're smarter if they show off to newcomers with esoteric mumbo-jumbo (which, in this case by the way, only applies to the x86(-64)). If I were you, I'd check out one of the following books:
  • Operating System Concepts, 8th ed.
  • Modern Operating Systems, 3rd ed.
They should give you a clear idea about what's going on, concept-wise at least :) Apart from time-sharing, where you multiplex a single CPU, I'd like to add that MP (multiprocessor) systems are a huge trend today (not just as an alternative) and there's a lot of ongoing research on the subject.
"Computers in the future may weigh no more than 1.5 tons.", Popular Mechanics (1949)
[ Project UDI ]
Synon
Member
Posts: 169
Joined: Sun Sep 06, 2009 3:54 am
Location: Brighton, United Kingdom

Re: Huge conceptual problem :(

Post by Synon »

@piranha,
Thanks :)
It's hard to get my head around a lot of stuff. But I'll get there, I'm sure, if I put enough effort into it.

@gigasoft
:l

@Love4Boobies,
this one made me laugh. Cool name, too. I understood most of what gigasoft said, it was mainly the abbreviations I didn't get. I know vaguely what an IDT is, anyway...

Thanks for the book recommendations. Incidentally, I was going to buy another one of ast's books (I have messed around with Minix 3 in QEmu) but I never got around to it. I have some money now, though; I think I will buy those. Thank you.

I've been looking around here since around September, reading threads and the wiki (I think it's excellent that someone decided to put this all together, too) and the screenshots thread (some of those are amazing).
piranha
Member
Posts: 1391
Joined: Thu Dec 21, 2006 7:42 pm
Location: Unknown. Momentum is pretty certain, however.

Re: Huge conceptual problem :(

Post by piranha »

Synon wrote:It's hard to get my head around a lot of stuff. But I'll get there, I'm sure, if I put enough effort into it.
Yep, that's really what it's all about :)

Good luck

-JL
SeaOS: Adding VT-x, networking, and ARM support
dbittman on IRC, @danielbittman on twitter
https://dbittman.github.io
Synon
Member
Posts: 169
Joined: Sun Sep 06, 2009 3:54 am
Location: Brighton, United Kingdom

Re: Huge conceptual problem :(

Post by Synon »

Image
iammisc
Member
Posts: 269
Joined: Thu Nov 09, 2006 6:23 pm

Re: Huge conceptual problem :(

Post by iammisc »

Yes, you do need more experience, but don't let age be an issue; I started osdev about two years younger than you.

IMO, you have a very idealistic idea of osdev.

Unless you have a dual processor system, nothing actually runs at the same time. Processors are dumb; they don't know what's going on. Based on whatever's in their registers, they will blindly continue executing code.

The kernel's job is to make the CPU do something useful.

Okay so how do we get multitasking? Simple, we don't. On a uniprocessor system, only one task actually runs. However, if things are set up right, every so often, the kernel will get an interrupt raised by the PIT timer. At this point the kernel can do whatever it wants. If it wants to change tasks it can. All it has to do is restore the register state.
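
As a rough sketch of that (every name here is invented, and a real kernel does the actual save/restore in a small assembly stub around this):

Code: Select all

#include <stdint.h>

/* Register state of an interrupted task, saved by the entry stub. */
struct regs {
    uint32_t edi, esi, ebp, esp, ebx, edx, ecx, eax;
    uint32_t eip, cs, eflags;
};

struct task {
    struct regs  state;  /* where this task left off */
    struct task *next;   /* circular run queue       */
};

static struct task *current;

/* Called from the PIT interrupt handler. The stub saved the old task's
   registers into *saved; whatever we return gets restored before IRET. */
struct regs *schedule(struct regs *saved)
{
    current->state = *saved;  /* remember where the old task was     */
    current = current->next;  /* round-robin: just take the next one */
    return &current->state;   /* resume here; the task never notices */
}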

Programs call kernel functions usually by using interrupts. By setting register state before the interrupt, the program can pass arguments.
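
From the program's side that might look like the snippet below, assuming a purely hypothetical ABI: call number in EAX, one pointer argument in EBX, result back in EAX, on vector 0x80.

Code: Select all

#include <stdint.h>

#define SYS_PRINT 4  /* hypothetical call number */

/* Ask the kernel to print a string via a software interrupt. */
static inline int32_t sys_print(const char *s)
{
    int32_t ret;
    __asm__ volatile("int $0x80"
                     : "=a"(ret)              /* EAX: return value     */
                     : "a"(SYS_PRINT), "b"(s) /* EAX: number, EBX: arg */
                     : "memory");
    return ret;
}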
Synon
Member
Posts: 169
Joined: Sun Sep 06, 2009 3:54 am
Location: Brighton, United Kingdom

Re: Huge conceptual problem :(

Post by Synon »

iammisc wrote:Yes, you do need more experience but don't let age be an issue I started at osdev about two years before you.
At 13? Wow. My mind wasn't nearly developed enough at that age to even start programming. I tried several times to get going, but never succeeded until February '09 (IIRC).
iammisc wrote:IMO, you have a very idealistic idea of osdev.

Unless you have a dual processor system, nothing actually runs at the same time.
I understand that programs don't run simultaneously on single-core CPUs; what I didn't understand was how the kernel can still control the system while another program is running, given that only one program can run per [physical] core (can a program run on a logical core (e.g. on the Core i7s), for that matter?).
iammisc wrote:Processors are dumb; they don't know what's going on. Based on whatever's in their registers, they will blindly continue executing code.
I suppose, if you remove all the bells and whistles, you've just got a glorified clock that can oscillate billions of times per second... But with all due respect, I disagree with your second statement (although this is likely due to ignorance). Can't [reasonably modern] CPUs disallow a program from doing something (for example, violating another process's memory space causes a segfault because (I guess) the userland program isn't in ring0)? Or does the kernel have to tell the CPU to do that? Please don't mistake my ignorance for arrogance.
iammisc wrote:Okay so how do we get multitasking? Simple, we don't. On a uniprocessor system, only one task actually runs. However, if things are set up right, every so often, the kernel will get an interrupt raised by the PIT timer. At this point the kernel can do whatever it wants. If it wants to change tasks it can. All it has to do is restore the register state.
I just read part of the article on the PIT, actually. So when an interrupt fires, the CPU switches back to the kernel somehow? Does the kernel tell it how to do that?
iammisc wrote:Programs call kernel functions usually by using interrupts. By setting register state before the interrupt, the program can pass arguments.
And sysenter, sysexit and friends (syscall and sysret on AMD, right?) are a newer way of doing that? Are they supported on just their respective processors? Normally, code compiled on an Intel processor works on an AMD processor of the same architecture (it worked for a CPUID program I wrote, anyway). What I'm asking is: would you have to make e.g. two separate files (like amd/boot.S and intel/boot.S), or could you use sysenter and sysexit on AMD CPUs and syscall and sysret on Intel CPUs? It doesn't sound likely, but if not, doesn't that make those two instructions pointless and obsolete? At least int works on all x86s...
Owen
Member
Posts: 1700
Joined: Fri Jun 13, 2008 3:21 pm
Location: Cambridge, United Kingdom

Re: Huge conceptual problem :(

Post by Owen »

Synon wrote:
iammisc wrote:IMO, you have a very idealistic idea of osdev.
Unless you have a dual processor system, nothing actually runs at the same time.
I understand that programs don't run simultaneously on single-core CPUs; what I didn't understand was how the kernel can still control the system while another program is running, given that only one program can run per [physical] core (can a program run on a logical core (e.g. on the Core i7s), for that matter?).
As far as the OS is concerned, a logical core is like a physical core (except that work on one logical core slows down work on another).
Synon wrote:
iammisc wrote:Processors are dumb; they don't know what's going on. Based on whatever's in their registers, they will blindly continue executing code.
I suppose, if you remove all the bells and whistles, you've just got a glorified clock that can oscillate billions of times per second... But with all due respect, I disagree with your second statement (although this is likely due to ignorance). Can't [reasonably modern] CPUs disallow a program from doing something (for example, violating another process's memory space causes a segfault because (I guess) the userland program isn't in ring0)? Or does the kernel have to tell the CPU to do that? Please don't mistake my ignorance for arrogance.
The OS tells the processor (usually via paging) what memory the user program is allowed to access. If the program accesses memory it's not allowed to (or that's not mapped in, or...), then the processor triggers a Page Fault exception; the OS handles things from here. Not all page faults are errors: some are used for things like virtual memory. An exception can be considered, in many respects, to be a processor-local interrupt.
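
For a flavour of what "the OS handles things from here" means, a skeleton page fault handler might begin like this. CR2 and the error code bits are architectural; the function name and how it gets called are up to your kernel.

Code: Select all

#include <stdint.h>

/* On a page fault the CPU pushes an error code (passed in here by an
   assembly stub) and leaves the faulting linear address in CR2. */
void page_fault_handler(uint32_t error_code)
{
    uint32_t fault_addr;
    __asm__ volatile("mov %%cr2, %0" : "=r"(fault_addr));

    int present = error_code & 1;        /* 0: page was not mapped       */
    int user    = (error_code >> 2) & 1; /* 1: fault came from user mode */
    /* Bit 1 of the error code says whether it was a write access. */

    if (!present) {
        /* Not mapped: could be demand paging or swap, so map a page
           at fault_addr and return to retry the instruction. */
    } else if (user) {
        /* Protection violation from ring 3: kill the process. */
    } else {
        /* A fault in the kernel itself: panic. */
    }
}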
Synon wrote:
iammisc wrote:Okay so how do we get multitasking? Simple, we don't. On a uniprocessor system, only one task actually runs. However, if things are set up right, every so often, the kernel will get an interrupt raised by the PIT timer. At this point the kernel can do whatever it wants. If it wants to change tasks it can. All it has to do is restore the register state.
I just read part of the article on the PIT, actually. So when an interrupt fires, the CPU switches back to the kernel somehow? Does the kernel tell it how to do that?
The processor's hardware defines what happens on an interrupt, but the OS tells it some of the details, for example, what piece of code to go to.
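
For example, "what piece of code to go to" is configured by building an IDT and telling the CPU where it lives with LIDT. A sketch (the structure layout is architectural; the names are invented):

Code: Select all

#include <stdint.h>

/* Operand for LIDT: size and linear address of the IDT. */
struct idt_ptr {
    uint16_t limit;  /* size of the table in bytes, minus one */
    uint32_t base;   /* linear address of the table           */
} __attribute__((packed));

/* After this, interrupt N sends the CPU to whatever handler
   the OS put in entry N of the table. */
void idt_load(void *table, uint16_t size_bytes)
{
    struct idt_ptr ptr = {
        .limit = (uint16_t)(size_bytes - 1),
        .base  = (uint32_t)table,
    };
    __asm__ volatile("lidt %0" :: "m"(ptr));
}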
Synon wrote:
iammisc wrote:Programs call kernel functions usually by using interrupts. By setting register state before the interrupt, the program can pass arguments.
And sysenter, sysexit and friends (syscall and sysret on AMD, right?) are a newer way of doing that? Are they supported on just their respective processors? Normally, code compiled on an Intel processor works on an AMD processor of the same architecture (it worked for a CPUID program I wrote, anyway). What I'm asking is: would you have to make e.g. two separate files (like amd/boot.S and intel/boot.S), or could you use sysenter and sysexit on AMD CPUs and syscall and sysret on Intel CPUs? It doesn't sound likely, but if not, doesn't that make those two instructions pointless and obsolete? At least int works on all x86s...
This is actually somewhat complex:
  • Software interrupts (Via the INT instruction) work on any x86 processor, but they're slower than the other methods
  • Syscall/Sysret work on any relatively recent (Around 3DNow vintage) AMD processor, and on any processor in 64-bit mode (That is, a 64-bit OS can use syscall/sysret and ignore the other mechanisms)
  • Sysenter/Sysexit work on all relatively recent Intel processors in any mode, and on all AMD processors in 32-bit mode (They are not permitted on AMD processors in 64-bit mode).
The result is this:
  • All OSes support INT as a (slow) fallback.
  • Pretty much all 64-bit OSes use syscall/sysret (and ignore sysenter/exit)
  • 32-bit OSes support all three. Which they prefer varies; the best option, perhaps, is to look at the processor vendor string and pick syscall/sysret on AMD, sysenter/sysexit on Intel, and INT if neither is supported (On the assumption that processor vendors prefer their own methods).
Non-x86 processors tend to have a single syscall instruction for getting into the kernel; they don't have to deal with 30 years of backwards compatibility and bloodymindedness.
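
A runtime check along those lines can also use CPUID feature bits rather than the vendor string: CPUID.01h:EDX bit 11 (SEP) advertises sysenter/sysexit, and CPUID.80000001h:EDX bit 11 advertises syscall/sysret. The enum and function below are invented for the sketch.

Code: Select all

#include <cpuid.h>  /* GCC/Clang helper for the CPUID instruction */

enum syscall_method { USE_INT, USE_SYSCALL, USE_SYSENTER };

enum syscall_method pick_syscall_method(void)
{
    unsigned int a, b, c, d;

    /* CPUID.80000001h:EDX bit 11 = SYSCALL/SYSRET available. */
    if (__get_cpuid(0x80000001, &a, &b, &c, &d) && (d & (1u << 11)))
        return USE_SYSCALL;

    /* CPUID.01h:EDX bit 11 = SEP (SYSENTER/SYSEXIT available). */
    if (__get_cpuid(1, &a, &b, &c, &d) && (d & (1u << 11)))
        return USE_SYSENTER;

    return USE_INT;  /* always works, just slower */
}

Conveniently, Intel CPUs only report the syscall bit when queried from 64-bit mode, so in a 32-bit OS this falls through to sysenter on Intel and syscall on AMD, which matches the vendor-preference rule above.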
Synon
Member
Posts: 169
Joined: Sun Sep 06, 2009 3:54 am
Location: Brighton, United Kingdom

Re: Huge conceptual problem :(

Post by Synon »

Owen wrote:This is actually somewhat complex:
  • Software interrupts (Via the INT instruction) work on any x86 processor, but they're slower than the other methods
  • Syscall/Sysret work on any relatively recent (Around 3DNow vintage) AMD processor, and on any processor in 64-bit mode (That is, a 64-bit OS can use syscall/sysret and ignore the other mechanisms)
  • Sysenter/Sysexit work on all relatively recent Intel processors in any mode, and on all AMD processors in 32-bit mode (They are not permitted on AMD processors in 64-bit mode).
The result is this:
  • All OSes support INT as a (slow) fallback.
  • Pretty much all 64-bit OSes use syscall/sysret (and ignore sysenter/exit)
  • 32-bit OSes support all three. Which they prefer varies; the best option, perhaps, is to look at the processor vendor string and pick syscall/sysret on AMD, sysenter/sysexit on Intel, and INT if neither is supported (On the assumption that processor vendors prefer their own methods).
Non-x86 processors tend to have a single syscall instruction for getting into the kernel; they don't have to deal with 30 years of backwards compatibility and bloodymindedness.
Thanks :)

And I think Intel and AMD should break 16-bit support. 32-bit is going to be legacy soon enough, why go further back?
Just my (as yet uneducated) opinion...
~
Member
Posts: 1226
Joined: Tue Mar 06, 2007 11:17 am
Libera.chat IRC: ArcheFire

Re: Huge conceptual problem :(

Post by ~ »

Synon wrote:And I think Intel and AMD should break 16-bit support. 32-bit is going to be legacy soon enough, why go further back?
Just my (as yet uneducated) opinion...
It would effectively leave us with a new and different platform. All of the 16 or 32-bit BIOS ROM code of any peripherals (video cards?) would stop working altogether. Boot sector code would also stop working.

8, 16 and 32-bit general-purpose registers would always be used anyway or it wouldn't resemble an x86 in any way.
YouTube:
http://youtube.com/@AltComp126

My x86 emulator/kernel project and software tools/documentation:
http://master.dl.sourceforge.net/projec ... 7z?viasf=1
Synon
Member
Posts: 169
Joined: Sun Sep 06, 2009 3:54 am
Location: Brighton, United Kingdom

Re: Huge conceptual problem :(

Post by Synon »

~ wrote:
Synon wrote:And I think Intel and AMD should break 16-bit support. 32-bit is going to be legacy soon enough, why go further back?
Just my (as yet uneducated) opinion...
It would effectively leave us with a new and different platform. All of the 16 or 32-bit BIOS ROM code of any peripherals (video cards?) would stop working altogether. Boot sector code would also stop working.

8, 16 and 32-bit general-purpose registers would always be used anyway or it wouldn't resemble an x86 in any way.
Ah, I didn't think about bootloaders and BIOSes and things... So, do you think that for as long as Intel is "in power" we'll always have 16-bit real mode?
~
Member
Member
Posts: 1226
Joined: Tue Mar 06, 2007 11:17 am
Libera.chat IRC: ArcheFire

Re: Huge conceptual problem :(

Post by ~ »

I think Intel already has the newer non-x86 IA-64 Itanium architecture, and it wouldn't drop the x86 processor business unless everyone (software and hardware vendors and users) decided to abandon the traditional x86 PC architecture.

Anyway, as long as most users and developers worldwide choose x86 machines, there will always be somebody ready to do business by making these processors.
YouTube:
http://youtube.com/@AltComp126

My x86 emulator/kernel project and software tools/documentation:
http://master.dl.sourceforge.net/projec ... 7z?viasf=1
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Huge conceptual problem :(

Post by Brendan »

Hi,
Synon wrote:Ah, I didn't think about bootloaders and BIOSes and things... So, do you think that for as long as Intel is "in power" we'll always have 16-bit real mode?
Backward compatibility has probably been the most influential feature in 80x86 CPUs since it began (the feature that's influenced market share the most). End users (especially businesses/companies) invest a lot of $$$ in purchased software, and it's not just the cost of the software itself - it includes integrating that software into the way people do their jobs and integrating it with all the other equipment being used (e.g. finding employees that are familiar with the software and/or training staff to use it, etc).

For an example, imagine a business that uses "CAD/CAM software A". They want to upgrade the CPU but the newer faster CPU isn't compatible with the old CPU. The operating system and the CAD/CAM software they're using won't run, so they need to replace both of them (causing retraining costs and lost productivity while the staff learn to use the new software). Then they find out the new software doesn't communicate with the accountancy department's computers in the same way (more hardware and/or software and/or retraining costs). Then they find out that the manufacturing department's machinery only understands the file format used by "CAD/CAM software A", and the company needs to replace and/or upgrade all of the machinery. The end result is that replacing a few CPUs costs several million dollars due to compatibility issues throughout the entire company.

This means that a new CPU (without backward compatibility) won't sell very well, which is another massive problem. For example, imagine if Intel spends $100,000,000 designing a new CPU, and it costs them $10 to manufacture a CPU based on this design. If Intel only sell 100 of these CPUs then they'd need to sell them for $1,000,010 each to recover the costs (and the high cost will also affect market share). If they sell 10,000,000 of them (because they're backward compatible) then they'd be able to sell them for $20 each (and the low cost will help sell more CPUs).

Backward compatibility is an advantage for CPU manufacturers (market share = profit), and for end users (who don't see any of the hassles). It's also an advantage to most software developers (e.g. people using high level languages) because they don't see any of the hassles either. For people that write system software (OSs, emulators, compilers) backward compatibility is a mixed blessing (hassles versus market share), where backward compatibility isn't a clear advantage or a clear disadvantage.

Of course there's a few people writing system software who don't care about market share, where backward compatibility has no advantage. These people only see the disadvantages/hassles, but (except for forums like this one) these people are very rare... ;)

However, all of this doesn't mean that change is impossible - it just means that change takes a very long time. It might take another 10 years before almost everyone has adopted EFI (or something else that doesn't rely on real mode), then it might take another 5 years for "device ROMs" (e.g. video card ROMs, network card boot ROMs, disk controller ROMs, etc) to become compatible. By this time OSs won't need many changes (most already support EFI now). Once the firmware, devices and OSs stop relying on real mode (and virtual 8086 mode), Intel and AMD would be able to remove real mode from their CPU designs without causing a significant loss of profit (for themselves and for their end-users).


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.