Dear Masterkiller,
I have the feeling you don't quite understand how a multiprocessor system works in general. (If I'm wrong, I apologize.) So here are some notes from my point of view...
Note: For this posting I assume we are only talking about physical processors (no SMT tricks like HyperThreading) and symmetric multiprocessing: all processors are identical (no asymmetry); all memory, devices, processors, ... are accessible to every processor; and for now differing access times are negligible (which rules out NUMA issues). Let's keep things simple for now.
Disclaimer: There are many points here that could be discussed forever. This is just meant as a starting point.
Masterkiller wrote:So one could execute interrupt-handle and the other one - execute boot code. [...] And how far cores could be separated, e.g. is it possible only one of the core be in real-mode and the other one - in protected? (Probably not!)
For example, take an Intel Core 2 Duo. There are simply two processors on one chip. That's all.
It's much like having a mainboard with two processor sockets and a single-core processor in each socket. But those mainboards are relatively expensive. The idea of "multicore" is to put all the processors (2, 3, 4, 6, 8, ...) on one single chip. Now you don't need multiple sockets - only one - but you still have multiple processors (assuming the chipset can deal with that).
That's something the industry warmly welcomes. Think about the fat server boxes whose mainboards have 8 sockets. Before the "multicore era" there was only a single-core processor in each of the sockets, thus providing 8 processors in 8 sockets. Now take 8 Xeon/Opteron quad-cores and put them into the sockets, and the system has 8*4=32 processors in 8 sockets. Compare that with an unaffordable (and not commercially available?) 32-socket board with 32 single-core processors. The latter is, to put it provocatively, economic suicide.
[EDIT: For an example with 4 sockets with up to 6 cores each, see IBM System x3850 M2. I tried to configure my own: a basic version with 4 Xeon 7460s (6 cores each) is around $21,000.]
(There are more advantages and some issues to this approach, but that is beyond the scope of this post.)
To give a direct answer: because they are physically independent processors, the code they execute is independent too. The cores can be in different operating modes, and they can each have a totally different view of the environment (e.g. MTRRs set up differently, or whatever...).
The only question is: does it make sense? The answer is the same in every situation: It depends. It depends on what you are trying to achieve.
- Does it make sense to give the individual CPUs completely different views of the system (e.g. the aforementioned MTRRs)? - Well, it depends. (Most likely there is no really useful case; in fact, it would be totally weird.)
- Does it make sense to have the CPUs in different operating modes? - It depends. Why not? There are situations (like starting halted cores) where this is completely natural or even a necessity - see the sketch after this list.
- Does it make sense to execute different processes on different cores? - It depends. If you want to... (Virtually) all MP-capable operating systems do it.
- Does it make sense to execute only one process at any time, but run that process's different threads on different cores? - It depends. If you want to... (Virtually) all MP-capable operating systems do it.
- Does it make sense to let some cores execute interrupt handlers while others execute "normal" processes/threads? - It depends. If you want to... (Virtually) all MP-capable operating systems do it.
- ...
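To make the "different operating modes" point concrete: on x86 the bootstrap processor wakes a halted application processor with the INIT-SIPI-SIPI sequence through its Local APIC, and the woken core starts in real mode while the waker may well be in protected mode already. A minimal sketch in C - the register offsets are from the Intel SDM, but the delay helper and the trampoline page are placeholders of my choosing, so don't take it as copy-paste-ready:

#include <stdint.h>

#define LAPIC_BASE   0xFEE00000u /* architectural default; real code reads the IA32_APIC_BASE MSR */
#define LAPIC_ICR_LO 0x300       /* Interrupt Command Register, bits 0-31  */
#define LAPIC_ICR_HI 0x310       /* Interrupt Command Register, bits 32-63 */

static inline void lapic_write(uint32_t reg, uint32_t val)
{
    *(volatile uint32_t *)(LAPIC_BASE + reg) = val;
}

extern void delay_us(unsigned us); /* hypothetical busy-wait helper */

/* Wake the core whose Local APIC has 'apic_id'. It begins executing in
 * real mode at physical address trampoline_page << 12, while the caller
 * keeps running in protected mode - two cores, two operating modes. */
void start_ap(uint8_t apic_id, uint8_t trampoline_page)
{
    lapic_write(LAPIC_ICR_HI, (uint32_t)apic_id << 24);
    lapic_write(LAPIC_ICR_LO, 0x00004500);                /* INIT, assert */
    delay_us(10000);

    for (int i = 0; i < 2; i++) {                         /* the spec wants two SIPIs */
        lapic_write(LAPIC_ICR_HI, (uint32_t)apic_id << 24);
        lapic_write(LAPIC_ICR_LO, 0x00004600u | trampoline_page); /* STARTUP */
        delay_us(200);
    }
}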
It's up to the designer's choice what to do with the available processors. You may choose to support "this" but not "that".
And yes - theoretically you can execute two different operating systems on two cores in the same system. But consider that each of them thinks it is the only operating system running. Both try to use memory, devices, processors, etc. at will, not knowing that someone else is interfering. Without some tricks they will most likely trash each other's data - not to mention the issues with the state kept in the devices. Long story short: this will crash the machine - unless it is "coordinated" by something (and that's the pointer to virtualization, partitioning and the like).
Masterkiller wrote:And I was wondering how to tell the other core to start executing "other thread/process" (or actually code from different virtual address).
Guess what - it depends! It's up to your design. Here are just some *possible* ways to do it (disclaimer: I don't say they are good or bad):
- You could send an interrupt from one core to the other - a so-called "inter-processor interrupt" (IPI) - using the Local APIC, which every processor has. The interrupt handler for this interrupt would look at some data structures whose locations it knows and would schedule/dispatch another process/thread/whatever-you-call-it. (First sketch after this list.)
- You could use the timer that every Local APIC has. That gives you one timer per processor, so you can let it trigger the scheduler on each individual processor independently. (Second sketch below.)
- You could use the good old PIT and broadcast its interrupt requests to all processors (using the IO APIC), which could trigger the scheduler on all the cores. (Third sketch below.)
- You could do cooperative scheduling: every time a task yields control to the operating system, you check which task you want to run next. In this scenario you won't need IPIs or Local APIC timers at all. (Last sketch below.)
- ...
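First sketch - sending an IPI. The ICR offsets are from the Intel SDM; everything else (the helper names, the idea that the handler runs the scheduler) is just my assumption of how you might wire it up:

#include <stdint.h>

#define LAPIC_BASE   0xFEE00000u /* default base; read the IA32_APIC_BASE MSR in real code */
#define LAPIC_ICR_LO 0x300
#define LAPIC_ICR_HI 0x310

static inline uint32_t lapic_read(uint32_t reg)
{
    return *(volatile uint32_t *)(LAPIC_BASE + reg);
}

static inline void lapic_write(uint32_t reg, uint32_t val)
{
    *(volatile uint32_t *)(LAPIC_BASE + reg) = val;
}

/* Send a fixed-delivery IPI with 'vector' to the core whose Local APIC
 * has 'apic_id'. The handler installed for that vector on the target
 * core can then inspect its data structures and dispatch another task. */
void send_ipi(uint8_t apic_id, uint8_t vector)
{
    /* Wait until any previous IPI was accepted (delivery status, bit 12). */
    while (lapic_read(LAPIC_ICR_LO) & (1u << 12))
        ;
    lapic_write(LAPIC_ICR_HI, (uint32_t)apic_id << 24);   /* destination   */
    lapic_write(LAPIC_ICR_LO, (1u << 14) | vector);       /* fixed, assert */
}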
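Second sketch - the per-core Local APIC timer. Offsets again per the SDM; the calibration of 'count' against real time is omitted (you'd typically measure the timer's frequency against the PIT once at boot). It reuses lapic_write() from the previous sketch:

#define LAPIC_LVT_TIMER 0x320 /* LVT timer register   */
#define LAPIC_TIMER_ICR 0x380 /* initial count        */
#define LAPIC_TIMER_DCR 0x3E0 /* divide configuration */
#define TIMER_PERIODIC  (1u << 17)

/* Program the calling core's Local APIC timer to raise 'vector'
 * periodically. Every core runs this for itself, so every core gets
 * its own independent scheduler tick. */
void lapic_timer_init(uint8_t vector, uint32_t count)
{
    lapic_write(LAPIC_TIMER_DCR, 0x3);                     /* divide by 16   */
    lapic_write(LAPIC_LVT_TIMER, TIMER_PERIODIC | vector); /* periodic mode  */
    lapic_write(LAPIC_TIMER_ICR, count);                   /* start counting */
}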
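Third sketch - routing the PIT through the IO APIC to all cores. The IOREGSEL/IOWIN access scheme and the redirection table layout are from the IO APIC datasheet; the PIT's global system interrupt number is an assumption (often 2 because of the usual ISA IRQ0 override - check the ACPI MADT), and destination 0xFF means "all APICs" only in xAPIC physical destination mode:

#include <stdint.h>

#define IOAPIC_BASE 0xFEC00000u /* default; the real address comes from ACPI/MP tables */

static void ioapic_write(uint8_t reg, uint32_t val)
{
    *(volatile uint32_t *)(IOAPIC_BASE + 0x00) = reg; /* IOREGSEL */
    *(volatile uint32_t *)(IOAPIC_BASE + 0x10) = val; /* IOWIN    */
}

/* Route the PIT's interrupt line to every core. Each redirection table
 * entry is 64 bits, split over two 32-bit registers at index 0x10 + 2*gsi. */
void route_pit_broadcast(uint8_t gsi, uint8_t vector)
{
    ioapic_write(0x10 + 2 * gsi + 1, 0xFFu << 24); /* destination = broadcast  */
    ioapic_write(0x10 + 2 * gsi,     vector);      /* fixed delivery, unmasked */
}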
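Last sketch - cooperative scheduling. Here everything is hypothetical: the per-CPU run queues, context_switch() and this_cpu() are whatever your kernel provides. The point is only that no interrupt machinery is involved:

/* A minimal cooperative yield over assumed per-CPU run queues. */
struct task {
    struct task *next;
    /* saved registers, stack pointer, ... */
};

extern struct task *current[];              /* per-CPU current task             */
extern struct task *runqueue_next(int cpu); /* hypothetical: pick the next task */
extern void context_switch(struct task *from, struct task *to);
extern int  this_cpu(void);                 /* hypothetical: own CPU number     */

/* Called voluntarily by a running task. No IPI, no timer - the task
 * hands control back and the kernel decides what runs next. */
void yield(void)
{
    int cpu = this_cpu();
    struct task *prev = current[cpu];
    struct task *next = runqueue_next(cpu);

    if (next != prev) {
        current[cpu] = next;
        context_switch(prev, next);
    }
}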
It's all about your design. With MP systems you have many more degrees of freedom than with uniprocessor systems. You have a lot more choices to make and a seemingly endless number of adjustment screws to turn. That's why it's so much harder to get it right - or at the very least not to make a mess of it.
Regards,
Thilo