Hi,
In a multi-core system, I've learnt, application (user) processes exploit the availability of multiple CPUs by creating threads and, via affinity APIs, asking the OS to run them on particular cores. On receiving such a request, the scheduler schedules that thread on the requested CPU (if it is available).
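For concreteness, this is the kind of API I mean; on Linux, I believe a thread can be pinned to a core with the GNU-specific pthread_setaffinity_np() (a minimal sketch, error handling omitted):

Code:
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    /* sched_getcpu() reports which core the thread actually landed on. */
    printf("worker running on CPU %d\n", sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t tid;
    cpu_set_t set;

    pthread_create(&tid, NULL, worker, NULL);

    CPU_ZERO(&set);
    CPU_SET(1, &set);                            /* request core 1 */
    pthread_setaffinity_np(tid, sizeof set, &set);

    pthread_join(tid, NULL);
    return 0;
}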
But how does the OS itself run on a multi-core system? Does every core run its own copy of the OS independently? Or does the OS scale itself across the cores? The kernel is multi-threaded, but are these kernel threads scheduled to run on multiple CPUs dynamically?
To put the question another way: a user application achieves true parallelism by splitting itself into threads and running each thread on a different core. Does the OS run that way too?
How does the OS run on a multi-core system?
Sanjeev Mk
Re: How does the OS run on a multi-core system?
Usually in an SMP system, all cores share the kernel's code and data (some data is CPU-specific, though). The kernel can be viewed as a shared library that extends the user program. Since several cores might try to alter the same kernel objects at the same time, those objects must be protected. This is often done with semaphores or other types of locks.
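For example, a shared free-page list might be guarded by a spinlock; here is a minimal sketch using GCC's atomic builtins (the names are invented, not taken from any real kernel):

Code:
struct page { struct page *next; };

static struct page *free_list;          /* shared kernel object */
static volatile int free_list_lock;     /* 0 = unlocked, 1 = locked */

static void spin_lock(volatile int *l)
{
    /* Atomically set *l to 1; loop as long as it was already 1. */
    while (__sync_lock_test_and_set(l, 1))
        ;                               /* busy-wait: another core holds it */
}

static void spin_unlock(volatile int *l)
{
    __sync_lock_release(l);
}

void kfree_page(struct page *p)
{
    spin_lock(&free_list_lock);         /* serialize the cores */
    p->next = free_list;
    free_list = p;
    spin_unlock(&free_list_lock);
}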
Re: How does the OS run on a multi-core system?
In an AMP system, each core runs its own instance of the operating system, and each instance behaves like a single-core computer.
In an SMP system, things are quite different, because each core has the same view of the system as every other core.
Although modern kernels are multi-threaded (as Linux is), there are always parts of the code that cannot themselves be threaded, such as the code for entering and leaving an interrupt. When a core enters an interrupt service routine (ISR), it goes down the same path as every other core, i.e. it runs the same code at the same address as any other core entering an ISR. This can be viewed as “code sharing”, because every core runs this code. The crucial point is that this is only possible because the ISR entry/exit code is, in most kernels at least, re-entrant: it touches only data that is private to the executing core, or global data common to all cores which must not be read or written without first acquiring a semaphore (or some other lock).
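A sketch of what such re-entrant, shared entry code might look like (invented names; the core-id function is a stand-in, since a real kernel would read the id from hardware such as the local APIC on x86):

Code:
#define MAX_CPUS 16

struct cpu_state {
    unsigned long irq_nest_depth;   /* private to one core */
};

static struct cpu_state cpu_states[MAX_CPUS];

/* Stand-in for illustration: a real kernel would query the hardware. */
static int this_cpu_id(void) { return 0; }

/* Every core runs this same instruction path on ISR entry, but each
 * one only writes its own slot of cpu_states[]: re-entrant by design. */
void irq_enter(void)
{
    cpu_states[this_cpu_id()].irq_nest_depth++;
}

void irq_exit(void)
{
    cpu_states[this_cpu_id()].irq_nest_depth--;
}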
Besides this kind of “shared code”, an ISR can wake up a kernel thread that will run on the current core, or that will be added to another core's runqueue and run when that core next calls its scheduler. In this case, you are right: the OS does scale!
Finally, since I have mentioned the scheduler: it is a typical example of shared code when each core has its own runqueue (as in the O(1) scheduler that shipped with Linux 2.6; Linux 2.4 still used a single global runqueue). Each core executes the same instruction path when it calls the scheduler, but those instructions read, write, and modify data (the runqueue) that is private to the calling core!
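Putting the last two paragraphs together, a sketch of per-core runqueues (invented names, not actual Linux code): wake_thread() may push a thread onto any core's queue, while schedule_next() always pops from the queue of the core that calls it. Same code on every core, per-core data:

Code:
#include <stddef.h>

#define MAX_CPUS 16

struct thread { struct thread *next; };

struct runqueue {
    volatile int lock;                /* protects this core's queue */
    struct thread *head;
};

static struct runqueue runqueues[MAX_CPUS];

/* Stand-in core-id function, as in the previous sketch. */
static int this_cpu_id(void) { return 0; }

static void rq_lock(volatile int *l)
{
    while (__sync_lock_test_and_set(l, 1))
        ;                             /* spin: another core owns the queue */
}

static void rq_unlock(volatile int *l) { __sync_lock_release(l); }

/* Called e.g. from an ISR: make thread 't' runnable on core 'cpu'.
 * The queue may belong to ANOTHER core, hence the lock. */
void wake_thread(struct thread *t, int cpu)
{
    struct runqueue *rq = &runqueues[cpu];
    rq_lock(&rq->lock);
    t->next = rq->head;
    rq->head = t;
    rq_unlock(&rq->lock);
}

/* Called by each core independently: same code everywhere, but it
 * picks only from the caller's OWN runqueue. */
struct thread *schedule_next(void)
{
    struct runqueue *rq = &runqueues[this_cpu_id()];
    rq_lock(&rq->lock);
    struct thread *t = rq->head;
    if (t)
        rq->head = t->next;
    rq_unlock(&rq->lock);
    return t;                         /* NULL: nothing runnable here */
}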