Multiple cores, multiple rings, microkernels

iamtim
Posts: 3
Joined: Tue May 13, 2008 7:49 am

Multiple cores, multiple rings, microkernels

Post by iamtim »

From what I understand, the main complaint about microkernels is that the increased number of context switches causes a performance hit.

My CPU has 8 cores, my next one will probably have 16.

Why not dedicate one core to running in ring 0 while the rest stay in ring 3?

Or maybe instead of dedicating a core, the scheduler could just be programmed to try to keep processes in the same ring on the same core.

Is there any reason why this wouldn't work?
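
For the sake of discussion, here is a rough C11 sketch of what the dedicated ring-0 core idea could look like: user cores never trap into the kernel, they just publish requests into a shared ring that the kernel core polls and completes. Every name in it (kreq, kring, submit_request, kcore_loop) is hypothetical, not taken from any real kernel, and the ring-full and wake-up handling is waved away.

```c
/* Hypothetical sketch only: user cores hand requests to a dedicated ring-0
 * core through a shared ring instead of trapping.  All names are made up. */
#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

#define KRING_SLOTS 256                     /* power of two, so masking works */

struct kreq {
    uint32_t op;                            /* hypothetical request code */
    uint64_t args[4];
    uint64_t result;                        /* written by the kernel core */
    _Atomic int done;                       /* 0 = pending, 1 = completed */
};

struct kring {
    _Atomic uint32_t head;                  /* next slot a user core claims */
    struct kreq *_Atomic slots[KRING_SLOTS];
};

/* Called on a user core (ring 3): publish the request, then wait. */
uint64_t submit_request(struct kring *r, struct kreq *req)
{
    atomic_store(&req->done, 0);
    uint32_t slot = atomic_fetch_add(&r->head, 1) & (KRING_SLOTS - 1);
    atomic_store(&r->slots[slot], req);     /* real code must handle a full ring */
    while (!atomic_load(&req->done))
        ;                                   /* or block and run something else */
    return req->result;
}

/* Runs forever on the dedicated ring-0 core, draining the ring in order. */
void kcore_loop(struct kring *r)
{
    for (uint32_t tail = 0;; ) {
        uint32_t idx = tail & (KRING_SLOTS - 1);
        struct kreq *req = atomic_load(&r->slots[idx]);
        if (!req)
            continue;                       /* nothing queued; real code might halt */
        /* ...dispatch req->op entirely in kernel context here... */
        req->result = 0;
        atomic_store(&r->slots[idx], NULL);
        atomic_store(&req->done, 1);
        tail++;
    }
}
```

Even in this form the requesting core still has to wait for the single kernel core to get around to its request; only the ring transition is avoided.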
thepowersgang
Member
Posts: 734
Joined: Tue Dec 25, 2007 6:03 am
Libera.chat IRC: thePowersGang
Location: Perth, Western Australia

Re: Multiple cores, multiple rings, microkernels

Post by thepowersgang »

"Context switch" in this argument doesn't mean ring changes (those are pretty cheap nowadays with SYSENTER/SYSCALL).

Context switch means changing address spaces (which means flushing caches, which is expensive)
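
A minimal x86-64 sketch of what that switch boils down to, assuming GCC/Clang inline assembly; struct task and its fields are hypothetical:

```c
/* Hypothetical sketch: the expensive part of a process context switch is
 * loading a new page-table base, not the ring transition. */
#include <stdint.h>

struct task {
    uint64_t cr3;      /* physical address of this task's page tables */
    /* saved registers, kernel stack pointer, etc. would live here */
};

static inline void load_cr3(uint64_t cr3)
{
    /* Writing CR3 invalidates the non-global TLB entries, so the next
     * task starts with cold translations (and, on architectures with
     * virtually-indexed caches, cold caches as well). */
    __asm__ volatile("mov %0, %%cr3" : : "r"(cr3) : "memory");
}

void switch_address_space(struct task *prev, struct task *next)
{
    /* Threads of one process share a CR3; skipping the reload is why a
     * thread switch is much cheaper than a full process switch. */
    if (prev->cr3 != next->cr3)
        load_cr3(next->cr3);
}
```

A SYSENTER/SYSCALL into the same address space skips that reload entirely, which is why the ring change by itself is cheap.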
Kernel Development, It's the brain surgery of programming.
Acess2 OS (c) | Tifflin OS (rust) | mrustc - Rust compiler
Currently Working on: mrustc
bwat
Member
Posts: 359
Joined: Fri Jul 03, 2009 6:21 am

Re: Multiple cores, multiple rings, microkernels

Post by bwat »

iamtim wrote: From what I understand, the main complaint about microkernels is that the increased number of context switches causes a performance hit.

My CPU has 8 cores, my next one will probably have 16.

Why not dedicate one core to running in ring 0 while the rest stay in ring 3?

Or maybe instead of dedicating a core, the scheduler could just be programmed to try to keep processes in the same ring on the same core.

Is there any reason why this wouldn't work?
All non-compute-bound processes will be running on the same core. It wouldn't be surprising to see some cores at high load whilst other cores sit idle. See Chapter 9 of UNIX Systems for Modern Architectures (Schimmel) for the master-slave kernel design, and for why high load is bad see http://www.treewhimsy.com/TECPB/Article ... sights.pdf
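
To put rough numbers on that imbalance, here is a tiny back-of-the-envelope calculation; the 20% kernel-time figure is an assumption picked for illustration, not a measurement:

```c
/* Illustration of why funnelling all ring-0 work through one core is a
 * bottleneck: the single kernel core must absorb the kernel-mode work of
 * every user core.  The input numbers are assumptions, not measurements. */
#include <stdio.h>

int main(void)
{
    int ncores = 16;                 /* the hypothetical next machine */
    double kernel_fraction = 0.20;   /* assumed share of time spent in ring 0 */

    /* Kernel-mode work offered by the other cores, in whole cores' worth. */
    double offered = (ncores - 1) * kernel_fraction;

    printf("offered kernel load: %.1f cores' worth on 1 kernel core\n", offered);
    if (offered > 1.0)
        printf("the kernel core saturates; user cores queue up behind it\n");
    return 0;
}
```

With those numbers the kernel core is offered three cores' worth of work, so non-compute-bound processes spend most of their time queued behind it while the user cores idle.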
thepowersgang wrote: Context switch means changing address spaces (which means flushing caches, which is expensive)
You're assuming separate address spaces and making other assumptions about cache design. See the first seven chapters of UNIX Systems for Modern Architectures (Schimmel) for cache design and how it impacts context switches in UNIX-like process designs.
Every universe of discourse has its logical structure --- S. K. Langer.