Minimalist Microkernel Memory Management
Re:Minimalist Microkernel Memory Management
What do you need asynchronous procedure calls for? I think you've confused it slightly:
Thread 1: main -> CreateThread -> Func1 -> Func2
Thread 2: ThreadMain -> Func1 -> Func3
Procedure calls are meant to be synchronous, but each thread is asynchronous from other threads.
The same logic applies here: the server is an entirely separate process executing concurrently, and synchronously calling the service stalls execution in the 'caller' whilst waiting for the 'callee'. Some things should be synchronous, others shouldn't. E.g. fread() should be synchronous unless it is explicitly told not to be, but DrawRectangle() should be asynchronous; the program should not have to wait for the GUI server to draw a primitive before continuing execution (there is no meaningful information returned, so why bother waiting?).
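To make the distinction concrete, here's a minimal sketch in plain C++ (nothing OS-specific; the file name and the draw routine are made up for illustration) of a call that's worth blocking on versus a request that can simply be fired off:
[pre]
// Sketch: a call worth waiting for (fread) versus a request that can be
// fired off to the GUI side without waiting for an answer.
#include <chrono>
#include <cstdio>
#include <thread>

void draw_rectangle(int x, int y, int w, int h) {
    // stand-in for the GUI server actually rendering the primitive
    std::printf("drawing %dx%d rectangle at (%d,%d)\n", w, h, x, y);
}

int main() {
    // Synchronous: the caller genuinely needs the bytes before it can continue.
    char buf[64];
    if (std::FILE* f = std::fopen("data.bin", "rb")) {
        std::fread(buf, 1, sizeof buf, f);
        std::fclose(f);
    }

    // Asynchronous: nothing meaningful comes back, so don't wait for it.
    std::thread gui_request(draw_rectangle, 0, 0, 100, 50);
    gui_request.detach();   // the caller carries on immediately

    // ...more work could happen here, overlapping with the draw...
    std::this_thread::sleep_for(std::chrono::milliseconds(10)); // only so the demo's draw finishes
    return 0;
}
[/pre]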
Re:Minimalist Microkernel Memory Management
You haven't seen a language that does since a language isn't supposed to meddle with those things. It can support stuff that supports those threads, but it cannot just start a thread.JoeKayzA wrote: That brings to mind another question: Are our current programming languages (derived somehow from C or Pascal) really that suitable for today's growing requirements in the field of concurrent programming...I've not yet seen a language that directly allows for asynchronous procedure calls.
If you count system-defined functions, count C++ on AtlantisOS (that is, when it's finished). It's going to support a tcall function, which is like a normal call but asynchronous: it starts a thread to do the call and returns immediately. Useful for stuff like long operations on which your program doesn't directly depend, and such. Oh, and of course, for showing off your design on Mega-Tokyo.
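For comparison, a rough standard-C++ equivalent of that call/tcall pair might look like the sketch below; std::async is only standing in for the idea, this isn't AtlantisOS's actual tcall:
[pre]
// Ordinary (blocking) call versus a "tcall"-style asynchronous call.
#include <future>
#include <iostream>

long count_primes_below(long n) {               // some long-running operation
    long count = 0;
    for (long i = 2; i < n; ++i) {
        bool prime = true;
        for (long d = 2; d * d <= i; ++d)
            if (i % d == 0) { prime = false; break; }
        count += prime;
    }
    return count;
}

int main() {
    long sync_result = count_primes_below(200000);          // normal call: blocks

    // "tcall"-style: start a thread to do the call and return immediately;
    // the result is only waited for when (and if) it's actually needed.
    std::future<long> async_result =
        std::async(std::launch::async, count_primes_below, 200000);

    std::cout << "sync:  " << sync_result << '\n';
    std::cout << "async: " << async_result.get() << '\n';   // join point
    return 0;
}
[/pre]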
C-Omega != C-Sharp. Current C# doesn't have enough "abstraction" of concurrency to allow anything to support multithreading, except for the threading classes themselves. If you want to use collections from more than one thread - sorry, gotta mutex them yourself. Wanna use graphical elements - nope, no mutexes either, just very vague crashes. Wanna try to use multithreading for heavy calculation? Don't return intermediate results, since you will slow it down by a lot.Colonel Kernel wrote: http://research.microsoft.com/Comega/
Check out the papers on concurrency abstractions in C#.
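To show what "mutex them yourself" means in practice, here's roughly the boilerplate being complained about, sketched in C++ rather than C# since that's the language used for code elsewhere in this thread:
[pre]
// A shared container is only safe if every access goes through a lock
// that you add by hand around it.
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

struct SharedResults {
    void push(int v) {
        std::lock_guard<std::mutex> lk(m);
        values.push_back(v);
    }
    std::size_t size() {
        std::lock_guard<std::mutex> lk(m);
        return values.size();
    }
private:
    std::mutex m;
    std::vector<int> values;
};

int main() {
    SharedResults results;
    std::thread a([&] { for (int i = 0; i < 1000; ++i) results.push(i); });
    std::thread b([&] { for (int i = 0; i < 1000; ++i) results.push(-i); });
    a.join();
    b.join();
    std::cout << results.size() << '\n';   // 2000; without the lock this is undefined behaviour
    return 0;
}
[/pre]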
Not that it matters, I've timed my program. It analyses data in C# and can do so concurrently (with mutexes, with barely any communication). It should thus run at about... 95% of normal speed.
Analysing 6 big files with at most 1 thread (i.e., all in a row) takes around 8 minutes. Doing them in parallel takes around 30 minutes. I've set the default limit to 1 so it's faster...
What's the point of multithreading if the OS slows it down so much?
On the concept of multithreading and speed... Why is it slow?
(think for a while)
1. Each thread has startup overhead, shutdown overhead, security overhead and concurrency overhead (mutexes, monitors etc.).
2. Each thread switch is accompanied by a switch in working set. This means that the pages currently paged in will be a poorer fit for the running thread, and the same concept also applies at the cache level (so no, keeping all pages in doesn't help much).
You can reduce the first one to near nothing, given a very fast kernel. It should be less than 100 cycles in a very optimized implementation (I'm aiming for around 50 cycles for thread creation, but I'm hoping it'll be below 100 if I don't make 50). You can hardly reduce the second one, except by reducing the number of switches; reducing that, however, reduces the responsiveness of the OS.
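If you want to put rough numbers on the first kind of overhead yourself, something like the following C++ snippet measures the create/join cost through a stock OS. Expect microseconds per thread, not the ~50-100 cycle kernel-path target mentioned above - which is exactly the point being made:
[pre]
// Rough measurement of per-thread creation + teardown cost on a stock OS.
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    const int kThreads = 1000;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kThreads; ++i) {
        std::thread t([] { /* empty body: measure pure spawn/join cost */ });
        t.join();
    }
    auto stop = std::chrono::steady_clock::now();
    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start);
    std::cout << "avg create+join: " << ns.count() / kThreads << " ns/thread\n";
    return 0;
}
[/pre]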
Conclusion (for this moment): Multithreading is slow on single-processor systems.
Future-view(tm) put on:
Computers will each have around 5-10 processors, each about as powerful as a 1.5GHz current processor (of the older types: Duron, P3, etc.). They'll support large amounts of memory and will be capable of actual multithreading. Memory bandwidth will have increased, yet at too slow a pace to cope with multiple processors. A common setup will be a few regions of memory (2-4) shared among groups of processors. The OS will probably be replicated in each region for speed.
In this environment, the advantage of caching will fairly quickly fade, except for read-only stuff. Programs that can be multithreaded, however fine-grained the threading is, will be faster than non-multithreaded ones, simply because they run on more than one processor.
For an OS this would mean that you'd have to support creating threads very quickly, dying threads should also be handled efficiently and it should be possible for each program to have a sort of "dead thread pool", in which the stack & such are still allocated, except that the thread itself is dead. This allows for very quick thread creation, since everything you need is there already, it just isn't in use. Activating such a thread should be as fast as possible, if possible below 10 cycles, although I doubt it's possible. It must be doable within around 50 cycles.
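Here's a user-level sketch in C++ of that "dead thread pool" idea: the workers (and their stacks) stay allocated and parked on a condition variable, and "activating" one is just a queue push plus a notify rather than a full thread creation. The cycle counts above obviously refer to a kernel-level version; this only shows the shape of it:
[pre]
// "Dead thread pool": threads and stacks stay allocated; reviving one is
// a push + notify, not a full create.
#include <condition_variable>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class DeadThreadPool {
public:
    explicit DeadThreadPool(unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] { park_loop(); });
    }
    ~DeadThreadPool() {
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }
    // "Activate a dead thread": hand it work and wake it up.
    void activate(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m_); jobs_.push(std::move(job)); }
        cv_.notify_one();
    }
private:
    void park_loop() {
        std::unique_lock<std::mutex> lk(m_);
        for (;;) {
            cv_.wait(lk, [&] { return stop_ || !jobs_.empty(); });
            if (jobs_.empty()) return;                    // stopping, nothing left to do
            auto job = std::move(jobs_.front());
            jobs_.pop();
            lk.unlock(); job(); lk.lock();                // run the job, then go back to "dead"
        }
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> jobs_;
    std::mutex m_;
    std::condition_variable cv_;
    bool stop_ = false;
};

int main() {
    DeadThreadPool pool(4);                               // stacks allocated up front
    for (int i = 0; i < 8; ++i)
        pool.activate([i] { std::printf("job %d on a revived thread\n", i); });
    return 0;                                             // destructor drains the queue and joins
}
[/pre]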
EOR (end of rant)
Re:Minimalist Microkernel Memory Management
I thought of a programming language that has a thread pool integrated in its runtime, and that has embedded functionality to create a distinct thread (or wake an idle thread) just for executing a procedure, when the call is meant to be async. I know - this is mostly cosmetic, but I think with even desktop and laptop processors becoming dual-cored nowadays, highly multi-threaded (or distributed over processes) apps could become significantly faster than single threaded ones. What do you need asynchronous procedure calls for? I think you've confused it slightly:
Thread 1: main -> CreateThread -> Func1 -> Func2
Thread 2: ThreadMain -> Func1 -> Func3
Procedure calls are meant to be synchronous, but each thread is asynchronous from other threads.
That's what I meant! Future-view(tm) put on:
Computers will each have around 5-10 processors, each about as powerful as a 1.5GHz current processor (of the older types: Duron, P3, etc.). They'll support large amounts of memory and will be capable of actual multithreading. Memory bandwidth will have increased, yet at too slow a pace to cope with multiple processors. A common setup will be a few regions of memory (2-4) shared among groups of processors. The OS will probably be replicated in each region for speed.
cheers Joe
Re:Minimalist Microkernel Memory Management
I couldn't agree more with you here: Synchronous IPC and multi-processor systems don't mix too well. The reason why Liedtke decided to go for it anyway was probably that back then such systems were rather rare (which is now changing..) and that it's not trivial to allow asynchronous IPC without adding a lot of complexity to the messaging system. Brendan wrote:That's why I don't like the synchronous send-receive-reply stuff - too much like call/ret and not enough like multi-threading. For e.g. consider what happens on a dual CPU computer where there's one thread that needs to send 3 independent requests to a server
btw: The mp-example doesn't look very realistic to me, unless it takes ages to send a message (otherwise the clients will clearly dominate over it). Here's how it should look in my opinion (a minimal scatter/gather sketch follows the outline):
task A = window manager (server)
task B, C, D, E = apps that own a window (clients)
(Some window was moved and the clients must be informed so that they can update their view)
- task A runs on CPU0 and sends out a message to each client, then sleeps
- tasks B-E are now scheduled using all of the system's CPUs
- Once all the results have returned, task A might run again
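A minimal scatter/gather sketch of that pattern, with std::async standing in for asynchronous IPC and all the names made up:
[pre]
// Task A notifies every client first, then collects the acknowledgements,
// so clients B-E can repaint in parallel on whatever CPUs are free.
#include <future>
#include <iostream>
#include <vector>

struct RepaintAck { int client_id; };

RepaintAck client_repaint(int client_id) {
    // stand-in for a client redrawing its newly exposed window area
    return RepaintAck{client_id};
}

int main() {
    std::vector<std::future<RepaintAck>> pending;

    // Send all four notifications before waiting on any of them.
    for (int client = 1; client <= 4; ++client)
        pending.push_back(std::async(std::launch::async, client_repaint, client));

    // Task A effectively "sleeps" here until the results come back.
    for (auto& f : pending)
        std::cout << "client " << f.get().client_id << " repainted\n";
    return 0;
}
[/pre]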
Application programmers aren't in any way forced to write their own abstractions in an exokernel. The normal case is that they build their program on top of a library that provides some comfort, rather than on the raw kernel, just as is often done with a monolithic kernel. The library acts as a wrapper that adds a level of abstraction and thus shields the programmer from having to deal with the exokernel's low-level functionality. Brendan wrote:How come nano/exo-kernels have such a bad reputation? I think it comes down to people ignoring what most OS's get used for - running "applications" (which includes things like Apache servers, etc). Applications programmers couldn't give a rat's behind what the kernel does or how much policy could be changed - they're writing Java, VisualBasic and POSIX/ANSI C code that needs to work. Often performance is less important than development time, often it needs to be portable, and almost always the application programmer has better things to do than understanding and writing new "system modules" or implementing different policies.
The main advantage is that application programmers can choose the library (and therefore policy, interface, etc) themselves according to the needs of their application, and if some asm-geeks really want to use the raw exo-kernel they can also do so. If the application programmer, however, really doesn't care about the policy (which I dare to doubt), all he has to do is include the standard library at the beginning of his code.
Let's take the window manager as an example because it's easier to demonstrate it for high level servers.
[pre]
o---------------------o o-----------------o
| graphic-card driver | o----o | terminal server |
o---------------------o o-----------------o
[/pre]
On the lowest level there is the graphic-card driver that has to multiplex the device. In this case I'd propose to use 'windows of pixels' (left, right, top, bottom) as the primitive that will be used to ensure security. Note that we'll limit ourselves to 2D here because the 3D part (normally) isn't needed for windows and is, apart from that, too poorly documented anyway.
At system start-up the root manager (I called it the terminal manager because in my design it'll synchronize video, keyboard and mouse) is started and allocates a window that spans the whole screen.
Applications can now allocate a 'window' by sending a message to the root-server. In return they get a capability that authorizes them to output pixels, resize the window, move it and eventually destroy it.
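Purely as an illustration, the request and the capability involved might be shaped something like this; every name below is hypothetical and not taken from any real kernel:
[pre]
// Hypothetical message/capability shapes for the pixel-window scheme above.
#include <cstdint>

// The primitive the driver multiplexes: a rectangle of pixels.
struct PixelWindow {
    int left, top, right, bottom;
};

// Rights the root server can grant on a window it hands out.
enum WindowRights : std::uint32_t {
    WIN_DRAW    = 1u << 0,
    WIN_RESIZE  = 1u << 1,
    WIN_MOVE    = 1u << 2,
    WIN_DESTROY = 1u << 3,
};

// What the client gets back: an unforgeable handle plus the granted rights.
struct WindowCapability {
    std::uint64_t handle;      // validated by the root server on every request
    std::uint32_t rights;      // some subset of WindowRights
    PixelWindow   extent;      // the region this capability covers
};

// The request an application sends to the root (terminal) server.
struct AllocWindowRequest {
    PixelWindow   wanted;      // desired on-screen rectangle
    std::uint32_t rights;      // rights being asked for
};

int main() {
    AllocWindowRequest req{PixelWindow{0, 0, 639, 399},
                           static_cast<std::uint32_t>(WIN_DRAW | WIN_RESIZE)};
    // send_to_root_server(req) would go here; the reply carries a WindowCapability.
    return (req.rights & WIN_DRAW) ? 0 : 1;
}
[/pre]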
What most people get wrong is that they think the poor application programmer has to build all the abstractions (toolbars, captions, menus) himself. The normal case is however that he just includes a library:
[pre]
#include <std_windows.h>

int main()
{
    CreateWindow(...);    // over-all look: caption bar, etc.
    CreateToolbar(...);
    CloseWindow(...);
    return 0;
}
[/pre]
Of course he can also use an alternative library, possibly - but not necessarily - a self-written one.
Yes, most people won't care but there are some that do care and they should be given the chance (a tall person might want to use a different seat, a handicapped person needs special pedals, etc).Brendan wrote:It's sort of like designing a car where the steering wheel, gearstick and pedals can be easily removed and replaced with something completely different - a waste of time considering that most people just want to go to the shops.
regards,
gaf
Re:Minimalist Microkernel Memory Management
Hi,
IMHO the problem isn't the languages themselves, but the libraries. I've not had any problem with assembly, and for POSIX there are asynchronous file IO functions (too little too late, but it's a start).JoeKayzA wrote:That brings to mind another question: Are our current programming languages (derived somehow from C or Pascal) really that suitable for today's growing requirements in the field of concurrent programming...I've not yet seen a language that directly allows for asynchronous procedure calls.That's why I don't like the synchronous send-receive-reply stuff - too much like call/ret and not enough like multi-threading.
There's plenty of servers that are CPU bound - GUI, font engines, encryption, compression, etc. For my OS there won't be any shared libraries or DLLs either (they'll all be "servers" running as separate threads). For file IO it's also fairly important, especially for computers with more than one hard drive (or a distributed OS) where the hardware itself can do file operations in parallel. Even when the hardware can't do file operations in parallel, making all the requests at once allows the hard disk drive to optimize/re-order hard drive access (to minimize seek times). Colonel Kernel wrote:@Brendan:
IMO your examples are only really meaningful if both the client and the server are CPU bound. In terms of servers, I don't think many OS services fall under that category. In terms of clients, most clients would be stuck waiting for the server to do useful work anyway. If a client thread wants to make an asynchronous request to a server, it can always queue a request to a pool of worker threads, any of which could handle communication with the server on its behalf. Or, the server thread that handles requests could ship them off to its own thread pool to distribute the load and reply to the client immediately. Either way works, and both are common idioms in my experience.
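To make the "issue all the requests at once" point concrete, here's one user-level way of doing it in C++, with std::async standing in for queuing requests to a file-system server:
[pre]
// Issue several file reads at once instead of one after another, so the
// layer underneath is free to reorder or merge them.
#include <fstream>
#include <future>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>

std::string read_whole_file(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    return std::string(std::istreambuf_iterator<char>(in),
                       std::istreambuf_iterator<char>());
}

int main() {
    const std::vector<std::string> paths = {"a.dat", "b.dat", "c.dat"};

    std::vector<std::future<std::string>> requests;
    for (const auto& p : paths)                       // all requests go out first
        requests.push_back(std::async(std::launch::async, read_whole_file, p));

    for (std::size_t i = 0; i < paths.size(); ++i)    // then gather the results
        std::cout << paths[i] << ": " << requests[i].get().size() << " bytes\n";
    return 0;
}
[/pre]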
It should also be remembered that my simple example was simple. By the time you extend it with more threads, more CPUs, more complex interactions and hardware delays the example becomes too complicated to convey the concept effectively.
I'd assume it'd depend a lot on circumstance (how many CPUs, how much can be done in parallel, how much the client needs to wait anyway, etc) and how well optimized/designed the software is.Colonel Kernel wrote:Either you make kernel-level IPC synchronous and really fast, or you add more complexity in the kernel for asynchronous IPC and you try to get rid of thread pools. It isn't clear to me which approach would perform better, but to me they seem functionally equivalent from the application developer's point of view.
We need benchmarks!
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re:Minimalist Microkernel Memory Management
Hi,
Check this out - dual core chips with hyper-threading (4 logical CPUs per chip):Candy wrote:Future-view(tm) put on:
Computers will each have around 5-10 processors, each about as powerful as a 1.5GHz current processor (of the older types: Duron, P3, etc.). They'll support large amounts of memory and will be capable of actual multithreading. Memory bandwidth will have increased, yet at too slow a pace to cope with multiple processors. A common setup will be a few regions of memory (2-4) shared among groups of processors. The OS will probably be replicated in each region for speed.
http://www.intel.com/products/processor/pentiumXE/
How long before someone puts a set of these into a 4-way motherboard?
For memory bandwidth (from what I've heard), Intel will be building the memory controller into the CPU "relatively" soon, so that both Intel and AMD multi-socket motherboards will be NUMA (where each chip has its own local memory, but can still access all other memory too). My crystal ball seems to show AMD chips with extra hyper-transport links for memory (a separate memory bank for each core), but the old crystal ball is still a little fuzzy on that one.
I also think we'll see "quad-core" chips well before any CPU manufacturer gets close to 6 GHz - they've been pushing to get past 4 GHz for a few years now with varying success...
Of course it's all in the future, but IMHO it's measured in months rather than years. In any case, whatever comes out of those chip fabrication plants I want to be ready for it.
Not necessarily - reducing the amount of thread spawning and termination is another option (server threads that run indefinitely, with applications made of smallish client threads). In this case the cost of spawning the application's client threads would normally be negligible compared to other application start-up costs (disk IO, etc). Candy wrote:For an OS this would mean that you'd have to support creating threads very quickly, dying threads should also be handled efficiently and it should be possible for each program to have a sort of "dead thread pool", in which the stack & such are still allocated, except that the thread itself is dead. This allows for very quick thread creation, since everything you need is there already, it just isn't in use. Activating such a thread should be as fast as possible, if possible below 10 cycles, although I doubt it's possible. It must be doable within around 50 cycles.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re:Minimalist Microkernel Memory Management
I have to correct myself in this regard, I remember now I've actually seen one: Limbo (for the Inferno platform). It had a keyword 'spawn' which you put in front of a function call to launch that call in its own thread. But I agree that this is quite the same as calling a system function and passing a pointer to an initial function (with some flags for the thread) to launch a new thread. So your 'tcall' thing might actually be what I was talking about... Candy wrote: You haven't seen a language that does since a language isn't supposed to meddle with those things. It can support stuff that supports those threads, but it cannot just start a thread.
EDIT: I also remember now (it was quite hot in the office today, so thoughts might have got a bit blurred): The reason why I said that these things should be embedded in the language was also to hide the borders between processes when an application gets distributed over multiple CPUs or nodes in a network. The intent was that inside a process of its own, running subroutines synchronously might be more efficient (think of thread creation overhead), but in the distributed case, running them in parallel could be faster (again, you might have a look at my OS design to see why that matters to me). But I've come to the conclusion now that these issues should better be solved at a higher level anyway...
cheers Joe
Re:Minimalist Microkernel Memory Management
Hi,
I still think most applications programmers just want the standard ANSI or POSIX library, or the standard JAVA runtime, or the standard .NET framework, or the standard win32 API. It's a curse that I've been trying to find a work-around for, as it's the same problem with asynchronous IPC (it differs from the "standard" so most applications programmers don't want to know). gaf wrote:The main advantage is that application programmers can choose the library (and therefore policy, interface, etc) themselves according to the needs of their application, and if some asm-geeks really want to use the raw exo-kernel they can also do so. If the application programmer, however, really doesn't care about the policy (which I dare to doubt), all he has to do is include the standard library at the beginning of his code.
I'm taking a different approach here - simplified, the only thing used by the video driver is the GUI and the GUI handles the connections to each "client". For video the client sends a script describing how the GUI should draw its window (a rough sketch of such a script follows below). This means the GUI can keep the scripts in case the window needs to be redrawn (or store "windows" in the video card's off-screen memory if the video driver supports 2D acceleration). Most applications would send a script saying "draw this icon here, put this menu there, put a radio button over there", etc so that the application doesn't need any library (smaller apps, and the same "look and feel" for everything as determined by the GUI's settings). The application can also send a script saying "display the following raw pixel data" where the raw pixel data is generated from its own library or code (mostly for games). Alternatively the application can generate its own video data for some things and use the scripts for others - for e.g. a GUI generated menu combined with one or more application generated areas. gaf wrote:Let's take the window manager as an example because it's easier to demonstrate it for high level servers.
[snip]
What most people get wrong is that they think the poor application has to build all abstractions (toolbar, captions, menus) himself. The normal case is however that he just includes a library:
[snip]
Of course he can also use an alternative library, possibly - but not necessarily - a self-written one.
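Purely as a guess at the flavour (the real format isn't designed yet, so every command type here is invented), such a "window script" might amount to a replayable list of tagged drawing commands that the GUI interprets and keeps around for redraws:
[pre]
// Invented "window script": a replayable list of drawing commands that the
// GUI server can interpret and redraw on its own.
#include <cstdint>
#include <string>
#include <variant>
#include <vector>

struct DrawIcon   { int x, y; std::uint32_t icon_id; };
struct DrawMenu   { int x, y; std::vector<std::string> items; };
struct RadioGroup { int x, y; std::vector<std::string> options; int selected; };
struct RawPixels  { int x, y, w, h; std::vector<std::uint32_t> argb; };

using ScriptCommand = std::variant<DrawIcon, DrawMenu, RadioGroup, RawPixels>;

// What an application would send to the GUI instead of linking a widget library.
using WindowScript = std::vector<ScriptCommand>;

int main() {
    WindowScript script;
    script.push_back(DrawIcon{8, 8, 42});
    script.push_back(DrawMenu{0, 0, {"File", "Edit", "Help"}});
    script.push_back(RadioGroup{8, 40, {"Fast", "Accurate"}, 0});
    // send_script_to_gui(window_capability, script);  // hypothetical IPC call
    return script.size() == 3 ? 0 : 1;
}
[/pre]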
Sure, but these modifications are normally either built into the basic car as adjustments (e.g. an adjustable seat for longer legs, rather than a completely replaceable seat) or done by third party companies because it's not cost effective for the original manufacturer to care (pedals for the handicapped). The former equates to a monolithic kernel that allows adjustments to its policies (e.g. linux's "real time" scheduling option that can be used instead of the normal scheduling), while the latter equates to someone modifying the linux source code. gaf wrote:Yes, most people won't care but there are some that do care and they should be given the chance (a tall person might want to use a different seat, a handicapped person needs special pedals, etc). It's sort of like designing a car where the steering wheel, gearstick and pedals can be easily removed and replaced with something completely different - a waste of time considering that most people just want to go to the shops.
I'll freely admit it was a biased analogy to start with, as cars aren't built in a modular way (something that's always annoyed me that probably has more to do with the price that can be charged for spare parts than common sense).
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re:Minimalist Microkernel Memory Management
Yes, I know. But the first paper on the "chords" concept in C-Omega is nevertheless titled "Modern Concurrency Abstractions in C#". This was before it was named "Polyphonic C#", and eventually merged into C-Omega. Candy wrote: C-Omega != C-Sharp.
I found the chords concept to be oddly intuitive, and not at all like current multithreaded programming.
Top three reasons why my OS project died:
- Too much overtime at work
- Got married
- My brain got stuck in an infinite loop while trying to design the memory manager
Re:Minimalist Microkernel Memory Management
If everybody is happy with the current situation, we should stop wasting our time with childish hobby operating systems, shut down this forum and get a new hobby (collecting stamps is said to be interesting..). I did not start OS dev for the average VisualBasic programmer, but to find ways to overcome the fundamental problems of today's monolithic kernels. In fact I think that this is the only way to be successful with an OS under the current situation, with Windows and Linux already covering the traditional concepts/market. It's only natural that the average programmer has to be convinced of the advantages of a new design first, and if he really doesn't care about the internal design - even better - then he also won't mind that it's an exokernel. Brendan wrote:I still think most applications programmers just want the standard ANSI or POSIX library, or the standard JAVA runtime, or the standard .NET framework, or the standard win32 API. It's a curse that I've been trying to find a work-around for, as it's the same problem with asynchronous IPC (it differs from the "standard" so most applications programmers don't want to know).
So, what's your motivation ?
This looks pretty much like the "normal" way of doing it, with the GUI module of your kernel acting as some kind of glue between apps and driver. Just a few notes: Brendan wrote:I'm taking a different approach here - simplified, the only thing used by the video driver is the GUI and the GUI handles the connections to each "client". For video the client sends a script describing how the GUI should draw its window. This means the GUI can keep the scripts in case the window needs to be redrawn (or store "windows" in the video card's off-screen memory if the video driver supports 2D acceleration). Most applications would send a script saying "draw this icon here, put this menu there, put a radio button over there", etc so that the application doesn't need any library (smaller apps, and the same "look and feel" for everything as determined by the GUI's settings). The application can also send a script saying "display the following raw pixel data" where the raw pixel data is generated from its own library or code (mostly for games). Alternatively the application can generate its own video data for some things and use the scripts for others - for e.g. a GUI generated menu combined with one or more application generated areas.
- Do you already know roughly how the scripts will look? After all, how much the apps can decide for themselves pretty much depends on them..
- Apps don't necessarily have to be bigger for an exokernel system, DLLs could be used to counter the effect. Apart from that, executable size doesn't seem to be a big problem today.
- The "unique look and feel" argument doesn't work very well anymore (at least not here), just have a look at your favorite media-playing application.
"Modifying the linux source" - that's a good one..Brendan wrote:The former equates to a monolithic kernel that allows adjustments to it's policies (e.g. linux's "real time" scheduling option that can be used instead of the normal scheduling), while the latter equates to someone modifying the linux source code.
Due to the inflexible and 30 years old unix design this is something that will always stay out of my scope, at least until I have a private army of coders. Not having to read through 10.000 lines of C code and fixing dependencies throughout the whole kernel is why I'm going for a modular OS..
regards,
gaf
Re:Minimalist Microkernel Memory Management
What makes you so sure that an exokernel is so inherently better than a monolithic kernel? Sure, you overcome the "all in one place" vulnerability at the cost of performance. You make the system modular at the cost of performance. You make the system less vulnerable to damage from (but not preventing) attacks from hackers at the cost of performance.
Exokernels may remove most of the security and stability problems of a monolithic kernel, but they introduce a species of problems that are unique to micro/exo kernels. One of which is simply maintaining good enough performance to be perceived as responsive by the user, but the worst of which is exactly what makes the system more stable and secure to begin with ... the servers. How can you know if the server is trustworthy? How do you know the user authorized replacing a server with another one? How do you prevent a collection of servers from working together to steal the user's files? How do you know if the server is malfunctioning and not fulfilling its duties? etc.
I am personally aiming for a micro/exo myself anyway (depending on how it works out I may or may not have memory management and process management as servers in kernel space as opposed to in the kernel) but I am yet to find a solution for the first 3 problems, the only thing that has come to mind so far is requiring code signing of all servers and blocking known evil signatures but that is a post-fix after the user is infected (I want anti-virus software to be unnecessary), or solution B is to require all servers be signed with my signature but developers may not take too well to having to submit their source to me for compiling and signing.
I personally just want to build something better than what's already available [it adds to my 1337ness basically, and I just find it fun]. I don't particularly care if no-one uses it, and frankly I believe anyone who thinks they're going to be commercially successful in the mass-market is kidding themselves. Brendan's OS sounds as though it has potential for specific applications, for example, but it is highly unlikely that anyone will dislodge Microsoft from the x86 (Market Inertia - people won't use your product even if it's better, because they prefer to stay with what they know. Then there's the Existing Software Library - Windows has the largest range of every type of software; the only way to get around this really is to integrate Wine, which isn't finished, and is built for X and POSIX, and largely Linux specifically) unless they go out of business, in which case the marketing powers of IBM will switch everyone to Linux anyway. So, what's your motivation ?
The truth of the real world is; it's not quality that counts, but the quantity of advertising.
Re:Minimalist Microkernel Memory Management
Hi,
I believe the future is computers with many CPUs and that current OSs & programming practice don't get the most from it. I also want a peer to peer distributed OS capable of handling a cluster consisting of different platforms. I guess I don't like Windows or *nix much either. gaf wrote:If everybody is happy with the current situation, we should stop wasting our time with childish hobby operating systems, shut down this forum and get a new hobby (collecting stamps is said to be interesting..). I did not start OS dev for the average VisualBasic programmer, but to find ways to overcome the fundamental problems of today's monolithic kernels. In fact I think that this is the only way to be successful with an OS under the current situation, with Windows and Linux already covering the traditional concepts/market. It's only natural that the average programmer has to be convinced of the advantages of a new design first, and if he really doesn't care about the internal design - even better - then he also won't mind that it's an exokernel.
So, what's your motivation ?
Almost everywhere I go there's a LAN with N computers, where a single application can only use the resources of one of those computers while the remaining N-1 computers spend most of their time idle. Another annoyance is that each computer is only used by one user at a time, despite the hardware being capable of handling multiple video cards, multiple keyboards, etc. A typical office with 10 computers could be reduced to 5 computers (2 users per computer) while increasing performance and providing additional features.
I guess I'm hoping that the distributed features of the OS are enough to create a niche market, and that the OS can expand from this niche.
Not really - it's actually quite difficult to design it well (or design it so that any 2D or 3D accelerators in the video card can be used to do as much of the actual work as possible). I've got a lot of ideas, but won't have any formal design until just before I start implementing it. gaf wrote:This looks pretty much like the "normal" way of doing it, with the GUI module of your kernel acting as some kind of glue between apps and driver. Just a few notes:
- Do you already know roughly how the scripts will look? After all, how much the apps can decide for themselves pretty much depends on them..
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re:Minimalist Microkernel Memory Management
I think that most people really overestimate the loss in performance connected with a µkernel design. Back in the days of the first µkernels, which were mostly based on a slightly adapted monolithic design, this might have been true, but more recent systems like L4 and QNX have proven that a much more efficient implementation is possible. Brendan has already posted a nice quotation about this: AR wrote:Exokernels may remove most of the security and stability problems of a monolithic kernel, but they introduce a species of problems that are unique to micro/exo kernels. One of which is simply maintaining good enough performance to be perceived as responsive by the user.
"One of L4's goals was to demonstrate that the microkernel concept was not a flawed concept, and that acceptable performance was achievable simply by focusing on fixing the problems and making the system more machine specific [..]"
While one could argue whether monolithic kernels couldn't also be built in a more machine-specific way or not, this quotation at least shows that modern µkernels can also compete with today's monolithic systems when it comes to performance.
What remains is a rather small performance disadvantage that will, in my opinion, become less and less important in the future, because CPU performance is seldom a bottle-neck today. I'm aware that there are still highly CPU bound tasks, but they are rather rare, and you'll hopefully all agree that the gap between CPU speed and I/O latency and throughput keeps on increasing with every new generation of computers.
Apart from that one should also not forget that a more modular design might also result in increased performance for certain specialized applications whose requirements can't be satisfied by traditional kernel APIs.
The servers are part of the OS and therefore have to be trustworthy, just like the drivers in any design have to be trusted. Even in an exo-kernel a driver can't be kept from screwing up its own device (eg: disk driver deletes the harddisk, spyware in the driver). AR wrote:How can you know if the server is trustworthy? How do you know the user authorized replacing a server with another one? How do you prevent a collection of servers from working together to steal the user's files? How do you know if the server is malfunctioning and not fulfilling its duties? etc.
I think it's also important to mention that servers in an exokernel can't be compared to the traditional servers as they can be found in µkernels or sometimes even monolithic systems (GUI in Linux, etc). In 1st generation µkernels the servers are usually by some means privileged and export an abstraction to the user that doesn't differ much from what a monolithic kernel would provide, with the only exception that the policy is now in user-space and can therefore be exchanged. The problem that applications cannot freely decide which policy they want to use however still remains to a certain extent (an application can't just start a new server for itself), so exokernels follow a different approach that can best be summarized as "It's not enough if the policy is just in user-space, it has to be under user-control".
In fact the original exokernel by Engler and Kaashoek doesn't even have any servers; applications directly access the resources through the multiplexing interface ("low level drivers") offered by the kernel itself. In my opinion this however causes problems whenever a global policy has to be ensured that provides a certain level of fairness and keeps apps from just hogging all the resources for themselves.
I therefore reintroduced servers in my own design, but they are only meant to manage access rights to the system resources and can thus be kept very simple. The number of servers will also be much smaller than in a µkernel because no specific policy is included, so that they can be used under a variety of system configurations.
(message - too - long - must - split)
Re:Minimalist Microkernel Memory Management
Here's a small example:
[pre]
microkernel:
o-----------------o o--------------o
| terminal server | o---o----o | KDE Server |
o-----------------o | o--------------o
|
| o--------------o
o----o | Win32 GDI |
o--------------o
exokernel:
o-----------------o o---------------------------o
| terminal server | o---o----o | upper half of the screen |
o-----------------o | o---------------------------o
|
| o---------------------------o
o----o | lower half of the screen |
o---------------------------o
[/pre]
I decided to use the same example as in my last post to provide some consistency. The 'terminal server' is the GUI root server; it has full access to screen/console, keyboard and mouse.
In the microkernel the two user-level servers "win32" and "KDE" are privileged and can therefore access the terminal server. They provide a full abstraction of their resource, and applications are only free to choose the server that is less bad for them. Since they can't access the terminal server directly, all communication has to go through one of the GUI servers first, which is not a big problem here, but can cause a quite drastic performance loss in systems with more levels of stacked servers.
In the exokernel there are also two servers which divide the screen (let's assume we're using the x86 console) into two halves and connect a minimum policy with each of the two halves:
* The upper half of the console is for applications, everybody can use it
* The lower half is reserved for tasks that belong to the system (group ID) and need to output debug infos et cetera
An application only has to communicate with the server if it wants to allocate resources from it or free them; accessing the resources is done directly using the exokernel interface, so the stacked-servers problem is eliminated.
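A small sketch of that split, with every name hypothetical: the server only hands out a region and enforces the minimal policy, while the actual output then goes through a (stubbed) exokernel bounds check rather than back through the server:
[pre]
// Allocate once through the server, then access the resource directly.
#include <cstdint>
#include <optional>

struct ConsoleRegion { int first_row, last_row; };

struct RegionCapability {
    std::uint64_t handle;
    ConsoleRegion region;
};

constexpr int kRows = 25;
constexpr std::uint32_t kSystemGroup = 0;

// Terminal-server side: the (minimal) policy lives here, and only here.
std::optional<RegionCapability> allocate_region(std::uint32_t group_id) {
    static std::uint64_t next_handle = 1;
    if (group_id == kSystemGroup)                        // system tasks get the lower half
        return RegionCapability{next_handle++, {kRows / 2, kRows - 1}};
    return RegionCapability{next_handle++, {0, kRows / 2 - 1}};   // everybody else: upper half
}

// Exokernel side: no policy, just a bounds check against the capability.
bool write_char(const RegionCapability& cap, int row, int col, char c) {
    if (row < cap.region.first_row || row > cap.region.last_row || col < 0 || col >= 80)
        return false;                                    // outside the granted window
    (void)c;                                             // ...poke video memory here...
    return true;
}

int main() {
    auto app = allocate_region(1000);                    // one trip to the server...
    return (app && write_char(*app, 0, 0, 'A')) ? 0 : 1; // ...then direct access afterwards
}
[/pre]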
There are various other situations in which it makes some more sense than in my example to use further user-level managers:
- partitions on hard-disks
- groups in general (students, profs, deans)
- quotas for users
I think that it'd be very hard to ensure that such a system is not just circumvented by faking the signatures. You could of course try to counter this, but then even m$ can't make sure that nobody steals/hacks their software.. AR wrote:I am yet to find a solution for the first 3 problems, the only thing that has come to mind so far is requiring code signing of all servers and blocking known evil signatures but that is a post-fix after the user is infected (I want anti-virus software to be unnecessary), or solution B is to require all servers be signed with my signature but developers may not take too well to having to submit their source to me for compiling and signing.
regards,
gaf
Re:Minimalist Microkernel Memory Management
I meant actual code signatures (private/public key pairs). Code signatures are difficult to forge, and any modification after signing corrupts the signature (same sort of thing used in SSL), so they can be used to verify authenticity. To pre-empt malicious software, B is the only way to go, but if anyone has a better idea then I'd like to hear it. (As a desktop OS, the design must assume the user is computer illiterate and is also the Administrator; therefore the OS must protect itself and make decisions for the user unless they turn it off - I know the general philosophy is to not protect the user from themself, but unfortunately that is what is necessary with the average user.) gaf wrote:I think that it'd be very hard to ensure that such a system is not just circumvented by faking the signatures. You could of course try to counter this, but then even m$ can't make sure that nobody steals/hacks their software..AR wrote:I am yet to find a solution for the first 3 problems, the only thing that has come to mind so far is requiring code signing of all servers and blocking known evil signatures but that is a post-fix after the user is infected (I want anti-virus software to be unnecessary), or solution B is to require all servers be signed with my signature but developers may not take too well to having to submit their source to me for compiling and signing.
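For what it's worth, the decision logic being described would look something like the sketch below; the hash and verify routines are toy stand-ins (deliberately not real crypto), since only the control flow matters here:
[pre]
// Control-flow sketch of "only load signed servers". The "crypto" is a toy
// placeholder so the sketch compiles; a real kernel would use a proper
// cryptographic hash and signature scheme instead.
#include <cstdint>
#include <cstdio>
#include <vector>

struct ServerImage {
    std::vector<std::uint8_t> code;        // the binary as loaded from disk
    std::vector<std::uint8_t> signature;   // detached signature shipped alongside it
};

std::vector<std::uint8_t> hash_image(const std::vector<std::uint8_t>& code) {
    std::uint8_t h = 0;                                   // toy hash, placeholder only
    for (std::uint8_t b : code) h = static_cast<std::uint8_t>(h * 31 + b);
    return {h};
}

bool verify_signature(const std::vector<std::uint8_t>& digest,
                      const std::vector<std::uint8_t>& signature,
                      const std::vector<std::uint8_t>& /*trusted_public_key*/) {
    return signature == digest;                           // toy check, NOT real verification
}

const std::vector<std::uint8_t> kTrustedPublicKey = {};   // would be baked into the kernel image

enum class LoadResult { Loaded, RejectedUnsigned, RejectedBadSignature };

// The actual policy: refuse to start any server whose image isn't properly signed.
LoadResult load_server(const ServerImage& image) {
    if (image.signature.empty())
        return LoadResult::RejectedUnsigned;
    if (!verify_signature(hash_image(image.code), image.signature, kTrustedPublicKey))
        return LoadResult::RejectedBadSignature;          // tampered with, or wrongly signed
    // ...map the image, set up its address space, start its first thread...
    return LoadResult::Loaded;
}

int main() {
    ServerImage unsigned_server{{1, 2, 3}, {}};
    ServerImage signed_server{{1, 2, 3}, hash_image({1, 2, 3})};  // "signed" by the toy scheme
    std::printf("unsigned server: %d, signed server: %d\n",
                static_cast<int>(load_server(unsigned_server)),
                static_cast<int>(load_server(signed_server)));
    return 0;
}
[/pre]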