Minimalist Microkernel Memory Management

User avatar
Colonel Kernel
Member
Member
Posts: 1437
Joined: Tue Oct 17, 2006 6:06 pm
Location: Vancouver, BC, Canada
Contact:

Re:Minimalist Microkernel Memory Management

Post by Colonel Kernel »

Since I've got nothing better to do on a Sunday afternoon... ;D
gaf wrote:In fact the x86 is meant as a general-purpose architecture and you can run a huge variety of apps on it. You've enumerated some of them yourself already, and my idea would be to provide programmers with a number of libOSes that are specialized for such systems.
You're still missing my point... I'm asking why the kernel has to be general-purpose. No OS on the face of this earth is truly general-purpose. They all have some underlying assumptions about the hardware they run on and the types of applications that they will support.

Some examples: QNX has scheduling policies in the kernel that make sense for a real-time OS, but not necessarily for a server OS where throughput is paramount. Most desktop OSes were designed for standalone machines before everyone had Internet connectivity, so they are plagued by security problems. Etc....

So, by pushing all policy out of the kernel (including things like how to manage IRQs... I read the Omega0 paper... yikes!) it seems to me that the purpose is to create a completely general purpose kernel (i.e. -- to let apps do whatever they want with the resources that they are given).

To me, this seems like a point at the far end of the spectrum from OSes like Linux, for example. There are a gazillion different versions of Linux out there, each compiled for a particular purpose (server, smartphone/PDA, desktop, etc.). To me, this sounds like bad engineering in the sense that you have to modify source code (or at least re-compile with different options) to adapt the OS to a different environment and set of assumptions. I'm sure you agree. :) However, from a real-world standpoint, it gets the job done.

An example that's closer to the middle of the spectrum would be QNX. It has a microkernel that works more or less the same way regardless of the hardware that it's running on, but a customer that wants to use it in some custom embedded system is free (AFAIK) to replace the default Process Manager with their own, or at least leave it out. QNX wasn't designed to allow for multiple pagers the way that L4 was, but has enough flexibility in its design to allow for a different pager, or no pager. Unlike Linux, this doesn't require the modification of source code. Unlike L4, this doesn't allow for radical policy changes at run-time either.

So my question is, why go to the other extreme and make the OS adaptable at run-time? Isn't that overkill? I'm asking this independently of the other advantages of exokernels. Why is this degree of flexibility, in and of itself, a good thing?
What's the advantage of making it static ? This would mean that you couldn't even run a game on your "desktop" machine because it has a different user-level manager - hardly practical in my opinion..
No one says you can't run different types of applications on systems they're not suited for. You just might not get the performance you expect. I was talking about the roles that OSes tend to play in actual usage, not on any particular limits imposed by the OS.
Why, if not due to its flexible design, should an exokernel be any faster than a traditional OS ?
It's not flexibility that makes an OS fast, it's the avoidance of excessive overhead. If these two things happen to coincide within the exokernel design, then great.
Security largely depends on the design of the user-space managers, and if they are monolithic it's not any better either.
But you keep talking about capabilities... If this concept is not inexorably tied to the exokernel design, then why keep bringing it up as an advantage?
Yep, the kernel sets up a capability that spans the whole device and sends it to a root-manager. The root-manager can then split this capability and pass it on to other lower level managers.
Yep, I get that. I think what's missing is a discussion of how the root-manager can later revoke capabilities from the lower-level managers when resource usage becomes unbalanced. Which brings up the question of how much policy is in the root-managers and what these policies might look like. And how well do they work. :) Sigma0 for example seems pretty useless to me from a policy point of view. The top-level pagers under sigma0 pretty much have to duke it out amongst themselves if they want to rebalance their memory resources.

...continued...
Top three reasons why my OS project died:
  1. Too much overtime at work
  2. Got married
  3. My brain got stuck in an infinite loop while trying to design the memory manager
Don't let this happen to you!
User avatar
Colonel Kernel
Member
Member
Posts: 1437
Joined: Tue Oct 17, 2006 6:06 pm
Location: Vancouver, BC, Canada
Contact:

Re:Minimalist Microkernel Memory Management

Post by Colonel Kernel »

...continued...
The pager itself will hardly be more than a few megs in size, so paging it out doesn't make any sense. You would just take one of the pages it has passed on to its apps, which can then also be dirty..
I'm lost now. My understanding of page fault handling in L4 is that when a thread incurs a page fault, the kernel puts it in a special "receiving state", then notifies the thread's pager thread by way of IPC. Then that pager thread has to come up with a page to satisfy the page fault and send the mapping to the thread that incurred the page fault. Until recently, I was under the impression that the faulting thread and its pager thread were in different processes.

If the kernel unmaps a dirty page from some app, it must also be unmapped from every other address space to which it had been mapped, otherwise the kernel can't use it (pages of kernel bookkeeping data cannot be shared). However, the pager for the victim app doesn't know that the page has been unmapped unless it tries to access or map it later on. Where in this scenario does the pager have a chance to save its contents?

For that matter, if pagers themselves should not be paged out (as a matter of policy), how does the kernel know whether it is respecting this when it steals pages?
Wouldn't it be easier if the task had to pay the tax right away? In my opinion the idea requires too much book-keeping (every time a resource is allocated/deallocated) just to make sure that the kernel won't run out of memory, but it should nevertheless work. If you just want to prevent DOS attacks, it'd be more practical to require caps for the creation of tasks etc..
There wouldn't be much bookkeeping required (it's not for every resource, just thread creation). The kernel would have to check the flexpage it was given to see that it's big enough and that it's mapped. If it's too small, the system call would fail (tax evasion! ;D). If it's big enough but not mapped, the kernel would force a page fault on the thread making the system call. Once it has real memory backing the sacrificed flexpage, it would unmap it from all address spaces and add it to its own free list. I don't see this as being more expensive than your proposed solutions.
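Just to make the idea concrete, here's a rough sketch of what that check might look like in C. All names, types, and the "TCB size" are invented for this example (this isn't actual L4 code), and the kernel machinery is stubbed out:

[code]
/* Hypothetical sketch of the "thread tax": the caller donates a flexpage
 * big enough to cover the kernel's bookkeeping for the new thread.
 * All types and helpers here are invented for illustration. */

#include <stddef.h>
#include <stdbool.h>
#include <errno.h>

typedef struct { void *base; size_t size; } flexpage_t;

#define TCB_SIZE 4096   /* assumed cost of one kernel thread object */

/* stubs standing in for real kernel machinery */
static bool fpage_is_mapped(const flexpage_t *fp)       { (void)fp; return true; }
static void force_page_fault(const flexpage_t *fp)      { (void)fp; }
static void unmap_from_all_spaces(const flexpage_t *fp) { (void)fp; }
static void kernel_freelist_add(const flexpage_t *fp)   { (void)fp; }
static int  do_create_thread(void)                      { return 0; }

int sys_create_thread(flexpage_t tax)
{
    if (tax.size < TCB_SIZE)          /* tax evasion: donation too small */
        return -EINVAL;

    if (!fpage_is_mapped(&tax)) {     /* make the caller back it with real memory */
        force_page_fault(&tax);
        return -EAGAIN;               /* caller retries once its pager mapped it */
    }

    /* take the donated page away from everyone and keep it for bookkeeping */
    unmap_from_all_spaces(&tax);
    kernel_freelist_add(&tax);

    return do_create_thread();
}

int main(void)
{
    flexpage_t donation = { (void *)0x1000, 4096 };
    return sys_create_thread(donation) == 0 ? 0 : 1;
}
[/code]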

I can see how capabilities could be used to prevent DOS attacks in principle, but I'm not sure how it would work in practice. A higher-level manager could certainly deny an app the right to create new threads, but how would it decide when to do this? Would there be a fixed number of threads that can exist at any given time? Would there be a fixed number per app instead...? Or would there need to be some other criteria? What worries me about this scheme is that it still relies on the good graces of a user-space server to keep the kernel from exhausting its memory.
Maybe MM just wasn't the best example in the first place as it's quite theoretical..
LOL... It sure is hard to converge on a good MM design. ;D But it's also quite fun. :)
JoeKayzA wrote:Assuming that the user level pager is a process of its own, this still looks like a normal, pure microkernel system to me...maybe we just have a terminology problem?
There's no fundamental difference between microkernels and exokernels; it's a matter of degree. In a µkernel the user-space pager decides about the whole paging policy itself, while in an exokernel the pager only decides about the minimum policy needed and leaves the rest to the app.
This makes more sense now, although there are a few more differences between the two I think:
  • Microkernels can have a portable interface; exokernels do not since they provide no abstractions as such, just raw hardware resources.
  • Many microkernels are perfectly happy to just multiplex the CPU, memory, I/O space, and interrupts. There is no notion of disks and other devices as being fundamental resources of the system.
  • Some microkernels are portable, although it's debatable how useful this is.
Top three reasons why my OS project died:
  1. Too much overtime at work
  2. Got married
  3. My brain got stuck in an infinite loop while trying to design the memory manager
Don't let this happen to you!
User avatar
gaf
Member
Member
Posts: 349
Joined: Thu Oct 21, 2004 11:00 pm
Location: Munich, Germany

Re:Minimalist Microkernel Memory Management

Post by gaf »

Colonel Kernel wrote:So my question is, why go to the other extreme and make the OS adaptable at run-time? Isn't that overkill? I'm asking this independently of the other advantages of exokernels. Why is this degree of flexibility, in and of itself, a good thing?

No one says you can't run different types of applications on systems they're not suited for. You just might not get the performance you expect. I was talking about the roles that OSes tend to play in actual usage, not on any particular limits imposed by the OS.
If I got you right, your idea is to release distros of your OS that are specialized for a certain platform/purpose and consist of exokernel modules that were linked/compiled to form a static system. I'm not saying that this can't work out, but I do see the danger that the distros eventually become too different to allow programs to run on all of them.

Let's assume you have compiled three systems:
- desktop (browsing, mp3, spreadsheets)
- university (groups & users are important..)
- real-time (meeting deadlines is a must)

I'll now focus on scheduling, but the problem arises in other areas as well.

For the desktop system one would probably want to use priority round-robin, as it's very simple and has already proven to work sufficiently well for such systems. The university mainframe can also use a round-robin algorithm, but this time the applications aren't as free in choosing their priority, as group and user quotas have to be considered. This might for example mean that a program run by a student can never have a priority as high as the lowest priority possible if the same program was run by the dean, or that students get more CPU time if they work on behalf of their group than they would get if they were doing private things. For a real-time system the round-robin algorithm is too vague, as it's hard to say what CPU share a certain priority will result in. One would therefore rather use something like lottery scheduling, which allows the app to specify directly how long it has to run.

Each of the systems would use a different, optimized scheduler, which leads to problems as an app written for one of the systems would rely on the interface of "its" scheduler. For the first two systems it might be possible to find a solution as they basically use the same algorithm, but it'd be very hard to define an interface that is appropriate for both round-robin and lottery scheduling. The more the policies differ, the harder it'll be to find a common ground and the more you'll have to rely on compatibility patches (schedulers have to emulate other schedulers' interfaces) and abstractions, which is exactly what the exokernel tries to avoid.

In a system that allows several schedulers the problem would in my opinion be easier to solve. Here the lottery scheduler would be the root scheduler, as the real-time tasks have to meet their deadlines at all costs. The two other schedulers only run if there's no real-time task and share the CPU time that remains.
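For reference, lottery scheduling (in the Waldspurger/Weihl sense) is simple enough to sketch in a few lines of C. This is just a user-space toy that draws one ticket per slice, nothing kernel-specific:

[code]
/* Minimal lottery-scheduling sketch: each task holds tickets proportional
 * to the CPU share it asked for, and every slice one ticket is drawn at
 * random. Plain user-space demo, not tied to any kernel. */

#include <stdio.h>
#include <stdlib.h>

struct task { const char *name; int tickets; };

static struct task *draw(struct task *t, int n)
{
    int total = 0, i;
    for (i = 0; i < n; i++)
        total += t[i].tickets;

    int winner = rand() % total;      /* pick a ticket */
    for (i = 0; i < n; i++) {
        winner -= t[i].tickets;
        if (winner < 0)
            return &t[i];             /* this task owns the drawn ticket */
    }
    return &t[n - 1];                 /* not reached */
}

int main(void)
{
    /* a real-time task that asked for ~50%, two best-effort tasks */
    struct task tasks[] = {
        { "video",   50 },
        { "browser", 30 },
        { "backup",  20 },
    };

    for (int slice = 0; slice < 10; slice++)
        printf("slice %d -> %s\n", slice, draw(tasks, 3)->name);

    return 0;
}
[/code]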

I personally don't think that an OS that allows user-level managers to be added at run-time would really be that much additional work, as you also have to design the kernel in a modular way if you want to build a static release.
Colonel Kernel wrote:It's not flexibility that makes an OS fast, it's the avoidance of excessive overhead.
And why shouldn't I be able to avoid excessive overhead in any other OS design? The only reason why it's indeed easier to do so in an exokernel is its modular design, which allows the programmer to keep control over his code. Nobody - not even Bill Gates - can handle 40 million lines of C code without having to use some hacks every now and then ;)..
Colonel Kernel wrote:But you keep talking about capabilities... If this concept is not inexorably tied to the exokernel design, then why keep bringing it up as an advantage?
Capabilities are indeed nothing new to exokernels; such simple things as win32 handles basically work the same way. The difference is that exokernels apply capabilities to almost everything and that they are a vital part of the system.

(post goes on..)
User avatar
gaf
Member
Member
Posts: 349
Joined: Thu Oct 21, 2004 11:00 pm
Location: Munich, Germany

Re:Minimalist Microkernel Memory Management

Post by gaf »

Colonel Kernel wrote:I think what's missing is a discussion of how the root-manager can later revoke capabilities from the lower-level managers when resource usage becomes unbalanced.
As the root-manager still holds the highest capability for a resource, it can also revoke it. All other apps that hold a (lower..) cap will be informed first and can use this chance to save their content and update their internal book-keeping (there's a short description of the idea in section 3.3 of this paper).
Since it might not always be wise to revoke resources from servers by force, the root server could also define a mechanism that allows the server to at least choose which resource it wants to give away, as often any will do. It'd also be possible to define an interface that allows lower-level pagers to specify some pages in advance that can be evicted in case the root server needs them back.
This all takes place in user-space and, as the kernel isn't even aware of it, any policy might be used as long as it eventually returns the needed resource.
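To illustrate the split-and-revoke idea, here's a toy C model of hierarchical capabilities over a memory range. Everything in it (the struct layout, the notification, the names) is invented for the example:

[code]
/* Toy model of hierarchical capabilities over a physical-memory range:
 * the root cap covers everything, split() hands out sub-ranges, and
 * revoke() walks the children and notifies each holder first.
 * Everything here is invented for illustration. */

#include <stdio.h>
#include <stdlib.h>

struct cap {
    unsigned long base, len;      /* page range this cap grants */
    struct cap  *child, *next;    /* children derived from this cap */
    const char  *holder;          /* who we'd send the revocation IPC to */
};

static struct cap *split(struct cap *parent, unsigned long base,
                         unsigned long len, const char *holder)
{
    struct cap *c = calloc(1, sizeof *c);
    c->base = base; c->len = len; c->holder = holder;
    c->next = parent->child;
    parent->child = c;            /* remember the derivation for revocation */
    return c;
}

static void revoke(struct cap *c)
{
    struct cap *k = c->child;
    while (k) {
        struct cap *next = k->next;
        revoke(k);                /* bottom-up: grandchildren go first */
        printf("notify %s: pages %lu..%lu revoked, save your data now\n",
               k->holder, k->base, k->base + k->len - 1);
        free(k);
        k = next;
    }
    c->child = NULL;
}

int main(void)
{
    struct cap root = { 0, 4096, NULL, NULL, "root-manager" };
    struct cap *pagerA = split(&root, 0, 2048, "pagerA");
    split(pagerA, 0, 512, "appX");           /* pagerA passes part of it on */
    split(&root, 2048, 2048, "pagerB");

    revoke(&root);                /* root takes everything back */
    return 0;
}
[/code]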
Colonel Kernel wrote:Until recently, I was under the impression that the faulting thread and its pager thread were in different processes.
As far as I remember, L4 only requires a threadID to be defined for the pager. A pager running as one of the app's own threads should therefore be possible, although an external one is probably the better choice as global paging policies tend to work better..
Colonel Kernel wrote:If the kernel unmaps a dirty page from some app, it must also be unmapped from every other address space to which it had been mapped, otherwise the kernel can't use it (pages of kernel bookkeeping data cannot be shared). However, the pager for the victim app doesn't know that the page has been unmapped unless it tries to access or map it later on. Where in this scenario does the pager have a chance to save its contents?
The apps are informed if any of their resources is revoked..
Colonel Kernel wrote:For that matter, if pagers themselves should not be paged out (as a matter of policy), how does the kernel know whether it is respecting this when it steals pages?
Either by using a user-space protocol that allows the pager to specify the page to be evicted, or by using two different capabilities: one for the pages the pager will use itself and one for the pages it wants to allocate for applications. The root-manager would then only revoke pages that were allocated using the application capability and thus spare the pager's own pages. This would mean that the pager couldn't grow as easily because it has to ask the root-manager first, but since the average pager probably has a rather static memory footprint this shouldn't be such a big problem.
Colonel Kernel wrote:I can see how capabilities could be used to prevent DOS attacks in principle, but I'm not sure how it would work in practice. A higher-level manager could certainly deny an app the right to create new threads, but how would it decide when to do this? Would there be a fixed number of threads that can exist at any given time? Would there be a fixed number per app instead...? Or would there need to be some other criteria?


That depends on the user-level managers and is not the kernel's problem. One approach would be to use a central server that helps by associating with each task certain attributes describing its function in the system. The server itself could use an ini-file or a database that can be edited by the user, and it could be referred to by user-space servers in distress..
Colonel Kernel wrote:What worries me about this scheme is that it still relies on the good graces of a user-space server to keep the kernel from exhausting its memory.
As the user-space managers are an integral part of the OS, it's in my opinion acceptable if the kernel relies on them not doing stupid things. The kernel does protect them from one another, and if the whole (user-space) system is buggy and vulnerable, one shouldn't be too surprised if it doesn't work..

regards,
gaf
User avatar
Solar
Member
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany
Contact:

Re:Minimalist Microkernel Memory Management

Post by Solar »

gaf wrote: The more the policies differ, the harder it'll be to find a common ground and the more you'll have to rely on compatibility patches (schedulers have to emulate other schedulers' interfaces) and abstractions, which is exactly what the exokernel tries to avoid.
My concept in that area was to provide a "lean monolithic / fat microkernel" system with plug-in "policy modules" that implement policy for various subsystems, like scheduling, memory management etc., and allow two levels of access to the system: a generic one that makes use of whatever plug-in is installed, and a specific one that works with a specific plug-in only.

I was able to come up with four policies: throughput (server), responsive (desktop), performance (gaming / multimedia), and real-time (embedded / production). These would be configured at boot-time, or run-time if I could make it work.
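In C terms, such a "policy module" could be little more than a table of function pointers per subsystem. Here's a rough sketch of a pluggable scheduler with a generic interface (all names are invented, just to illustrate the two-level idea):

[code]
/* Rough sketch of a pluggable scheduling policy: the rest of the kernel
 * calls through a generic vtable, while a policy-aware app could ask for
 * a plug-in by name and use whatever extensions it knows about.
 * All of this is invented for illustration. */

#include <stdio.h>
#include <string.h>

struct sched_policy {
    const char *name;
    void (*enqueue)(int tid, int prio);
    int  (*pick_next)(void);
};

/* -- "throughput" plug-in: plain FIFO, ignores priority ---------------- */
static int fifo_q[64], fifo_head, fifo_tail;
static void tp_enqueue(int tid, int prio) { (void)prio; fifo_q[fifo_tail++ % 64] = tid; }
static int  tp_pick(void) { return fifo_head == fifo_tail ? -1 : fifo_q[fifo_head++ % 64]; }

static const struct sched_policy throughput = { "throughput", tp_enqueue, tp_pick };

/* -- "responsive" plug-in: highest priority wins ------------------------ */
static int best_tid = -1, best_prio = -1;
static void rs_enqueue(int tid, int prio) { if (prio > best_prio) { best_prio = prio; best_tid = tid; } }
static int  rs_pick(void) { int t = best_tid; best_tid = -1; best_prio = -1; return t; }

static const struct sched_policy responsive = { "responsive", rs_enqueue, rs_pick };

/* generic access: the rest of the kernel only ever sees 'current' */
static const struct sched_policy *current = &throughput;

int main(void)
{
    const char *boot_option = "responsive";   /* "boot-time" configuration */
    if (strcmp(boot_option, responsive.name) == 0)
        current = &responsive;

    current->enqueue(7, 1);
    current->enqueue(9, 5);
    printf("%s picked thread %d\n", current->name, current->pick_next());
    return 0;
}
[/code]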

I'm curious: Can you come up with a policy, or sub-policy, that is not part of the above categories (e.g., a paging mechanism that would make sense for certain types of throughput-optimized systems but not others), or a sound reason why you would want to mix any of the above?

If you can, my concept is broken. If you can't, you've just lost the reason why only an exokernel would suffice.
Every good solution is obvious once you've found it.
Legend

Re:Minimalist Microkernel Memory Management

Post by Legend »

Well, I have to admit, switching from desktop usage to gaming might be a good idea - on the other hand, these two should be close enough to each other anyway.
User avatar
gaf
Member
Member
Posts: 349
Joined: Thu Oct 21, 2004 11:00 pm
Location: Munich, Germany

Re:Minimalist Microkernel Memory Management

Post by gaf »

Solar wrote:My concept in that area was to provide a "lean monolithic / fat microkernel" system with plug-in "policy modules" that implement policy for various subsystems, like scheduling, memory management etc., and allow two levels of access to the system: a generic one that makes use of whatever plug-in is installed, and a specific one that works with a specific plug-in only.
If you want to allow modules to implement a policy, the rest of the OS has to be free of policy, and since several of these modules have to be supported you'll also need a common module-kernel interface. I think that this eventually leads to a design similar to the one Colonel Kernel has proposed, with a µkernel structure under the hood and statically linked managers that may then of course as well run in kernel-space.
Solar wrote:I was able to come up with four policies: throughput (server), responsive (desktop), performance (gaming / multimedia), and real-time (embedded / production). These would be configured at boot-time, or run-time if I could make it work.
'Performance' is a nice policy.. ;D

No, seriously, I think that games and multimedia apps would run best with a (weak) real-time policy to avoid stuttering..
Solar wrote:I'm curious: Can you come up with a policy, or sub-policy, that is not part of the above categories, or a sound reason why you would want to mix any of the above?

If you can, my concept is broken. If you can't, you've just lost the reason why only an exokernel would suffice.
Hmm, so you like playing at high stakes 8)

I could now try to come up with some examples, but I doubt that this would change anything as I've already tried to make my point concerning this clear in several of my posts. It's a pretty fundamental question of design/philosophy and trying to convince you of my point of view seems to be a losing game for me:
Solar wrote:Don't bother. I don't like exo's, and you won't convince me otherwise. I'll be happy to help you if I can, though. ;)
I personally really believe that the exokernel design has the potential to overcome most of the problems today's kernels face, although it's of course still a very young idea and there are also some things that will require further research to be done. That's however just my opinion, and if you want to go for a "lean monolithic/fat µ-kernel" I won't keep you from doing so. After all that means one OS less in the exokernel market I'll have to compete with.. ;)

regards,
gaf
User avatar
Colonel Kernel
Member
Member
Posts: 1437
Joined: Tue Oct 17, 2006 6:06 pm
Location: Vancouver, BC, Canada
Contact:

Re:Minimalist Microkernel Memory Management

Post by Colonel Kernel »

gaf wrote: If I got you right, your idea is to release distros of your OS that are specialized for a certain platform/purpose and consist of exokernel modules that were linked/compiled to form a static system. I'm not saying that this can't work out, but I do see the danger that the distros eventually become too different to allow programs to run on all of them.
Not if you have a sensible default policy for apps that don't need any special treatment.

I wasn't really proposing a design, I was presenting a hypothesis about actual usage of OSes that says in a nutshell: You can have really special apps with really special policy requirements, but you're probably also running them on really special hardware, and you're probably not running a web browser and a word processor at the same time. :)

This is probably what Solar was saying, just phrased differently.

I wouldn't say I'm against the idea of exokernels though... I just like to separate the orthogonal elements of a design so I can better understand why it is good/bad/whatever. I'm also a glutton for implementation details so I can understand how things work. ;)

Back to revocation...

I said:
If the kernel unmaps a dirty page from some app, it must also be unmapped from every other address space to which it had been mapped, otherwise the kernel can't use it (pages of kernel bookkeeping data cannot be shared). However, the pager for the victim app doesn't know that the page has been unmapped unless it tries to access or map it later on. Where in this scenario does the pager have a chance to save its contents?
Then you said:
The apps are informed if any of their resources is revoked..
I should have been more specific. I'm trying to understand how such a scheme would work within the context of L4, not in general. In the case of L4, my understanding is that the app is informed that its page is gone via a page fault when it tries to access it, which is too late to do anything about it. This means it is the pager's responsibility to reconstruct the contents of that page as required (which means the pager must be able to save that page to disk if it's dirty). The problem with revoking a dirty page from every address space (which, as I said, is necessary in order for the kernel to use it securely) is that even the pager loses its copy, meaning it has no chance to save its contents because, as I said, notification of revocation comes too late in the normal L4 map/unmap scheme.

So what you're saying is to simply not do things the way L4 does, which I guess is ok, but it's not necessarily any better, nor does it involve less overhead, than the "tax" scheme I suggested.
Top three reasons why my OS project died:
  1. Too much overtime at work
  2. Got married
  3. My brain got stuck in an infinite loop while trying to design the memory manager
Don't let this happen to you!
Legend

Re:Minimalist Microkernel Memory Management

Post by Legend »

One issue I see with the exokernel is that after the DOS era, hardware abstraction was added to operating systems to avoid having the application developer deal with the specifics of the hardware.

An exokernel seems to try exactly the opposite, or not?
User avatar
gaf
Member
Member
Posts: 349
Joined: Thu Oct 21, 2004 11:00 pm
Location: Munich, Germany

Re:Minimalist Microkernel Memory Management

Post by gaf »

Colonel Kernel wrote:Not if you have a sensible default policy for apps that don't need any special treatment.
But using the standard policy is not really the point of having an exokernel..

What I wanted to say is that a (specialized) application that was meant to be run on a certain distro is also linked to a certain manager. If you now decide to run this app on any other distro, the API is different and the app won't work without adapting it first. Of course it's possible to solve this problem by, for example, using dynamically linked libs that translate the API calls first, trying to imitate the original behaviour. I just wanted to point out that there are also some new issues arising when the managers are static..
Colonel Kernel wrote:You can have really special apps with really special policy requirements, but you're probably also running them on really special hardware, and you're probably not running a web browser and a word processor at the same time. :)
I basically agree that there's no use in having 20 different managers with different policies, but saying that there might only be one is in my opinion not such a good idea either, because it means giving away a lot of flexibility for no immediate gain.
There are by the way some areas in which having more than one policy might make sense. One example would be a hard-disk with several partitions which can all have different policies because each of them is linked to a private storage manager. Apart from that I could also imagine that networking benefits from multiple managers, as this would make it easier to mix protocols..
Colonel Kernel wrote:I should have been more specific. I'm trying to understand how such a scheme would work within the context of L4, not in general. In the case of L4, my understanding is that the app is informed that its page is gone via a page fault when it tries to access it, which is too late to do anything about it.
I was actually referring to the exokernel, which handles resource revocation in a much more transparent way for the apps..

L4 seems to expect that the app(s) holding the page to be revoked are informed in advance by some user-space mechanism if they need to store their data, for example like this (sketched in code below the list):
- Kernel asks root to organize 2 pages
- root picks one of the lower-level pagers at random
- the unlucky pager has to page-out 2 pages of its choice
- if the pager refuses to do so, root just takes 2
- root saves the contents of the pages in a "swap" file
- the pager can get its pages' contents back by the end of the year ;)
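Here's that handshake as a small simulated C program. Nothing L4-specific in it; the page arrays and the "swap" step are just stand-ins for the real mechanisms:

[code]
/* Simulated version of the handshake sketched above (all names invented):
 * root asks a pager to give back N pages; if the pager doesn't cooperate,
 * root just takes them, saving dirty contents to "swap" first so the
 * pager can get them back later. */

#include <stdio.h>
#include <stdbool.h>

#define PAGES 8

static bool in_use[PAGES] = { true, true, false, true, true, false, true, true };
static bool dirty[PAGES]  = { false, true, false, true, false, false, true, false };

/* the pager's side: nominate a victim it can spare, or -1 to refuse */
static int pager_pick_victim(void)
{
    for (int i = 0; i < PAGES; i++)
        if (in_use[i] && !dirty[i])
            return i;             /* clean pages are cheapest to give up */
    return -1;
}

/* the root-manager's side */
static void root_reclaim(int wanted)
{
    for (int got = 0; got < wanted; got++) {
        int v = pager_pick_victim();
        if (v < 0) {                               /* pager refused... */
            for (v = 0; v < PAGES; v++)
                if (in_use[v])
                    break;                         /* ...root picks one itself */
            if (v == PAGES)
                break;                             /* nothing left to take */
        }
        if (dirty[v])
            printf("root: writing page %d to swap first\n", v);
        in_use[v] = false;
        printf("root: reclaimed page %d\n", v);
    }
}

int main(void)
{
    root_reclaim(2);              /* "kernel asks root to organize 2 pages" */
    return 0;
}
[/code]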
Legend wrote: One issue I see with the exokernel is that after the DOS era, hardware abstraction was added to operating systems to avoid having the application developer deal with the specifics of the hardware.

An exokernel seems to try exactly the opposite, or not?
Excluding policy decisions from the kernel is a pretty new idea, as all traditional operating systems don't only multiplex but also provide (more or less) high-level abstractions to the applications. DOS is perfectly monolithic; what you probably mean is programming under DOS, which often means circumventing the OS..

I agree that the application programmer shouldn't have to deal with the raw hardware, not only because of the extra work this would mean, but also because compatibility and portability of applications require a stable programming interface. In a real exokernel this would however hardly be an issue, as such an abstract interface can also be established in user-space using libraries that shield the programmer from the raw kernel interface.
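This is easy to picture in code: the application keeps calling a familiar library routine, and only the library talks to the raw kernel interface. A minimal sketch, with exo_alloc_pages as a made-up name standing in for whatever the real exokernel call would be:

[code]
/* Sketch of the libOS idea: the application sees a conventional allocator,
 * and only the library layer knows about the raw kernel interface below.
 * "exo_alloc_pages" is invented; the "physical memory" is just an array. */

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096

/* --- raw kernel interface (stubbed out here) --------------------------- */
static uint8_t fake_phys_mem[16 * PAGE_SIZE];
static size_t  next_free;

static void *exo_alloc_pages(size_t n)            /* hypothetical syscall */
{
    void *p = &fake_phys_mem[next_free * PAGE_SIZE];
    next_free += n;
    return p;
}

/* --- the libOS layer the application actually sees --------------------- */
void *lib_alloc(size_t bytes)
{
    size_t pages = (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
    return exo_alloc_pages(pages);    /* policy could live here, per app */
}

int main(void)
{
    void *buf = lib_alloc(10000);     /* app never sees the raw interface */
    printf("got %zu-byte buffer at %p\n", (size_t)10000, buf);
    return 0;
}
[/code]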

regards,
gaf
User avatar
Solar
Member
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany
Contact:

Re:Minimalist Microkernel Memory Management

Post by Solar »

gaf wrote: If you want to allow modules to implement a policy, the rest of the OS has to be free of policy and since several of these modules have to be supported you'll also need a common module-kernel interface.
For the generic access, yes. For example, a database server is free to circumvent the second-level generic interface and access the first-level interface of the "throughput" plug-in directly, as you wouldn't run a professional-grade database on anything but a throughput-optimized system anyway.
I think that this eventually leads to a design similar to the one Colonel Kernel has proposed, with a µkernel structure under the hood and statically linked managers that may then of course as well run in kernel-space.
Something like that.
No, seriously I think that games and multimedia apps would run best with a (weak) real-time policy to avoid stuttering..
As I have done some delving in the area, I can say that the common equation of "real time" with "high performance" is flawed. Providing a hard real-time environment actually reduces the overall performance. That is acceptable for production-level audio recording, for example, or controlling machines. For games / multimedia, you want a setup that squeezes the last drop of performance out of the system (best effort), instead of throwing a fit when it detects your hardware can't sustain 30 fps in a worst-case scenario (as a real-time system would).
Hmm, so you like playing at high stakes 8)
Not exactly. I like to see that my design can withstand significant flak without falling apart. Makes me sleep better during the implementation phase. ;)
I could now try to come up with some examples, but I doubt that this would change anything as I've already tried to make my point concerning this clear in several of my posts. It's a pretty fundamental question of design/philosophy and trying to convince you of my point of view seems to be a losing game for me:
Correct. You won't convince me to use exokernel tech, just like I won't convince you to use monolithic approaches. But from disagreement comes deeper understanding, and perhaps it makes me use a different monolithic approach, or you a different libOS approach.

If you don't want to stress this thread further, I'd like to hear one or two of your examples in private message.
I personally really believe that the exokernel design has the potential to overcome most of the problems today's kernels face...
I believe that they have the potential to bring up quite a lot of new problems. While my project credo was "to hell with all the legacy", I still didn't want to throw several decades of knowledge and experience overboard and steer into uncharted waters.

But someone has to chart them; that's why I don't try to discourage you from using exokernels for your project. I just want you to understand the criticism brought forth, just as I want to understand the criticism against traditional approaches - that helps avoid mistakes.
That's however just my opinion, and if you want to go for a "lean monolithic/fat µ-kernel" I won't keep you from doing so. After all that means one OS less in the exokernel market I'll have to compete with.. ;)
:-D
Every good solution is obvious once you've found it.
User avatar
gaf
Member
Member
Posts: 349
Joined: Thu Oct 21, 2004 11:00 pm
Location: Munich, Germany

Re:Minimalist Microkernel Memory Management

Post by gaf »

Solar wrote:For games / multimedia, you want a setup that squeezes the last drop of performance out of the system (best effort), instead of throwing a fit when it detects your hardware can't sustain 30 fps in a worst-case scenario (as a real-time system would).
If the system is not able to constantly produce 25 fps for a movie, there's hardly any use in trying to play it. It's similar for games, which need a certain frame rate to avoid stutter, while it's at the same time pointless to produce more than that, as most engines do today, since a frame rate around 30 is totally sufficient to create the illusion of smooth movement.
Solar wrote:If you don't want to stress this thread further, I'd like to hear one or two of your examples in private message.
As Legend has already said, a lot of users would probably like to run both desktop and gaming apps on their system without having to restart/reinstall first. In general I think that they would often like to use applications on their systems that you didn't have in mind when designing the distro..

Here's your example:
Some weeks ago I was at a (private) LAN party, and the guy who had invited us set up an older computer that he normally used for desktop work to allow the internet to be accessed throughout the whole local network. If he had used your OS, the software needed to allow something like this would have been part of the "server" distro and therefore couldn't have been used on his "desktop" installation. How should he have solved the problem?

regards,
gaf
Legend

Re:Minimalist Microkernel Memory Management

Post by Legend »

I would try to make an approach that covers all situations reasonably well (like it is done today with monolithic kernels). However, in contrast to a pure monolithic kernel, having the option to add special managers for special purposes might be interesting for some cases.

But since in 99% of the cases the application can't really benefit from a special pager/scheduler etc., I see no reason to punish them by adding overhead to the generic pager and scheduler by putting them into user mode, when the amount of code pushed to user space would not be that big anyway. For allocating and deallocating, where the pager is (and even how the pager internally works) is not that interesting anyway, as most memory allocations and deallocations are first served from the cache kept by the malloc implementation in the application. Swapping, however, is a different story; that could be interesting for a few apps.
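To back up the point about malloc caching, here's a toy allocator that hands out small chunks from pages it already holds and only goes to the pager when its cache runs dry (the pager call is just a stub here):

[code]
/* Toy fixed-size allocator: 64-byte chunks come from pages cached inside
 * the application, and the pager (stubbed with malloc) is only asked for
 * a new page when the local free list is empty. */

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE  4096
#define CHUNK_SIZE 64

static void *free_list;               /* chunks cached inside the app */
static int   pager_calls;

static void *ask_pager_for_page(void) /* stand-in for a real pager request */
{
    pager_calls++;
    return malloc(PAGE_SIZE);
}

static void *chunk_alloc(void)
{
    if (!free_list) {                              /* cache empty: refill */
        char *page = ask_pager_for_page();
        for (int off = 0; off < PAGE_SIZE; off += CHUNK_SIZE) {
            *(void **)(page + off) = free_list;    /* thread chunks onto list */
            free_list = page + off;
        }
    }
    void *c = free_list;
    free_list = *(void **)c;
    return c;
}

static void chunk_free(void *c)
{
    *(void **)c = free_list;                       /* back into the cache */
    free_list = c;
}

int main(void)
{
    for (int i = 0; i < 1000; i++)
        chunk_free(chunk_alloc());
    printf("1000 allocations, %d pager call(s)\n", pager_calls);
    return 0;
}
[/code]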
If someone thinks that pagers loaded at runtime into kernel mode could be made secure, that discussion would not exist anyway.

For games, at best soft real-time scheduling might make sense. Games can adapt to high-load situations by reducing the level of detail better than the OS could help by stalling other applications (and when you wouldn't expect to need it, the word "stalling" really fits, not just "slowing down").
User avatar
Colonel Kernel
Member
Member
Posts: 1437
Joined: Tue Oct 17, 2006 6:06 pm
Location: Vancouver, BC, Canada
Contact:

Re:Minimalist Microkernel Memory Management

Post by Colonel Kernel »

gaf wrote: What I wanted to say is that a (specialized) application that was meant to be run on a certain distro is also linked to a certain manager. If you now decide to run this app on any other distro, the API is different and the app won't work without adapting it first. Of course it's possible to solve this problem by, for example, using dynamically linked libs that translate the API calls first, trying to imitate the original behaviour. I just wanted to point out that there are also some new issues arising when the managers are static..
That's a straw man... I didn't mean static as in static linking. You're taking what I said way too literally, and too technically.

I think it would be stupid to design an OS where the gaming flavour of it would only run games, the server flavour would only run servers, etc. I also think it would be silly for the OS API to vary wildly depending on the underlying policies.

I can run games just fine on my desktop OS. Why do I need specialized policies for games on a desktop machine? Isn't the general "desktop OS" policy good enough? If I can play games on my XBox, but I also want to surf, shouldn't it be possible to stick a browser on there? Won't it be just as happy (and fast) with the "game OS" policies? Is performance really going to suffer that badly if I can't have specialized policies for everything?

Where does this urgent need for per-app specialization come from? I haven't yet seen a good motivating example...
Top three reasons why my OS project died:
  1. Too much overtime at work
  2. Got married
  3. My brain got stuck in an infinite loop while trying to design the memory manager
Don't let this happen to you!
User avatar
gaf
Member
Member
Posts: 349
Joined: Thu Oct 21, 2004 11:00 pm
Location: Munich, Germany

Re:Minimalist Microkernel Memory Management

Post by gaf »

Legend wrote:I would try to make an approach that covers all situations reasonably well (like it is done today with monolithic kernels).
Why reinvent the wheel if you're happy with the current design? By just taking the Linux source and building your own distro you can change the details you don't like about it without having to write a whole OS. This means much less work, and you get compatibility with thousands of Linux apps for free..
Legend wrote:But in 99% of the cases the application can't really benefit from a special pager/scheduler etc.
That's just your opinion..
Legend wrote:I see no reason to punish them by adding overhead to the generic pager and scheduler by putting them into user mode, when the amount of code pushed to user space would not be that big anyway.
I'll never get why this is really such a big problem for some people. Let's assume I'd rewrite parts of the Linux kernel to turn it into a µkernel with modules in user-space. How big do you expect the difference in performance to be compared to a traditional kernel?

I'm serious - give me some numbers, maybe this will help me to understand..
Legend wrote:If someone thinks that pagers loaded at runtime into kernel mode could be made secure, that discussion would not exist anyway.
It can be made secure using a secure scripting language. In my opinion this wouldn't even be necessary: a signature guaranteeing that the module is an official release should suffice, as monolithic kernels also run all this stuff in kernel mode and nobody worries about security/stability there..

Now, does this change anything for you?
Colonel Kernel wrote:Is performance really going to suffer that badly if I can't have specialized policies for everything?
This depends on how inappropriate the policy is for the application and on what you mean by "badly". I suppose that the average desktop application only benefits slightly (<10%), but the more specialized/optimized the applications get, the higher the gain should be. My opinion is that it's exactly these applications that need performance most badly, and they won't get it on a traditional system..

Unfortunately it's not easy to get actual numbers on this, as little benchmarking has been done. The only document I'm aware of is the one about server performance, but I think I've already given you the link (just in case..).

regards,
gaf