
A dynamic, self-aware, and thinking kernel

Posted: Tue Jan 18, 2011 11:40 am
by dKingston
Some of you have already heard this on IRC, some of you may not have, and some of you don't use IRC and just use the forums. In any case, I will say this up front: this idea sounds really messed up at first, and it makes me sound like I'm on drugs. I'm still trying to research some things to get it going, but I do want opinions. Don't hold back.

Another note: "player" == entity, "game" == work to do

My dynamic kernel idea involves making every kernel component a module. Something like a modular kernel, but not only that: it's about a kernel that can adapt to its environment, and think. I do not mean artificial intelligence. The idea involves the Nash equilibrium. I realize it's a concept from game theory, but in my opinion it can be applied to an operating system. Here are my reasons why:

The occurrence of the Nash equilibrium depends on the "game" played among players under certain conditions; if these conditions hold, the NE strategy set will be adopted. Sufficient conditions to guarantee that the Nash equilibrium is adopted are (a formal statement follows the list):

1. The players all will do their utmost to maximize their expected payoff as described by the game. (I think everything in computing wants its "payoff", i.e. performance, to be maximal.)
2. The players are flawless in execution. (The only way to achieve this is with a bug-free program, which may never be possible. This is a flaw in my idea.)
3. The players have sufficient intelligence to deduce the solution.
4. The players know the planned equilibrium strategy of all of the other players.
5. The players believe that a deviation in their own strategy will not cause deviations by any other players.
6. There is common knowledge that all players meet these conditions, including this one. So, not only must each player know that the other players meet the conditions, but they must also know that they all know that they meet them, know that they know that they know that they meet them, and so on.
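
For reference, the standard formal statement (straight from game theory, nothing of my own): a strategy profile $s^* = (s^*_1, \dots, s^*_n)$ is a Nash equilibrium if no player can improve its payoff by deviating unilaterally, i.e.

$$u_i(s^*_i,\, s^*_{-i}) \ge u_i(s_i,\, s^*_{-i}) \quad \text{for every player } i \text{ and every alternative strategy } s_i.$$
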
These conditions can all be checked by contacting a "name server" in user space, for example asking "what is player X going to do?" and similar communication. The kernel can generate strategies for anything incoming through IPC, such as scheduling, context switches, etc. For instance, if the OS notices extreme overhead, it can generate a strategy for the user-level servers, dispatch it, and have them carry out the kernel's will.
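
To make that concrete, here is a rough sketch of what the query could look like. Everything in it (the message layout, nameserver_lookup, ipc_call) is hypothetical; it only shows the shape of the protocol I have in mind:

#include <stdint.h>

/* Hypothetical IPC message asking a player for its planned strategy. */
struct strategy_query {
    uint32_t asking_player;   /* who wants to know */
    uint32_t target_player;   /* whose strategy we ask about */
};

struct strategy_reply {
    uint32_t strategy_id;     /* the strategy the target plans to play */
    uint32_t expected_payoff; /* its own estimate of its payoff */
};

/* Hypothetical helpers: resolve a player's IPC endpoint via the
 * user-space name server, then do a synchronous IPC round trip. */
int nameserver_lookup(uint32_t player, int *endpoint);
int ipc_call(int endpoint, const void *req, int req_len,
             void *reply, int reply_len);

int query_strategy(uint32_t me, uint32_t them, struct strategy_reply *out)
{
    struct strategy_query q = { .asking_player = me, .target_player = them };
    int ep;

    if (nameserver_lookup(them, &ep) != 0)
        return -1;  /* player unknown: it cannot take part in the game */
    return ipc_call(ep, &q, sizeof q, out, sizeof *out);
}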

Certain pieces of the kernel can also be moved between user and kernel space on the fly. Normally, during bootup of such a system, almost everything starts out in user space. When overhead starts to affect system performance, the kernel will generate a strategy to safely take a player out of user space and move it into kernel space.
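
As a sketch of the promotion decision (all names and thresholds here are made up by me):

#include <stdint.h>

/* Hypothetical per-player accounting kept by the kernel. */
struct player {
    uint32_t id;
    uint64_t ipc_round_trips;   /* messages in the last sampling window */
    uint64_t cycles_in_ipc;     /* cycles spent crossing user/kernel */
    int      in_kernel_space;
};

/* Hypothetical mechanism that remaps the player's code/data into the
 * kernel and rewrites its IPC stubs into direct calls. */
int promote_to_kernel(struct player *p);

/* Made-up threshold: promote when more than 100M cycles of a one-second
 * window go into IPC overhead (roughly 10% of a 1 GHz core). */
#define OVERHEAD_LIMIT (100ULL * 1000 * 1000)

void rebalance(struct player *p)
{
    if (!p->in_kernel_space && p->cycles_in_ipc > OVERHEAD_LIMIT) {
        if (promote_to_kernel(p) == 0)
            p->in_kernel_space = 1;
    }
}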

I understand I sounded like a crackpot throughout this entire thing, but it's somewhat theoretical and somewhat practical in my opinion. Thoughts?

Re: A dynamic, self-aware, and thinking kernel

Posted: Tue Jan 18, 2011 1:46 pm
by JamesM
What is the advantage of such a system?

How would such a system deal with uncooperative modules, programs or other subsystems that fail to adhere to the rules/assumptions?

Re: A dynamic, self-aware, and thinking kernel

Posted: Tue Jan 18, 2011 2:29 pm
by dKingston
JamesM wrote:What is the advantage of such a system?

How would such a system deal with uncooperative modules, programs or other subsystems that fail to adhere to the rules/assumptions?
Since the entire OS is essentially game-based, it starts a game in which one player is the entity in question and the opponent is the bad module/program. The entity can be, for example, a driver going up against the user-level driver server. Whoever wins stays; whoever loses gets killed. In a Nash equilibrium, instability is unlikely to arise in practice, so it shouldn't be too much of an issue. If a highly unstable equilibrium were to occur, the kernel/user-level servers are usually top dog and will kill off the bad entity instantly. The advantage is that everything is treated equally, which can create a balance within the system, e.g. handing out scheduler time fairly while taking into account what the user-level servers say.
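
As a toy example of such a game (all payoff numbers invented by me): the driver chooses between comply/misbehave, the server between keep/kill, and a brute-force check over the 2x2 bimatrix finds the stable profiles:

#include <stdio.h>

/* Toy 2x2 game: driver plays {comply, misbehave}, server plays
 * {keep, kill}. Payoffs are made up purely for illustration. */
enum { COMPLY, MISBEHAVE };          /* driver strategies (rows)    */
enum { KEEP, KILL };                 /* server strategies (columns) */

static const int driver_pay[2][2] = {
    /* keep  kill */
    {   5,   -10 },                  /* comply    */
    {   2,   -10 },                  /* misbehave */
};
static const int server_pay[2][2] = {
    {   5,    -1 },                  /* comply    */
    {  -5,     3 },                  /* misbehave */
};

int main(void)
{
    /* A profile (d, s) is a pure Nash equilibrium if neither side
     * gains by deviating unilaterally. */
    for (int d = 0; d < 2; d++)
        for (int s = 0; s < 2; s++) {
            int driver_ok = driver_pay[d][s] >= driver_pay[1 - d][s];
            int server_ok = server_pay[d][s] >= server_pay[d][1 - s];
            if (driver_ok && server_ok)
                printf("equilibrium: driver=%s, server=%s\n",
                       d == COMPLY ? "comply" : "misbehave",
                       s == KEEP ? "keep" : "kill");
        }
    return 0;
}

With these numbers the check prints two pure equilibria, (comply, keep) and (misbehave, kill), which is exactly the intent: a cooperating driver stays, a misbehaving one gets killed.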

Re: A dynamic, self-aware, and thinking kernel

Posted: Tue Jan 18, 2011 3:15 pm
by JamesM
dKingston wrote:
JamesM wrote:What is the advantage of such a system?

How would such a system deal with uncooperative modules, programs or other subsystems that fail to adhere to the rules/assumptions?
Since the entire OS is essentially game-based, it starts a game in which one player is the entity in question and the opponent is the bad module/program. The entity can be, for example, a driver going up against the user-level driver server. Whoever wins stays; whoever loses gets killed. In a Nash equilibrium, instability is unlikely to arise in practice, so it shouldn't be too much of an issue. If a highly unstable equilibrium were to occur, the kernel/user-level servers are usually top dog and will kill off the bad entity instantly. The advantage is that everything is treated equally, which can create a balance within the system, e.g. handing out scheduler time fairly while taking into account what the user-level servers say.
I don't think you really answered either of my questions...

Re: A dynamic, self-aware, and thinking kernel

Posted: Tue Jan 18, 2011 5:09 pm
by earlz
dKingston wrote:
JamesM wrote:What is the advantage of such a system?

How would such a system deal with uncooperative modules, programs or other subsystems that fail to adhere to the rules/assumptions?
Since the entire OS is essentially game-based, it starts a game in which one player is the entity in question and the opponent is the bad module/program. The entity can be, for example, a driver going up against the user-level driver server. Whoever wins stays; whoever loses gets killed. In a Nash equilibrium, instability is unlikely to arise in practice, so it shouldn't be too much of an issue. If a highly unstable equilibrium were to occur, the kernel/user-level servers are usually top dog and will kill off the bad entity instantly. The advantage is that everything is treated equally, which can create a balance within the system, e.g. handing out scheduler time fairly while taking into account what the user-level servers say.
So you have two kernel modules fighting each other because one is "bad" and one is "good". I don't think fighting for control is the best thing to have taking place in a kernel environment.

Re: A dynamic, self-aware, and thinking kernel

Posted: Tue Jan 18, 2011 9:07 pm
by TylerH
You seem to be describing something similar to a genetic algorithm. In that, you have a set definition of fitness and you plan to select for modules/programs/etc that better match the definition of fitness. This works well for simple AI, but I imagine it would be difficult to implement effectively for something as complex as a kernel. It's a promising idea, but you need to lock down the specifics, like your definition of fitness and how you plan to implement selection, reproduction, and mutation.
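
Something like this skeleton, to make the terms concrete (the genome encoding and fitness function are deliberately left abstract; in a kernel, fitness might be measured throughput or latency):

#include <stdlib.h>

/* Minimal genetic-algorithm skeleton: each genome could encode, say,
 * a vector of scheduler parameters. */
#define POP   16
#define GENES 8

typedef struct { int gene[GENES]; } genome;

/* Hypothetical: score a genome, e.g. by running with it for a while. */
extern double fitness(const genome *g);

static void mutate(genome *g)
{
    g->gene[rand() % GENES] += (rand() % 3) - 1;  /* small random tweak */
}

void evolve_one_generation(genome pop[POP])
{
    /* Selection: find the fittest individual... */
    int best = 0;
    double best_fit = fitness(&pop[0]);
    for (int i = 1; i < POP; i++) {
        double f = fitness(&pop[i]);
        if (f > best_fit) { best_fit = f; best = i; }
    }

    /* ...reproduction: everyone else becomes a mutated copy of it. */
    for (int i = 0; i < POP; i++) {
        if (i == best)
            continue;
        pop[i] = pop[best];
        mutate(&pop[i]);
    }
}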

Re: A dynamic, self-aware, and thinking kernel

Posted: Tue Jan 18, 2011 9:29 pm
by bewing
As, for example, a PhD thesis -- the idea of using an NE-based scheduler is inspired.

As a practical proposition, however, it seems far too computation-intensive to be useful. While it is true that you may be able to make a managed-code OS based on the concept, and theoretically a managed system has some efficiency advantages -- there are also fundamental downsides to them.
But in general for our scheduler algorithms, we are looking for algorithms that are both "fair" and O(1). NE is not O(1).
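
For contrast, here is roughly what an O(1) pick looks like, in the style of bitmap priority schedulers (a sketch, not anyone's production code):

#include <stdint.h>

struct task { struct task *next; };

/* One bit per priority level marks "some task is runnable here".
 * Picking the next task is a constant number of operations no matter
 * how many tasks exist. */
struct runqueue {
    uint32_t     bitmap;          /* bit p set => queue[p] non-empty  */
    struct task *queue[32];       /* one FIFO head per priority level */
};

struct task *pick_next(struct runqueue *rq)
{
    if (rq->bitmap == 0)
        return 0;                          /* nothing runnable */
    /* Highest set bit = highest runnable priority; one builtin
     * (a single bsr instruction on x86) finds it in O(1). */
    int prio = 31 - __builtin_clz(rq->bitmap);
    return rq->queue[prio];
}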

So I think your answer depends on what you are trying to achieve. Is this simply an intellectual/mathematical experiment? Or are you trying to propose a fundamental basis for a practical real-world OS?

Re: A dynamic, self-aware, and thinking kernel

Posted: Tue Jan 18, 2011 9:36 pm
by NickJohnson
The problem with your userspace/kernelspace moving idea is that (along with the fact that it would be hell to implement) drivers in the kernel simply are faster than drivers in userspace, pretty much no matter what. You usually have drivers in userspace for safety and modularity, and allowing drivers to be in either place removes those benefits. Therefore, there is no balance: either you want speed, or you want other benefits. Any adaptable system would be useless.

However, there are places where figuring out patterns and adapting could be very beneficial for speed. The most obvious is the scheduler, which you already mentioned. For example, in some microkernels (notably L4 for x86), there is a feature that slices address spaces into pieces (using segmentation IIRC) and puts multiple processes in each address space. When messages are passed between processes that happen to be in the same address space, the message passing is faster because there is no context switch. Figuring out which processes should go together is obviously a complex problem, and some AI could be beneficial there. Pretty much anywhere where there is a valuable resource that has to be shared is a candidate for what you're talking about. I don't know whether a Nash equilibrium is really going to be better than the specialized algorithms developed for these problems, but you can always try it.
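
A sketch of the kind of heuristic I mean, assuming the kernel keeps per-pair message counts (all structures here are invented for illustration):

#include <stdint.h>

#define NPROC 64

/* msg_count[a][b]: messages passed between processes a and b in the
 * last sampling window (hypothetical accounting). */
extern uint64_t msg_count[NPROC][NPROC];

/* Hypothetical mechanism that places b into a's (segmented) space. */
extern void colocate(int a, int b);

/* Greedy heuristic: co-locate the chattiest pair, if it clears a
 * made-up threshold that justifies the cost of the move. */
void colocate_chattiest(void)
{
    uint64_t best = 10000;  /* threshold: messages per window */
    int pa = -1, pb = -1;

    for (int a = 0; a < NPROC; a++)
        for (int b = a + 1; b < NPROC; b++)
            if (msg_count[a][b] > best) {
                best = msg_count[a][b];
                pa = a; pb = b;
            }
    if (pa >= 0)
        colocate(pa, pb);
}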

Also, as an example of a kernel design where adaptation is taken to the extreme: http://infolab.stanford.edu/~manku/qual ... thesis.htm

Re: A dynamic, self-aware, and thinking kernel

Posted: Wed Jan 19, 2011 9:00 am
by dKingston
earlz wrote:
dKingston wrote:
JamesM wrote:What is the advantage of such a system?

How would such a system deal with uncooperative modules, programs or other subsystems that fail to adhere to the rules/assumptions?
Since the entire OS is essentially game-based, it starts a game in which one player is the entity in question and the opponent is the bad module/program. The entity can be, for example, a driver going up against the user-level driver server. Whoever wins stays; whoever loses gets killed. In a Nash equilibrium, instability is unlikely to arise in practice, so it shouldn't be too much of an issue. If a highly unstable equilibrium were to occur, the kernel/user-level servers are usually top dog and will kill off the bad entity instantly. The advantage is that everything is treated equally, which can create a balance within the system, e.g. handing out scheduler time fairly while taking into account what the user-level servers say.
So you have two kernel modules fighting each other because one is "bad" and one is "good". I don't think fighting for control is the best thing to have taking place in a kernel environment.
I agree, that is not something I would want in a kernel environment. Hence, another flaw in my idea.
TylerH wrote:You seem to be describing something similar to a genetic algorithm. In that, you have a set definition of fitness and you plan to select for modules/programs/etc that better match the definition of fitness. This works well for simple AI, but I imagine it would be difficult to implement effectively for something as complex as a kernel. It's a promising idea, but you need to lock down the specifics, like your definition of fitness and how you plan to implement selection, reproduction, and mutation.
This makes MUCH more sense than trying to implement NE completely in a full-blown OS. Basically what you're suggesting is that the computer "learns and grows", no?
bewing wrote:As, for example, a PhD thesis -- the idea of using an NE-based scheduler is inspired.

As a practical proposition, however, it seems far too computation-intensive to be useful. While it is true that you may be able to make a managed-code OS based on the concept, and theoretically a managed system has some efficiency advantages -- there are also fundamental downsides to them.
But in general for our scheduler algorithms, we are looking for algorithms that are both "fair" and O(1). NE is not O(1).

So I think your answer depends on what you are trying to achieve. Is this simply an intellectual/mathematical experiment? Or are you trying to propose a fundamental basis for a practical real-world OS?
An NE-based scheduler could have some issues with intense computation, yes. I can't say whether it would be FAST or not. However, computing Nash equilibria doesn't seem TOO intensive, and it does seem fair.
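
To put a rough number on it (my own back-of-the-envelope, assuming a brute-force search over pure strategies): with n players holding k strategies each there are k^n strategy profiles to check. Ten players with four strategies apiece already gives 4^10 = 1,048,576 profiles per decision, so the games would have to be kept very small to run inside a kernel.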
NickJohnson wrote:The problem with your userspace/kernelspace moving idea is that (along with the fact that it would be hell to implement) drivers in the kernel simply are faster than drivers in userspace, pretty much no matter what. You usually have drivers in userspace for safety and modularity, and allowing drivers to be in either place removes those benefits. Therefore, there is no balance: either you want speed, or you want other benefits. Any adaptable system would be useless.

However, there are places where figuring out patterns and adapting could be very beneficial for speed. The most obvious is the scheduler, which you already mentioned. For example, in some microkernels (notably L4 for x86), there is a feature that slices address spaces into pieces (using segmentation IIRC) and puts multiple processes in each address space. When messages are passed between processes that happen to be in the same address space, the message passing is faster because there is no context switch. Figuring out which processes should go together is obviously a complex problem, and some AI could be beneficial there. Pretty much anywhere where there is a valuable resource that has to be shared is a candidate for what you're talking about. I don't know whether a Nash equilibrium is really going to be better than the specialized algorithms developed for these problems, but you can always try it.

Also, as an example of a kernel design where adaptation is taken to the extreme: http://infolab.stanford.edu/~manku/qual ... thesis.htm
I will agree that there cannot be a true balance; an adaptable system would be useless in that case, yes. I agree that the scheduler would be the #1 target for utilizing NE for fairness, but I'm worried about speed. And you're correct, I can always try. That "synthesis kernel" does look interesting.
berkus wrote:
dKingston wrote:These can all be achieved by contacting a "name server" in user space, for example asking "hey, what is 'x' player going to do?" and other communication. The kernel can generate strategies for anything incoming through the IPC, such as scheduling, context-switches, etc. For instance, if the OS notices that there is extreme overhead, it can generate a strategy to the user-level servers, dispatch it, and have them carry out the kernel's will.
Communication overhead will kill it. In a system with 10,000 threads, every thread asking the kernel about the 9,999 other threads will spend all of its time asking.

For the dynamic reconfiguration and migration of modules between kernel and user space, I recommend you read up on Pebble, an experimental OS from Bell Labs. It's available from the Lucent site for archaeological purposes, if you're interested. It implemented a call-gate mechanism that could be instrumented and modified by the kernel based on call latency and frequency analysis. This means that calls like getpid() (a favourite example in that paper) do not cause an RPC round trip into the kernel, because the kernel will replace them with the equivalent of "mov eax, PID; ret" directly in the process's address space. Of course, other optimizations are possible. Just RTFP.
Yeah, the overhead could be terrifying. I read the Pebble paper; it looks interesting.
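
If I understand the trick, the specialization amounts to something like this (my own sketch of the mechanism, not Pebble's actual code; it assumes the kernel has mapped a writable, executable stub page into the process for exactly this purpose):

#include <stdint.h>

/* Overwrite a process-local getpid() stub with "mov eax, PID; ret",
 * so the "call" never leaves the process's address space. */
static void specialize_getpid(uint8_t *stub, uint32_t pid)
{
    stub[0] = 0xB8;                  /* mov eax, imm32          */
    stub[1] = (uint8_t)(pid);        /* imm32, little-endian    */
    stub[2] = (uint8_t)(pid >> 8);
    stub[3] = (uint8_t)(pid >> 16);
    stub[4] = (uint8_t)(pid >> 24);
    stub[5] = 0xC3;                  /* ret                     */
}
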
JamesM wrote:
dKingston wrote:
JamesM wrote:What is the advantage of such a system?

How would such a system deal with uncooperative modules, programs or other subsystems that fail to adhere to the rules/assumptions?
Since the entire OS is essentially game-based, it starts a game in which one player is the entity in question and the opponent is the bad module/program. The entity can be, for example, a driver going up against the user-level driver server. Whoever wins stays; whoever loses gets killed. In a Nash equilibrium, instability is unlikely to arise in practice, so it shouldn't be too much of an issue. If a highly unstable equilibrium were to occur, the kernel/user-level servers are usually top dog and will kill off the bad entity instantly. The advantage is that everything is treated equally, which can create a balance within the system, e.g. handing out scheduler time fairly while taking into account what the user-level servers say.
I don't think you really answered either of my questions...
Well, the advantage of such a system, in theory, was to achieve a balance within the OS proper. The system would deal with uncooperative modules by, like I said, starting a game in which the players are the module and the user-level server that controls modules. It's basically like chess: if you make a wrong move, checkmate. If a program/subsystem fails to adhere to the rules/assumptions, it'll normally be killed off instantly.
----------------------------------------------------------------------------------------------------------------------------------------------------

With all that said, I think an NE-based scheduler can be done, but it has to be fast. It could be fairer than the Completely Fair Scheduler, since it could generate a strategy for complex scheduling, which in theory could offer a pretty nice speed boost. As TylerH mentioned, a genetic algorithm seems much saner than implementing NE throughout the whole OS, because it can "grow", as far as I understand it.

Re: A dynamic, self-aware, and thinking kernel

Posted: Wed Jan 19, 2011 10:32 am
by gravaera
Yo:

I think you're probably taking that theory and applying it too aggressively somewhere it doesn't fit. Kernel mode vs. user mode isn't any kind of competition: the kernel's duty is to make resources available to userspace, not to be suspicious of every userspace "player". Your aim is to schedule and run code, not to supervise a match. The hardware (CPU) provides the means for you to catch exceptional circumstances. You build your security model on that.

For example, you force userspace to allocate memory using a kernel API. If userspace dereferences a location that has not been allocated in its address space, you receive a page fault. That way you can catch most general system-compromising bugs in programs. But a kernel cannot detect malice beyond a certain point; that is where anti-virus programs, etc. come in. The kernel's job is to keep itself and the system up and running, in a state to execute instructions as asked by the user; the user and the various userland tools he has installed work to ensure that the up-and-running system is not compromised by malice.
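
As a sketch of that split (the handler name and the VM-region lookup are invented for illustration):

#include <stdint.h>

struct process;

/* Hypothetical: returns nonzero if addr lies inside a region the
 * process actually allocated through the kernel API. */
extern int vm_region_exists(struct process *p, uintptr_t addr);
extern void demand_map(struct process *p, uintptr_t addr);
extern void kill_process(struct process *p);

/* Page-fault handler sketch: a fault on an allocated-but-unmapped page
 * is serviced; a fault outside any allocation is a bug or an attack,
 * and the offender is killed. No game needed. */
void on_page_fault(struct process *p, uintptr_t fault_addr)
{
    if (vm_region_exists(p, fault_addr))
        demand_map(p, fault_addr);   /* legitimate: map the page in */
    else
        kill_process(p);             /* illegal access: terminate   */
}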

--All the best
gravaera

Re: A dynamic, self-aware, and thinking kernel

Posted: Wed Jan 19, 2011 5:00 pm
by Dario
Way too much computation. The kernel should be automated rather than "smart", and that automated part should be as fast as possible, providing only basic services for user programs. It's up to them to be "intelligent".