Multi-Kernel Self Deciding OS
- Lionel
- Member
- Posts: 117
- Joined: Fri Jul 16, 2010 2:16 pm
- Libera.chat IRC: ryanel
- Location: California
Multi-Kernel Self Deciding OS
I was thinking of designs for a smart kernel, and I thought, "Hey, why can't I let it make its own decisions?" Then after that, I thought, "That is completely ludicrous, so it might just work, but how?" So anyway, I came up with an idea. (Using the names I would use for my project, of course)
Layer 1 - Ununtrium Kernel
Ununtrium is the real kernel. It manages everything just like a normal hybrid kernel would, except it is kinda like a modular exokernel. It would be a real kernel, if it could do logic. Ununtrium basically acts as an interface and mediator for the Personalities in Layer 2. Ununtrium also manages all of the processes/scheduling. Just not the logic/deciding.
Layer 2 - Personalities
Personalities are basically processes that debate. Each one has slightly different programming from the others, and this allows them to think and act like independent entities. If Ununtrium gets something that isn't basic logic, it forwards it to the Personalities. They either pool their computing power to solve it, or, if it's something like "Is this process acting up?" or "What in the world should we do with this permission?", they debate over it. If a proposal gets more than half the votes, it passes; otherwise it is rejected. They might also learn from the user, get to know their opinions about stuff, and act accordingly.
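To make the voting concrete, here is a minimal sketch of how the majority rule could look. This is not real Ununtrium code; the personality functions, their names, and their answers are just placeholders.
Code:
#include <stdbool.h>
#include <stdio.h>

#define NUM_PERSONALITIES 3

/* Each personality gets the question and returns approve (true) or not. */
typedef bool (*personality_fn)(const char *question);

static bool balthazar(const char *q) { (void)q; return true;  }
static bool melchior (const char *q) { (void)q; return true;  }
static bool caspar   (const char *q) { (void)q; return false; }

static personality_fn personalities[NUM_PERSONALITIES] = {
    balthazar, melchior, caspar
};

/* More than half of the votes must be in favor, otherwise it is rejected. */
static bool decide(const char *question)
{
    int approvals = 0;
    for (int i = 0; i < NUM_PERSONALITIES; i++)
        if (personalities[i](question))
            approvals++;
    return approvals * 2 > NUM_PERSONALITIES;
}

int main(void)
{
    const char *q = "Is this process acting up?";
    printf("\"%s\" -> %s\n", q, decide(q) ? "passed" : "rejected");
    return 0;
}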
Implementation
I would, if it would work, do this. I would also have 3 personalities: Balthazar, Melchior, and Caspar.
Opinions?
P.S: Bonus points to whoever guesses where this idea originated from. Hint: It's a computer in an anime that starts with E
- tjmonk15
- Member
- Posts: 283
- Joined: Mon Jan 03, 2011 6:58 pm
Re: Multi-Kernel Self Deciding OS
Evangelion of course
Very interesting idea. But it would require a "near" true AI (or 3) to actually work me thinks...
- Monk
Re: Multi-Kernel Self Deciding OS
It's difficult to work with a person with 3 personalities, so what would a computer with 3 personalities be good for?
- Love4Boobies
- Member
- Posts: 2111
- Joined: Fri Mar 07, 2008 5:36 pm
- Location: Bucharest, Romania
Re: Multi-Kernel Self Deciding OS
It sounds pretty terrible to me. Moreover, this isn't a design; it's a very incomplete architecture that's not based on any requirements, which, in turn, are not based on an actual problem. Here are a few questions you should attempt to answer before investing any more time in your project:
- What is the problem that this project is trying to solve? Note that "the kernel should be able to make its own decisions" is a solution, not a problem. What is the advantage of having personalities with competing goals? If you jump directly to the solution without first considering the problem, you will pick your solution from the incorrect solution space. For instance, if your problem is that kernel policies are too hard to change at runtime (this perhaps being the reason for which you have personalities as modules), then you might reach the conclusion that a configuration file or a scripting language is more flexible and possibly simpler to implement.
- Is this really something that needs to happen at the kernel level? E.g., if you're voting on whether an old file needs compressing to save up disk space, you should probably implement this as a daemon or stand-alone program.
- What are the exact inputs and outputs? You talk about votes and solutions but you are very vague.
"Computers in the future may weigh no more than 1.5 tons.", Popular Mechanics (1949)
[ Project UDI ]
Re: Multi-Kernel Self Deciding OS
Why does it need to solve a problem? As has been said time and again, chances are nobody is going to use anyone's OS on this forum, barring a few exceptions, for any real work. Why not just build this as a 'why the hell not?' exercise. I've certainly done enough of them.
And, to back up my point, what about the first gentleman who discovered electricity. I can just imagine it. Sets up some equipment for something, including a magnet and a coil of wire. Moves magnet into coil of wire.
"Ouch! What the hell was that?"
Taking your argument, of not experimenting unless it solves a problem, that would be the end of the story. No electricity, and nothing to do OSDev on.
If you REALLY need a problem to solve, let the problem be your own curiosity!
Just my $0.02
- Love4Boobies
- Member
- Posts: 2111
- Joined: Fri Mar 07, 2008 5:36 pm
- Location: Bucharest, Romania
Re: Multi-Kernel Self Deciding OS
CWood wrote: Why does it need to solve a problem? As has been said time and again, chances are nobody is going to use anyone's OS on this forum, barring a few exceptions, for any real work. Why not just build this as a 'why the hell not?' exercise. I've certainly done enough of them.
For the hobbyist, except for things that are obviously well-defined problems, I see three sensible attitudes: wanting to learn OS theory, wanting to gain experience as a programmer, and wanting to do something different (something specific). However, if you think about it clearly, you will see that these too are problems. If it solves no problem, it's not useful. If it's not useful, it's a waste of time---why not use that time fruitfully instead?
CWood wrote: And, to back up my point, what about the first gentleman who discovered electricity. I can just imagine it. Sets up some equipment for something, including a magnet and a coil of wire. Moves magnet into coil of wire.
"Ouch! What the hell was that?"
Taking your argument, of not experimenting unless it solves a problem, that would be the end of the story. No electricity, and nothing to do OSDev on.
If you REALLY need a problem to solve, let the problem be your own curiosity!
You're missing the difference between inventions/engineering (what we were discussing earlier) and discoveries/observations (what you seem to be discussing now) entirely.
"Computers in the future may weigh no more than 1.5 tons.", Popular Mechanics (1949)
[ Project UDI ]
- Lionel
- Member
- Posts: 117
- Joined: Fri Jul 16, 2010 2:16 pm
- Libera.chat IRC: ryanel
- Location: California
Re: Multi-Kernel Self Deciding OS
tjmonk15: Correct. Fifteen internets to tjmonk15!
Love4Boobies: It does solve a problem. Sort of. It is basically a Knowledge Engine-esque OS that has a UNIX user-space. The problem is that humans are very bad at deciding things. Emotions and preconceived notions cloud our judgement. This aims to solve this. Remember, this isn't a general use OS. If the Personalities were the same, they would always agree with each other. If they are different, they could compromise with each other, the same way people compromise with their friends when they go out. And yes, while configuration and scripting are easier, think about it this way: This is completely dynamic. This would make it very easy for them to react to situations not thought of in the script or config file. And how would it not be flexible? They can think for themselves, so they are just as flexible as a living being would be (not for a long time, unfortunately).
Well, not exactly, but preferred. But because three AIs (which would be somewhat omnipotent - hence being in supervisor mode) need to run on one computer, it would probably be faster if their code was executed in kernel mode, so we can avoid the overhead of mode switches. Plus they would have direct memory access.
Inputs would be trusted network sources for data (Wolfram Alpha?), common sense, the user, and their own thoughts. The output is somewhat situational. For instance, work on data might be saved, shown, etc. based on the discretion of the AI. If it is solving a problem, same thing.
Also, sorry for not posting everything. I was really tired, and was typing this up before I went to bed.
CWood: It's kinda both: a "why the hell not" if there's no interest, a serious project if there is a lot of interest. I've done a lot of them too.
Thomas Edison? Well, he was crazy... But yes, I see your point. But it's always nice to have a reason so you're not like "Why did I do this again?".
Yes. But remember, curiosity electrocuted the cat. Unless they are in Norway.
Love4Boobies (most recent):
Because I am waiting for my fruit to grow (specifically my lemons and apples, it's winter over here). It does solve a problem; well, it will solve a multitude of problems, so it is bound to be useful. And yes, I have all those attitudes, but you never really lose them. I want to do something unique, but familiar. But entirely different.
Agreed, but he still has a point. It's like if I try to explain quantum mechanics with a barrel of apples: of course it won't exactly match up, but it still gets the major points across.
Anyway, thanks for your opinions so far. Love4Boobies, you have given me a lot to think about. ALOT.
P.S: Have any of you tried One Note on Windows 8? It is absolutely delicious!
- Love4Boobies
- Member
- Posts: 2111
- Joined: Fri Mar 07, 2008 5:36 pm
- Location: Bucharest, Romania
Re: Multi-Kernel Self Deciding OS
Lionel wrote: It does solve a problem. Sort of. It is basically a Knowledge Engine-esque OS that has a UNIX user-space. The problem is that humans are very bad at deciding things. Emotions and preconceived notions cloud our judgement. This aims to solve this.
Right, it's just that we're not yet entirely sure what you mean by "things." Like what, when to perform backups, or maybe which files are safe for execution? Being very specific would take this conversation closer to the ground. Besides, as a user, if I am to run such a system, I want to know precisely what to expect of it.
Lionel wrote: Remember, this isn't a general use OS. If the Personalities were the same, they would always agree with each other. If they are different, they could compromise with each other, the same way people compromise with their friends when they go out.
Why do you prefer having three personalities with competing goals, that vote, instead of one that takes all the information into account (in some situations, you can literally deal with an infinite amount of data in machine learning---see SVMs) and makes a decision that is perhaps more optimal?
Lionel wrote: And yes, while configuration and scripting are easier, think about it this way: This is completely dynamic. This would make it very easy for them to react to situations not thought of in the script or config file. And how would it not be flexible? They can think for themselves, so they are just as flexible as a living being would be (not for a long time, unfortunately).
Sorry, I should have explained better. What I meant was that users would be able to write scripts or use settings that are perfect for their particular goals. If the script or settings do not cover the new situation they are in, then they can just modify them. On the other hand, these personalities would likely create the following problems:
- Unlike with scripts and/or settings, the results would be rather unpredictable. I could easily write a script to do exactly what I want to. On the other hand, it would probably be very hard to tell the personalities that I want something other than what they want.
- The kernel would need to keep a huge database of recent inputs and outputs so that that sample can be used in case of an OS update. If any of the personalities change in a way that renders them incompatible with their old knowledge base, it will need to retrain them. For example, think about using a different type of ANN, or switching from a closed-form matrix solution to gradient descent in the case of a regression problem.
- There would be no standard behavior (because each system running Ununtrium would have a different history with the personalities), meaning that users would become confused when switching from one Ununtrium instance to another. Getting help in the case of a problem would be equally frustrating because each system would behave differently (e.g., on the wiki we have information, tutorials, and troubleshooting articles on how to set up a proper OSdev toolchain; these apply universally).
Lionel wrote: Well, not exactly, but preferred. But because three AIs (which would be somewhat omnipotent - hence being in supervisor mode) need to run on one computer, it would probably be faster if their code was executed in kernel mode, so we can avoid the overhead of mode switches. Plus they would have direct memory access.
Not really. There would be no need for the kernel and personalities to go back and forth. And there's no need to call the kernel directly. If the user wants to do something, he/she interacts with the personalities (which run either in user or kernel mode), which in turn interact with the kernel (making the switch if it hasn't already been made). The advantage of implementing them in user space is that they are easier to turn off if they get annoying.
Lionel wrote: Inputs would be trusted network sources for data (Wolfram Alpha?), common sense, the user, and their own thoughts. The output is somewhat situational. For instance, work on data might be saved, shown, etc. based on the discretion of the AI. If it is solving a problem, same thing.
The output can't just be decided by the personalities. You have to code in each class of output independently. Hence, you must decide exactly what kinds of problems the personalities will get involved in. Also, I am not too sure how to interpret "common sense."
Lionel wrote: Because I am waiting for my fruit to grow (specifically my lemons and apples, it's winter over here).
You prev!
Lionel wrote: Anyway, thanks for your opinions so far. Love4Boobies, you have given me a lot to think about. ALOT.
Glad to hear.
"Computers in the future may weigh no more than 1.5 tons.", Popular Mechanics (1949)
[ Project UDI ]
Re: Multi-Kernel Self Deciding OS
Lionel wrote: I would, if it would work, do this. I would also have 3 personalities: Balthazar, Melchior, and Caspar.
I am glad to point out that those 3 "personalities" come from the anime "Neon Genesis Evangelion".
- Griwes
- Member
- Posts: 374
- Joined: Sat Jul 30, 2011 10:07 am
- Libera.chat IRC: Griwes
- Location: Wrocław/Racibórz, Poland
- Contact:
Re: Multi-Kernel Self Deciding OS
Older sources date those three names used together roughly at the beginning of our era, so I guess the anime derived them from the older sources.
Not that it brings anything into the topic.
Reaver Project :: Repository :: Ohloh project page
<klange> This is a horror story about what happens when you need a hammer and all you have is the skulls of the damned.
<drake1> as long as the lock is read and modified by atomic operations
- tjmonk15
- Member
- Posts: 283
- Joined: Mon Jan 03, 2011 6:58 pm
Re: Multi-Kernel Self Deciding OS
Love4Boobies: Just wanted to hop in and explain the "original" concept (from Evangelion, as it seems you haven't seen it?)
This system was used for "large" decisions, like military deployments, fail-safe scenarios, etc. Not simple PC work. (Kinda like Joshua from "War Games")
In this case, as an OS for a PC, or as a hypervisor module, there are some interesting problems a system like this could solve, such as the following (one of them is sketched after the list):
- Thread prioritization
- Paging out memory
- Filesystem optimization (most-used stuff, or more-likely-to-be-used stuff, at disk locations with better access time)
- Preloading of applications/libraries (code)
- Preloading of images/documents (data)
- Threat detection (non-heuristic virus/malware scans)
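For instance, the application-preloading item could start as something as simple as the sketch below; the usage-log structure and the weights are completely made up, just to make the idea concrete.
Code:
#include <stdio.h>
#include <time.h>

struct app_usage {
    const char *path;      /* binary that might be preloaded      */
    unsigned    launches;  /* how many times it has been launched */
    time_t      last_run;  /* when it was last launched           */
};

/* Higher score = better candidate for preloading into the page cache. */
static double preload_score(const struct app_usage *u, time_t now)
{
    double hours_since = difftime(now, u->last_run) / 3600.0;
    double recency = 1.0 / (1.0 + hours_since);   /* 1.0 = just used */
    return 0.7 * (double)u->launches + 0.3 * 100.0 * recency;
}

int main(void)
{
    time_t now = time(NULL);
    struct app_usage editor = { "/usr/bin/editor", 42, now - 600 };
    printf("%s scores %.1f\n", editor.path, preload_score(&editor, now));
    return 0;
}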
Let me just say that with current technology this idea is probably not realistic due to the latency of decisions vs. the available processing power. It goes back to the old argument of "perfect" vs. good enough.
- Monk
- Lionel
- Member
- Posts: 117
- Joined: Fri Jul 16, 2010 2:16 pm
- Libera.chat IRC: ryanel
- Location: California
Re: Multi-Kernel Self Deciding OS
Love4Boobies:
Things... Just kidding. Actually, tjmonk15 has the right idea. Exactly on the money. Or in this case, 15 internets. (I have a lot of internets to give)
The three Personalities would have different thoughts, and when their conclusion has majority support, it is passed, unless they decide it is detrimental or extremely important, in which case they all have to agree (imagine them running a facility, and they are asked to self-destruct: you don't want just a majority, you want COMPLETE approval). And as tjmonk15 said, it is more likely correct if two conflicting entities agree on one thing. Also, they would all have access to data, which could be infinite. Not exactly sure how to get infinite data, possibly a kind of RAM disk buffer-thing.
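As a sketch (illustrative only; the two thresholds are the only real point here):
Code:
#include <stdbool.h>
#include <stdio.h>

#define NUM_PERSONALITIES 3

/* Routine questions pass on a simple majority; anything flagged critical
 * (the self-destruct example) needs every personality to agree. */
static bool decide(const bool votes[NUM_PERSONALITIES], bool critical)
{
    int approvals = 0;
    for (int i = 0; i < NUM_PERSONALITIES; i++)
        if (votes[i])
            approvals++;

    if (critical)
        return approvals == NUM_PERSONALITIES;  /* unanimous approval only */
    return approvals * 2 > NUM_PERSONALITIES;   /* simple majority         */
}

int main(void)
{
    bool votes[NUM_PERSONALITIES] = { true, true, false };
    printf("routine:  %s\n", decide(votes, false) ? "passed" : "rejected");
    printf("critical: %s\n", decide(votes, true)  ? "passed" : "rejected");
    return 0;
}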
Of course they're unpredictable, they're supposed to be sentient! Not unreasonable though. It might be possible for root to force something to be done, but if it is truly important (such as self-destruct), then it would still be up to them.
Probably using something similar to GoogleFS for data management, except more database-like, and adapted for a single (or multiple) computer(s).
Not really. Binaries would run the same, which is most of the experience. A lot of differences would be under the hood. Also, remember that this isn't general purpose: think government and military usage. Besides, each personality would develop over time to fit the user's needs.
They're not meant to be turned off. Turning them off would be the equivalent of "I don't like you, I'm turning you off now *gunshot*". There is also a great need for the personalities to be in kernel mode. This is so they have full memory access, don't have task/mode switches, and can be omnipotent within the confines of the machine.
Yes, there would be a bunch of abstract base classes. But remember, because they are omnipotent, why can't they create their own kinds of input? It would be absolutely amazing in the future if I get them to self-modify their code. This also might be the case where I could use configuration files.
Since these personalities are based on the characteristics and personalities of a subject-human, they would inherit a human's common sense.
I don't understand. Prev is not a word, and my lemon tree is actually dying right now, its almost winter.
Seriously, you are like the master at making people think, except my mom when she is angry.
serviper:
Yes.
I also renamed the Personality Transfer OS to Ununtrium (element 113 on the periodic table), made it a kernel, and made it run and manage the hardware and personalities.
Griwes:
Yup. The anime is extremely religious, with elements popping up everywhere. It is the biggest punch to your psychology, sanity, muffins, and faith ever created. It was worse for me, I watched it when I was nine.
tjmonk15:
I AGREE WITH EVERYTHING YOU MIND READER.
YOU READ MY MIND, MY TINY DAMAGED MIND.
Also, the latency issue would be fixed by having:
a) Fast hardware
b) Clusters of said hardware
c) Having four of those
or
a) Fast quad core machine.
or
a) A brain or artificial neural system.
The hardware might not be ready for it, but by the time I finish the basic components, won't the hardware already exist (in 2015?, 2115?).
Thanks guys, you've been really helpful.
Re: Multi-Kernel Self Deciding OS
Hi,
I'm curious...
Is there, or has there ever been, any form of AI that is not:
a) inferior to a sufficiently complex algorithm with no artificial intelligence
Or:
b) a sufficiently complex algorithm with no artificial intelligence (that merely seems like AI - "The science you do not understand looks like magic")
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Multi-Kernel Self Deciding OS
The only useful "AI" I've seen/used is fuzzy logic, but then this is not really AI, but rather a complex algorithm.
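For example, the core of such a fuzzy controller is nothing more than a couple of membership functions and a weighted average; the numbers below are made up, it is only an illustration.
Code:
#include <stdio.h>

/* Degree (0..1) to which x belongs to the triangular fuzzy set (a, b, c). */
static double tri(double x, double a, double b, double c)
{
    if (x <= a || x >= c) return 0.0;
    return x < b ? (x - a) / (b - a) : (c - x) / (c - b);
}

int main(void)
{
    double load = 0.6;                           /* measured CPU load */
    double low  = tri(load, 0.0, 0.2, 0.7);      /* "load is low"     */
    double high = tri(load, 0.4, 0.9, 1.0);      /* "load is high"    */

    /* Rules: low load -> 20% fan, high load -> 100% fan; then defuzzify.
     * (For this input both memberships are non-zero, so no divide-by-zero.) */
    double fan = (low * 20.0 + high * 100.0) / (low + high);
    printf("fan speed: %.0f%%\n", fan);
    return 0;
}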
- Combuster
- Member
- Posts: 9301
- Joined: Wed Oct 18, 2006 3:45 am
- Libera.chat IRC: [com]buster
- Location: On the balcony, where I can actually keep 1½m distance
- Contact:
Re: Multi-Kernel Self Deciding OS
Brendan wrote: b) a sufficiently complex algorithm with no artificial intelligence (that merely seems like AI - "The science you do not understand looks like magic")
Isn't human consciousness a sufficiently complex application of physics so that it looks like magic?