Many operating system ideas
Hi, I am a newbie on this forum. I have been interested in operating systems for quite a while. While I would like to totally destroy the current paradigm of computing, I believe that would be next to impossible. However, I believe we can take small steps in a different direction, to seek out other ways of doing things.
I have written a blog of many of my ideas. They range over persistence, functional languages, file system types, interfaces, security (I just realised how important security is), and anything else that somebody might be interested in.
The below is a repost of the first post of my blog; to see the full blog, go to http://windozer007.blogspot.com/ . Just for your information, I have updated some of my ideas, and this repost conveys much, but not all, of what my ideas are based on. For further clarity (yes, they are very, very scattered and not well explained) read the blog. I recommend reading the archives first. The more recent blog posts are a bit more scattered - less time on my part.
So, without further ado, here is the repost:
I would really like to see a great operating system that is not of the windoze or unix/linux/*nix type. But what would an operating system that does not live off these two giants be like? My ideas might often be unachievable, slow, or not useful, but here is what would be cool.
1. Operating system and program split: is it useful to have a very clear distinction; rather, does the operating system need to be an unchanging thing? Security and safety can be managed even if there is a lack of distinction. Many of the following ideas could be implemented in a program; however, if all programs will end up implementing them, why not make them a more basic service offered by the operating system? The question then becomes how much we can put into the operating system, and what a program actually needs.
2. Super-flexibility: I would like to see the whole operating system as a very small core that exposes mechanisms to do things. So the operating system might be more of a plugin command station. If the entire system ran on managed code, that might work.
3. Powerful information system: well, computers are really there to do a few simple things: show information, do stuff to information, get information from people, hardware and the internet, and store information. The current way of doing this is to separate information into files, and to order files into directories. A better way would be to give information tags, and to search for information.
4. Powerful and flexible viewing: in terms of showing information, it would be great if we were shown the essence of the information, rather than one aspect of it. That means we can specify how we would like to view it - sort of like how Windows Explorer can show files as icons, as lists, etc. We should not be limited in how we can view the same information.
5. Laziness: the operating system should only do what it must do. For the things it must do, it should use as few resources as it can get away with.
6. First class everything: do you like anything you see on the computer? Save the information down on the computer. Everything is first class, so you can apply functions to whatever you save. User interfaces are first class: you can edit them directly - like how you can edit a toolbar, except this works for all user interface elements. Settings are first class, so you can load them in an independent, common settings editor. Maybe a structured language like XML would be useful. We can make all functions curryable, since a curried function is just another function.
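To make the curried-functions bit concrete, here is a tiny Haskell sketch (the tagging function is invented just for this example). Partially applying a function yields another first-class function, which you can save and reuse like any other value:

```haskell
-- Every multi-argument function in Haskell is already curried:
-- partially applying it yields another first-class function.

-- A made-up "tag" operation on saved information.
tagWith :: String -> String -> String
tagWith tag content = "[" ++ tag ++ "] " ++ content

-- Partial application gives a new, reusable function for free.
tagAsMusic :: String -> String
tagAsMusic = tagWith "music"

main :: IO ()
main = do
  putStrLn (tagAsMusic "movement 1")  -- [music] movement 1
  putStrLn (tagWith "doc" "my cv")    -- [doc] my cv
```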
7. Separate/decouple all components of programs, and allow total distribution: if you have a chess game and you distribute the interface across two computers on a network, you have a networked chess program. If you use managed code, you can ensure that you only use a granularity and protocol that suit the distribution. Because you decouple things, you achieve higher reliability. This is essentially implementing model-view-controller, or a variant of the idea.
8. Separate functionality: why have many little bits of functionality that make up a program when you can simply separate the functions? For example, instead of having a text editor, we have a text viewer plus a set of text transform functions (which may take user input or other forms of input) that take the text from one point to another. Even something like pasting or typing a bit of text would be a function, which might look something like insert_text(source, destination).
9. Logging: all file operations (or information operations) are logged as the set of functions that have been applied. This gives you the power of undoing for free, but also gives you a backtrace of all raw sources of information. Since the log of what has happened to a file is a first class object, we might even have a file composed only of a log, which can act as a file.
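A toy sketch of a file-as-log in Haskell (the operation types are invented for illustration): the "file" is just its log, the current contents are a replay of the whole log, and undo is a replay of everything but the last operation.

```haskell
-- A file is represented by its log of operations, not its bytes.
data Op = InsertText Int String   -- position, text to insert
        | DeleteText Int Int      -- position, length to delete

apply :: Op -> String -> String
apply (InsertText p s) t = let (a, b) = splitAt p t in a ++ s ++ b
apply (DeleteText p n) t = let (a, b) = splitAt p t in a ++ drop n b

-- Replaying the whole log gives the current contents;
-- replaying all but the last op is undo, for free.
replay :: [Op] -> String
replay = foldl (flip apply) ""

main :: IO ()
main = do
  let ops = [InsertText 0 "hello", InsertText 5 " world", DeleteText 0 1]
  putStrLn (replay ops)         -- ello world
  putStrLn (replay (init ops))  -- hello world (one undo step)
```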
10. Separate the representation from the information content: I suppose that is what object-orientation is trying to do, but I am sure there is a better way than objects. Or maybe a reformulation of the idea.
11. Choice and non-connectedness: when the computer wants information, the programmer creates a dialogue box to ask for it. Instead, what should happen is that a query for information is made and passed on to whatever type of input program there is. That is, much like the model-view-controller idea, we can decouple input from the program. The program merely issues a query for information. We can then search the computer for this information, query the user for it (via a web interface, mobile phone SMS, dialogue boxes, a swipe card interface... anything), or use anything else that gets the information.
12. Layered: the operating system must present different levels of abstraction for different levels of users. Some users may be more comfortable with the familiar windoze interface, so that is what the OS looks like; but as you strip away abstraction, the system becomes more flexible, and more of an "object/information management system". Because we remove abstraction, we get more flexibility and interconnectedness.
13. List-edness: J (an APL descendant) rules because it allows you to process one element just as you would process an array of like elements. If we use managed code, when a function is called on a list we can automatically wrap a managed loop around it, so the program itself is not invoked once per element - the loop merely applies the operation to each element.
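Roughly this idea, sketched in Haskell (it does not capture J's implicit rank, but it shows that the loop is written once, outside the function):

```haskell
-- A function on one element; the "loop" is map, written once.
double :: Int -> Int
double x = 2 * x

main :: IO ()
main = do
  print (double 3)                     -- one element: 6
  print (map double [1, 2, 3])         -- a list: [2,4,6]
  print (map (map double) [[1], [2]])  -- nested lists: [[2],[4]]
```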
14. Never do anything (more than) twice: since everything is a function of some sort, and all functions are recorded, when a person does something twice the computer can be made to recognise the loop. It can then generalise the loop by providing generalised inputs, such as a list of inputs. Since a list of actions is a first class object, you can save this and use it later. So we get macros for free if we log. In fact, when you do such a process, we can be lazy and simply append the relevant operations onto the (first class) file operation log. When the information is accessed, it will then do the processing; alternatively, it will do it in the spare CPU time.
15. Reflection everywhere: since everything is first class, even metadata about how a program runs will be storable.
16. Flows: http://vvvv.meso.net/tiki-index.php?page=screenshots [warning, lots of pictures] - flows allow us to visualise the process of function composition.
17. Transparent: we will make many things like compression, encryption and network access transparent, because they are just convolutions of information access. We can implement these as interceptors on our flow charts.
18. Well, I am sure there will be more ideas from you! Yes, I need you! Please help. Thanks.
Anyway, regardless of whether this will work or not, I think a nice name for the OS would be Tao. I tried to base the above ideas on the following concepts from the Tao/Dao.
* It will allow the sage to experience without abstraction (via removable layers) and accomplish without (too much) action.
* It will treat all things equally (as first class objects).
* It will draw upon experience (macros/operation lists) to accomplish.
* Once the purpose is achieved, it will retire (lazily).
* It will be scarcely known.
* It will use the unchanging (first class objects) to represent motion.
* It will deal with small problems (fine grained functions) to deal with the large.
Many of your ideas are things I have dabbled with over the last 5 years. I must admit that I am not as "abstract" as you are, but here is my critique.
1. This can be easily covered by abstracting a computer as what it really is: a finite computing resource. Provide basic, yet specific, system initialization that will allow a security-oriented Memory/Process Management supervisor. You may wish to abstract further into the von Neumann architecture, which is quite like the human brain in that you have long-term storage (i.e. disk/network drives) as well.
2. Excellent idea. Type-safe or managed code is a good idea, but it would be better to compile it into machine code on demand. This *may* also help in dissolving the need for protection levels (i.e. supervisor/user), if program intent/logic can be determined and approved prior to runtime.
3. Filesystems work in that you should have very fast access to the data you need despite the immense amount of data stored. You need to specify how you intend to replace the basic filesystem idea with something greater than (or at least equal to) it.
4. Chicken or the egg. Once you get deep into memory management, you start realizing the issues of ideological OS design. It takes information to define information. That information also needs to be stored/sorted somehow. This is a limit of the User Interface and not so much the OS.
5. Ideally, yes. Using CPU downtime to defrag your memory space would be a very smart thing to do... but would not necessarily be deemed "necessary". Windows on the other hand...
6. Same principles as #4. How information is manipulated is based on the complexity of the User Interface. You seem to be heavily invested in type-safe material (e.g. XML) at this point. May I suggest if you go this route, that the User Interface adhere to the same principles as you suggested in #5.
7. This requires the application programmer to be aware of and capitalize on basic facilities that will be closely related to multi-threading concepts. Furthermore, you will probably have to provide abstractions for certain threads within a process/program.
8. Already established. This is the basic concept of a shared library or DLL. A good example is the Win32 API. Loading many shared code libraries can have a negative impact on performance, as can over-generalizing functions. May I suggest a *very* careful study of User Interface design. Group your common routines by class/function.
9. Similar to #4, it will take a considerable amount of room. The basics of this are more efficiently covered by the concepts of journaling filesystems such as NTFS, EXT3 and Reiser.
10. Similar to #4 and #9.
11. Definitely the job of the User Interface and accompanying tools.
12. When you remove abstractions, you remove structure. It takes more time and space to keep track of those structures until they are restructured into a different abstraction.
13. Generally solved by the miracles of paging. Reuse the code pages, but define new data pages. Efficiently recalling functions is only half the battle; you have to account for the new data that is to be manipulated in that function.
14. The cost of learning is the time it takes to "unlearn" something and learn again. An exception to the rule in a time-critical application can be disastrous in such an environment.
15. Yes, but don't give up too much on the idea of structure... no one will want to develop applications if there are no standards to follow.
16. Good for humans. Bad for computers.
17. You mean drivers???
It is OK to have abstract ideas, but you still need a place and method to implement them. For example, modern computing is somewhat mystical and to be awed... you can do just about anything in your imagination (like your abstract concepts)... but remember that the hardware still has to be there to provide those abstractions the real-world manipulation mechanisms they require. The same relationship holds for your OS design. You will have to provide some layer of standards/structure/consistency in order to allow such abstract ideas to thrive.
PS: You'd probably *really* like Unununium and/or Singularity
Just a few clarifications (read that blog to get more details). Sorry for writing so much, but I feel passionate about this.
1. In later versions of my idea, there are no programs. A software package would consist of: {definitions of data, functions to manipulate the data, one or many meta-viewers for the data, a set of standard user interface hooks to apply a function}. They are basically plugins that are as minimal as possible. It also means that you can literally embed anything in anything else - though usually that would be a high-level power not given to newbies used to windows.
2. The operating system is in effect a dynamic compiler, creating the code for things just as they need to be done. This can involve taking several functions and merging them together (simple loop fusion, tail-recursion removal, etc). I have not really changed my understanding of this. However, making it efficient is a large area of research.
3. File systems: I am not totally sure about this (my opinions and ideas are always changing on this one), but currently it is as follows. There are many types of information that I believe should be managed differently, because the underlying intent is somewhat different. However, they will all be tied back together with the resource locator and transform points. Also note the main differences between searches and hierarchical structures: 1. the order of keywords does not matter in a search; 2. you can attach non-nested tags; 3. you can o
One type is user information - the stuff the user cares about (timetables, finances, documents). User information has a structure of tags attached to each piece of information. Of course, you would probably attach a tag to an amalgamation of information, rather than to one small piece of information. Using a hash would give this good performance. The slow bit would be merging search results. Then again, user information would be limited to 1000 documents at most. When there are specialised types of information, such as music, they can be amalgamated behind a transform point.
The other is computer information, which the applications (read: functions) care about (such as profiles, styles, etc). I envisage that this information can be managed as a generalised object registry (secured so that functions can only read their own info - to avoid viruses). This can be implemented as a function of some sort, getinfo(info_name, misc stuff here); a sketch follows after the next paragraph. A hash would probably give good performance.
The final type is temporary information - a function outputs information, and this information exists until it is taken in by another function, or explicitly destroyed. If you tag temporary information, it becomes searchable - i.e. more handy - and is now user information. If you don't tag it, it will just float around, and will have to be explored and cleaned periodically. Generally a script (a list of functions acting on each other) will make sure that all information is passed around, so that there is only one final answer. If it does not, then there is a memory leak.
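As promised, a rough sketch of the computer-information registry, assuming a hash map keyed by (owning function, info name) so one function cannot read another's entries - all names here are invented:

```haskell
import qualified Data.Map.Strict as M

-- Keyed by (owning function, info name), so one function's
-- settings are invisible to another function's lookups.
type Registry = M.Map (String, String) String

getInfo :: String -> String -> Registry -> Maybe String
getInfo owner name = M.lookup (owner, name)

setInfo :: String -> String -> String -> Registry -> Registry
setInfo owner name = M.insert (owner, name)

main :: IO ()
main = do
  let reg = setInfo "text_viewer" "font" "10pt mono" M.empty
  print (getInfo "text_viewer" "font" reg)  -- Just "10pt mono"
  print (getInfo "some_virus" "font" reg)   -- Nothing: not its info
```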
4. Viewing information: I propose (or rather, re-invent) the viewer. Basically a viewer takes information of a specific kind and then physicalises/realises it (makes it real). It takes information and turns it into a physical representation - such as 1, 2 or 3d graphics, a 1d pure note, a 1d smell. Don't forget time is a +1d, so 1+1d, 2+1d, 3+1d graphics, 1+1d music, 1+1d changing smell sensations. That takes care of viewing information. Don't forget that most computers do not have a 3d hologram screen of any sort, so you will generally have a camera function that converts the information you want to display into information you can display. E.g. 3d->2d (camera function), 1d->2d (projector function), smell->text (for people without smell generators), text->2d (for people without text display devices, but who have monitors).
Then comes arranging information. We can arrange things in a few ways: temporally and spatially. This is what a metaviewer does. It takes information about arrangement and arranges sub-viewers according to that arrangement info. Example: a text editor is a vertical list arrangement of toolbar, toolbuttons, tabs, text. The toolbar is a horizontal list arrangement of text viewers. The toolbuttons are a horizontal list arrangement of 2d graphics viewers.
And so on. Now we see that our text editor has essentially become: 1. a text file describing its composition (list of toolbar, toolbuttons, tabs, text); 2. viewers for text and graphics (provided by the system); 3. specific viewers for text (such as cool highlighting), possibly provided by first streaming the text through a formatter before putting it into the viewer; 4. a set of information manipulation functions, namely (insert character, delete character, and set format of character); 5. a set of user interface hooks.
Also, stuff like multiple desktops and Compiz-type 3d rotating cube desktops are merely types of metaviewers. Windows are metaviewers that allow arbitrary positioning and resizing of information on the screen. 3d cube desktops are functions that take the windows, convert them to 3d, place them spatially (a metaviewer), and convert this back to 2d to be displayed by the monitor.
Once again, we can preserve the structure in the file system, and since the structure of the file system is a text file, we can apply a metaviewer to it - and suddenly we have Windows Explorer. It is a metaviewer on the file system.
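To make that decomposition concrete, here is a Haskell sketch of a UI as pure data plus one generic render function (all the names are invented, and "rendering" here is just text):

```haskell
-- A UI is data: arrangement nodes with leaf viewers at the bottom.
-- The metaviewer is just the function that walks the description.
data Viewer = TextView String      -- leaf: show some text
            | Horizontal [Viewer]  -- arrange side by side
            | Vertical [Viewer]    -- arrange top to bottom

render :: Viewer -> String
render (TextView s)    = s
render (Horizontal vs) = unwords (map render vs)
render (Vertical vs)   = unlines (map render vs)

-- The "text editor" is a description, not a program.
editor :: Viewer
editor = Vertical
  [ Horizontal [TextView "File", TextView "Edit"]  -- toolbar
  , TextView "hello world"                         -- the text itself
  ]

main :: IO ()
main = putStr (render editor)
```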
5. Laziness: by laziness, I do not mean defragging the computer. I mean functions that do not have a viewer attached to them. While these functions' results are not looked at or needed by other functions - and are thus (in a weird quantum physics analogy) not necessary - they might not be calculated.
6. First class information: because we rely on viewers so much, we need everything to be first class, so that our viewers can actually attach to it. Once we have viewers capable of viewing it, we can load up our standardised tools to do stuff to the information.
7. Distributed stuff: well, when I wrote that, I did not understand the complexities and undecidability of various deadlocking problems. Now that I do, I am more cautious about this landmine. Basically I mean that functions can exist anywhere, and a pipe that passes information between functions does not have to be a local pipe between local functions, but can literally pipe across any network (which is, in essence, a very large pipe). The problems start to roll in because this causes time inefficiencies due to networking, encoding and synchronisation. Laziness was supposed to be the way to solve this problem, but it is a hard one. A good research area.
8. Functions: having functions in different namespaces, paged across different RAM areas, and needing to go through Windows for each DLL call is what makes DLL calls slow. Instead, because functions are so fundamental, we will not have any namespaces, we will eliminate paging, and we will make the managed code compile a direct link (with validation etc.) between functions. In fact, most functions (being small things) will be inlined, loop-fused, or otherwise optimised. Functions will all be very small, even atomic. The trick is that even though we have small, readily testable functions that are slow, the compiler will inline most of them. The compiler "weaves" the program at run time, depending on the situation. As for namespace pollution, which is a big problem, I am unsure how to handle it. If we truly run on intents, the intent to do a thing is unaffected by what type of structure it is in.
9. Journalling file system: I am not sure that what I am describing is a journalled file system as I understand it. Journalled file systems write old versions; I delete all old versions. What we store is a list of the functions that have been applied to get to that version of the information. This is a list, and updating a file is basically appending a function to it. All data sources will be stored - a data source being something that cannot be recreated by looking at the functions applied. Basically, they are functions not just of the internal computer, but of the world and the computer. Thus text input, downloads, data entry - basically anything temporary that depends on the state of the universe outside the computer - will be stored, unless explicitly deleted. This allows us to go to any stage of the log. Although, for performance, it might be good to keep a few snapshots of the past.
11. User interface: what is a user interface? A viewer and a set of input hooks. The user interface is part of the system, and thus we get amazing uniformity in the operating system, leading to a nice feeling of comfort. Daydreams.... Admittedly, viewers are expensive, since they involve many layers of callback functions. But I am sure that with further use of the operating system's dynamic compiler's refactoring, we can rewrite the functions out of existence.
13. Dealing with lots of data: well, laziness is good - functions do not act until asked to act. Thus we can fuse lazy functions into a pipeline that can be executed sequentially, with one temporary variable, over the entire list.
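This is exactly what lazy evaluation in Haskell gives you: composed list functions behave like a single pass, and nothing is computed until something downstream demands it. A minimal sketch:

```haskell
-- Each stage is lazy, so composing them does not eagerly build
-- intermediate lists; only the demanded prefix is ever computed.
pipeline :: [Int] -> [Int]
pipeline = take 5 . filter even . map (* 3)

main :: IO ()
main = print (pipeline [1 ..])  -- even works on an infinite list:
                                -- [6,12,18,24,30]
```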
14. Macros: I am not sure what you were saying, but I was pushing the idea that since the entire operating system is a set of functions, you can look for patterns and write generalised macros for them (this needs human intervention to work, though).
15. There will not be much need for developers, except to be very able at using the primitives of the OS. Developers will be much like sewing machine engineers: they exist, they are indispensable, but there are not that many of them. Rather, we have sewing machine pattern makers and sewing machine users.
16. Flows are just another composition metaviewer that takes a graph and displays it. We can thus use general graph editing functions to edit our function compositions. With a graph editor we can access the entire computer, if you can point the viewer to the right place (this needs privileges, and the newbie shell removed, though). The file system is just a list - you can view it with a list viewer. The operating system is a list of running functions and connecting pipes - view it with a list viewer.
17. Transformation points: you talk to one like a normal file system object, it does magic behind your back, and you get what you want. Compressed files are just normal files behind a compression transformation point. Networked computers are just normal files with a "hidden" network access layer in between.
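A sketch of a transformation point as a pair of inverse functions wrapped around ordinary reads and writes; the run-length coder here is just a stand-in for real compression:

```haskell
import Data.List (group)

-- Encode on the way down, decode on the way up; everything above
-- the transformation point only ever sees plain text.
rleEncode :: String -> [(Int, Char)]
rleEncode = map (\g -> (length g, head g)) . group

rleDecode :: [(Int, Char)] -> String
rleDecode = concatMap (\(n, c) -> replicate n c)

main :: IO ()
main = do
  let stored = rleEncode "aaabbc"  -- what actually hits the disk
  print stored                     -- [(3,'a'),(2,'b'),(1,'c')]
  putStrLn (rleDecode stored)      -- "aaabbc": what the user sees
```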
18. If you braved all that bad text above, I will tell you about the OS's essence. It is the Tao. I still don't understand the Tao. I still don't fully know my OS ideas, and many are just as loose as the Tao is. I hope to define things a bit more, and make a start on the project: viewers, information and functions.
I don't know if you realize this, but many of the things you are saying are just abstractions of lower-level functionality that you still must implement.
Ideas don't construct an Operating System; programming logic does. Rewording concepts does not alleviate the work required to achieve your goal. I would say to start implementing (programming) your ideas and see if/how they change.
In your case, time is the dimension of most concern. We might not have the kind of commonplace technology needed to see your project through. No matter how you state your intentions, when dealing with abstractions the bottom line is the same... it will take more time to process those abstractions.
I do not wish to discourage you, but I've seen plenty of abstract OS concepts come and go. I see them all the time, "futuristic OS ideas" and so forth. So far, UUU is the only one I've seen with any amount of *realistic* and non-corporate success.
I actually wish to encourage you, which is why I insisted on the actual implementation of these ideas... instead of having them fade away in dream-land like the rest of them do. The other factor in actually implementing them is that you will answer your own questions about which things can and cannot be done in a reasonable manner.
Good luck with your project
Haha, that is true, but I don't mind slower personal computing, as long as full-screen videos run in real time (well, that is why we have computers ^_^).
But as I go further, I realise that if the thing is implemented correctly, it will in fact be very easy to make work. The only difficult bits would be the device drivers. Other than that, what I am doing is providing ring after ring of abstraction away from raw processing power, until you reach what we seem to have now. The benefit of this is that you can strip away rings of abstraction. At the final ring, what you have might just be a text viewer (read: a command line thingy).
In terms of time/space complexity, I believe the efficiency is all right (most algorithms I would employ would be at most O(n log n)). However, I do worry about the constant factors, because of the generalised methods.
So, what I am currently finding difficult:
1. Finally settling down on one thing. I want to be sure that my idea is OK before doing any solid design of it.
2. Writing device drivers - hard... no interest... synchronisation - basically it ruins the perfect stuff. I would refactor all of this out to some magic function.
3. Choosing the language for this project. Hard choices..... definitely managed code for everything, including the "kernel". Of course, the dynamic compiler cannot be managed, since it is a "superkernel". The compiler is ubiquitous; it moves around, rewriting stuff as we go along.
4. Paging and the limitations of the physical - I would very much like to refactor these out of the OS code, since future computers may have different paging styles, memory styles, etc. These aspects should all be in one concentrated bit of source code.
5. Making a compiler that can fuse lazy functions. It is difficult.
6. Debugging. I am no good at debugging. Especially for stuff like system software, we would need to work harder. Of course, one technique is to make it so that when a fault occurs, the over-compiler is pushed into action, writes an entire debugger into memory, gives it control, and then allows you to debug the code. (That would be so cool.) And by that, I mean that it writes an entirely different "debug OS" into memory and passes control to that OS.
Basically this boils down to:
1. Physical limits - computers need to page, we do not have infinite-core computers, we need device drivers, and there are time/space limitations
2. Knowledge limits - knowing how to write a compiler
3. Time limits - I am not immortal
The plan is:
1. Decide on a language
2. Steal a lot from Linux/OSKit for the low-level stuff
3. Convert it into my language
4. Write such a compiler
5. Leave minimal actual low level code lying around
6. Start leveraging the power of the OS in the development of the OS
That is true, but I am a lazy programmer. I would never implement something that already exists and is obviously going to be much better than what I would write, due to the hundreds of thousands of man-hours spent on it. That is why I would like to make it interesting. If it were just a look-alike or a clone, or followed Windows/*nix design principles, then why not just get Windows/*nix? Sure, it is educational, but so is taking a course in operating system design at uni.
It is less about looking like existing OSes and more about dealing with the reality of the architecture you are working with.
In most respects, Windows and Linux are not much different. They both operate on a flat address space. They both utilize paging to achieve Virtual Memory, multi-tasking and shared address space. This was all influenced by the design (and limitations) of the x86 architecture. Other than the corporate backing of Windows and the amount of "official" device drivers available, the two just use different names and methods to achieve the same concepts.
However "elite" you wish to be, you will have to use these same basic concepts (i.e. conform to the architecture) and put your abstractions on top of it. Point in case, INT 0x2E vs. INT 0x80... both rely on the interrupt mechanism provided by the x86 architecture.
To be honest, you sound more like you are interested in a new and abstract GUI than an entire OS. Mind you, I say this because an entire OS involves the kernel as well. If you are not willing to conform to standards and do the bare minimum that Windows and Linux "have already done", then you are simply not going to enjoy kernel programming.
You may want to grab a copy of the Linux kernel and start writing new device driver and graphical user interface concepts for it. See how well your ideas hold in a proven operating environment.
That sounds good. One of my ideas was to write it as a virtual machine on Windows, much like Squeak. However, I am unsure how to manage dynamic rewriting when under Windows. There might be security etc. in Windows that will stop you. Writing an OS ensures that nobody will mess with what you are writing. Getting most of it from an OS kit was one of the ways to speed it up.
Also, wouldn't it be cool to be able to dynamically modify the kernel? Or to look at your running kernel as a graph of connected functions? (Well, not really.) After all, a process manager is basically a list manager that handles lists. A window drawer is basically just a viewer of given variables: (x, y, width, height, style) -> (2d picture).
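In type terms, that window drawer is just a pure function from window parameters to pixels. A toy sketch (types invented, and the x/y positioning omitted for brevity):

```haskell
-- A window drawer as a pure viewer: parameters in, picture out.
data Style = Plain | Bordered

type Picture2d = [String]  -- a toy "bitmap": rows of characters

drawWindow :: Int -> Int -> Style -> Picture2d
drawWindow w h Plain    = replicate h (replicate w '.')
drawWindow w h Bordered =
  let edge = replicate w '#'
      row  = "#" ++ replicate (w - 2) '.' ++ "#"
  in  [edge] ++ replicate (h - 2) row ++ [edge]

main :: IO ()
main = mapM_ putStrLn (drawWindow 6 4 Bordered)
```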
If it were a virtual machine, then I would create compatibility with Windows by mounting a virtual drive that derives information from the virtual machine. I have seen (only) one kernel, and I found it mostly understandable. The writer managed to put all the x86 architecture quirks into one .c file, which had about 50 inline assembly functions. The rest was written in C. This makes it portable to other architectures, given that they use stuff like interrupts.
I have written a C compiler (without knowing any theory, nor about yacc and lex), so now I see the exercise of writing an imperative compiler as an exercise in using yacc and lex and then doing some extremely complicated gluing.
Overall, I am really unsure of how to proceed. Any directions/tips?