Why So Much Complexity On Engineering?
More complex doesn't mean better. And for us OS developers, so much complexity means it takes a lifetime for a project to reach a truly mature stage.
So, why not invent new algorithms for things such as multitasking, filesystems, memory managers and so on, in a way that is innovative, easy (brief) and effective, like in the first days of computing, when impressive things could be done in a really small program?
At least I try to "invent" such new algorithms, because after 2 years of learning a bunch of (in my opinion) overcomplicated specifications, I feel uncomfortable and want to look for alternatives.
It's not without reason that all of these (our) projects don't seem to go beyond simple networking, a simple GUI, simple filesystem handling, and virtually simple everything. So, why not look for ways of tipping the balance, so that simple work produces great results instead of great effort producing simple results?
I think that the so-called "simplicity" of earlier systems is a deception. One half of that "simplicity" was our lack of deeper understanding, the other half was simple lack of features.
AmigaOS - a highly efficient multitasking microkernel. No memory protection, though.
Early MacOS - small, fast, slick. Unfortunately it didn't do preemptive multitasking, so one "bad" application could freeze your system.
Oversimplified and only two examples, but I think they show my point.
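To make the preemption point concrete, here is a toy cooperative round-robin loop in C (an invented illustration with made-up names like well_behaved and badly_behaved; it is not how classic MacOS or AmigaOS were actually implemented): tasks run until they voluntarily yield, so a single task that never yields freezes everything else.

```c
/* Toy cooperative "scheduler": tasks run until they call yield().
 * Illustration only -- not how any real system was implemented. */
#include <stdio.h>

#define NTASKS 2
static int current = 0;

static void yield(void)
{
    current = (current + 1) % NTASKS;   /* politely hand over the CPU */
}

static void well_behaved(void)
{
    puts("task A did a little work");
    yield();                            /* cooperates, so others can run */
}

static void badly_behaved(void)
{
    for (;;)                            /* never yields: from here on,   */
        ;                               /* task A never runs again       */
}

int main(void)
{
    void (*tasks[NTASKS])(void) = { well_behaved, badly_behaved };
    for (;;)
        tasks[current]();               /* no timer interrupt to step in */
}
```

Under preemptive multitasking, a timer interrupt forces the switch even when a task never cooperates, which is exactly the protection early MacOS lacked.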
Every good solution is obvious once you've found it.
Maybe a great deal of what is in use is really a workaround, and could be stripped down to something that still provides security and functionality: mid-sized but simple, instead of the huge and way-too-complex thing it is by now.
If that's so, we are well and truly lost in our efforts; and if we ever want this to be something more than a hobby, we won't be able to keep up, and it could eventually turn into something we can't afford anymore...
Oh no, my friends, I think there are plenty of simplicity options not yet implemented, like not redefining standards every couple of years. At least for our software that can be a goal, unlike Microsoft, which intentionally screws up older versions of its specifications with new features that not only do exactly that but are also designed to cause incompatibility.
That's a perfectly avoidable plague. And if a design is really outstanding, it will be thought and re-thought until it is both internally straightforward and a natural fit for the rest of the logical system. If that doesn't happen, then the design is so imperfect that it won't be nice to implement, and it would be a real pain not to find a better solution (like the step from WAV to MP3: not very simple, but a very good example of what I mean: stable, very hard to make obsolete, and very tight; it requires knowing all of its principles, but it is about 70% of the way to ideal).
New Jersey! Unix! Son of Asmodeus!

ehird wrote:
Internal simplicity manifests as external complication. You can either have simple internals that act like voodoo, or simplicity that internally is a horror.
I prefer the former.
As you can obviously tell, I prefer that my abstractions work correctly and *simply* (with complicated implementation) rather than make users learn the black magic of my system.
I don't think there's that much black magic in the basic Unix functionality at userlevel. In fact, I'd say Unix belongs to the class of "looks simple from outside, total mess inside", at least if I have to pick one of them.
The real problem with goto is not with the control transfer, but with environments. Properly tail-recursive closures get both right.
- Colonel Kernel
This is going to be pretty abstract, so bear with me.
At the architecture level, complexity is defined as the interconnectedness of things. Things can be functions, modules, classes, whatever. A connection between them is a dependency of some kind -- call, communication protocol, common file format, etc.
The goal of a good design is to reduce the interconnectedness of things as much as possible. I think a lot of people lose sight of this and try instead to create something that looks simple on the outside but is nightmarishly complex on the inside. This is called "simplexity".
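To put the interconnectedness idea in concrete terms, here is a minimal C sketch (an illustration only; the parse/sink names are invented for the example, not taken from the post): the same routine, first hard-wired to the console, then depending only on a one-function callback interface.

```c
/* A minimal sketch of cutting a dependency. The names parse, sink_fn,
 * console_sink and file_sink are made up for this example. */
#include <stdio.h>

/* Coupled version: the parser is hard-wired to the console. */
static void parse_coupled(const char *input)
{
    printf("parsed: %s\n", input);           /* direct dependency on stdout */
}

/* Decoupled version: the parser only knows a one-function "sink" interface. */
typedef void (*sink_fn)(const char *line, void *ctx);

static void parse(const char *input, sink_fn sink, void *ctx)
{
    sink(input, ctx);                        /* has no idea where output goes */
}

static void console_sink(const char *line, void *ctx)
{
    (void)ctx;
    printf("parsed: %s\n", line);
}

static void file_sink(const char *line, void *ctx)
{
    fprintf((FILE *)ctx, "parsed: %s\n", line);
}

int main(void)
{
    parse_coupled("one");                    /* can only ever print */

    parse("two", console_sink, NULL);        /* same parser, two destinations, */
    FILE *f = fopen("out.txt", "w");         /* no change to parse() required  */
    if (f) {
        parse("three", file_sink, f);
        fclose(f);
    }
    return 0;
}
```

The mechanism matters less than the dependency count: parse() now has exactly one outgoing connection, the sink signature, instead of one per output destination.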
Although I can't directly relate it to OS design, I've seen this before in typical OO designs. People who understand the "letter" but not the "spirit" of OOD will look at the requirements, create a class for every noun, and proceed to absorb huge amounts of responsibility into each class. This ultimately leads to all kinds of crazy dependencies between classes. Sure, on the face of it, a system with only 20 classes seems simpler than one with 200, but verifying the correctness of any of those 20 will be more than 10 times harder than doing the same for one of the 200.
I'm not sure I believe in the dichotomy of "simple internally, complex externally" and vice-versa. I think a well-designed system's interface reflects its internals while leaving out crucial details. In other words, you can simplify through abstraction, but not to the point where you are re-defining the problem you have to solve. That's when you get leaky abstractions. A good system will usually be quite complex when you look at the finest grain of detail, but it will not be needlessly complex (like COM, CORBA, or EJBs... yuck!)
Perhaps not surprisingly, all these attitudes make me a fan of microkernels, while simultaneously making me uncomfortable with how paging I/O is typically implemented in microkernels. It's always felt like an abstraction inversion to me...
~ wrote:
Oh no, my friends, I think there are plenty of simplicity options not yet implemented, like not redefining standards every couple of years.

I'm sorry to say you're dreaming. Technical sensibility is rarely involved in the development of standards. A lot of them are pushed by vendors with an agenda (*cough* M$ *cough*) and don't necessarily work very well. Unless you plan to overthrow capitalism, I suspect these sorts of shenanigans will keep happening. I think this is why I've become less interested in studying commercially available technologies and more interested in OS and programming language research in recent years.
Top three reasons why my OS project died:
- Too much overtime at work
- Got married
- My brain got stuck in an infinite loop while trying to design the memory manager
I guess the efforts made to make GNU software, and beyond, readily available as something like public domain are a sort of technological socialism, in which the intention is to have knowledge resources not controlled or decided by the big corporations, which are more interested in making big money than in advancing the state of the art.
Surely there must be a way of getting a simple-inside, simple-outside product, with reasonable complexity, but also nothing of the "#define _number_1_ 1" kind. If not, well, it will be the same story of having to learn to deal with strict logical thinking at the expense of a big (huge, actually) investment of time and everything else, for the whole cycle, over and over, getting cumulatively more complex.
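For what it's worth, here is a throwaway sketch of the difference (only the "#define _number_1_ 1" line comes from the paragraph above; SECTOR_SIZE and the two helpers are invented for the example): a name is worth its layer of indirection only when it carries intent.

```c
/* The kind of "abstraction" being poked at above, next to one that earns its keep. */
#define _number_1_  1            /* adds a layer of indirection, zero meaning      */
#define SECTOR_SIZE 512          /* assumed value; the name itself carries intent  */

unsigned bytes_for_sectors(unsigned nsectors)
{
    return nsectors * SECTOR_SIZE;          /* readable without chasing macros */
}

unsigned next_sector(unsigned sector)
{
    return sector + _number_1_;             /* strictly worse than writing 1   */
}
```

Reasonable complexity, in other words, is not about having fewer names but about every name paying for itself.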
- Colonel Kernel
~ wrote:
I guess the efforts made to make GNU software, and beyond, readily available as something like public domain are a sort of technological socialism

I've always called it Communism, but I get paid to develop software, so that's my bias.
~ wrote:
If not, well, it will be the same story of having to learn to deal with strict logical thinking at the expense of a big (huge, actually) investment of time and everything else, for the whole cycle, over and over, getting cumulatively more complex.

I don't think the cycle of increasing complexity can really go on forever. It isn't sustainable. At a certain point, developers having to deal with this crazy technology can't be productive anymore. For example, MS is transitioning away from COM towards .NET. COM was terribly unwieldy and it was just about impossible to grasp any large COM-based system (I lived and breathed OLE DB for many years... it sucked). .NET in comparison is much easier to understand. If it ever becomes bloated and over-generalized like Java, then something else will gain favour among developers and replace it.
Top three reasons why my OS project died:
- Too much overtime at work
- Got married
- My brain got stuck in an infinite loop while trying to design the memory manager
Project XANA (my OS) is sort of an attempt at what ~ is saying here. It's based on Project Xanadu, which originated early on, when things were very simple, and progressed until the glory days of cruft and kludgery. Xanadu's goal was something that I very much agree with -- to make things conceptually intuitive, and therefore simple on both sides.
Herein lies the much-forgotten dichotomy: something can be conceptually intuitive (e.g., math) without being humanly intuitive (e.g., English), and vice versa. Computers operate on a conceptual level, not a human level, and therefore things that are conceptually intuitive work very well on computers. Humanly intuitive things tend to work less well on computers, but this is not of much importance, because human intuition is defined by what is learned and absorbed by the humans in question. Humans come into this world knowing very little. We do not know how to speak, but that quickly becomes the basis for all our thoughts. We do not know how to do math, but quickly we begin to favor division over repeatedly subtracting on our fingers. All tools of the past were conceptually simple for the medium in question, and human intuition was not as much a factor.
Essentially, in my view, good design is what is commonly called "cuspiness" or "hackery" -- it is a solution that is not immediately visible to any but the most trained eye, a solution that is conceptually simple, and once understood by one learning it, humanly simple. That is not to say that one should create systems without the human in mind at all (DOS is an example of such a system, in my opinion, as is something like INTERCAL). On the contrary: one must create something that conforms well to the inherent structure of the human mind (think mnemonics, visualizations, arrows, color coding). However, it does NOT have to be simply a clone of everything popular and overdone.
I echo Ted Nelson's sentiment when he said that the current WIMP GUI paradigm is simply a poor simulation of paper, and I echo his sentiment when I say that is a bad thing. Paper is useful, yes, but it is NOT a computer. Nor is a desktop, or a typewriter, or even a 3d world. Computers are limitless. One should appeal in every user interface to the Turing nature of the Machine! By avoiding such (in the name of "user friendliness") you kill the thing inside you, inside the machine, that yearns for the infinite.
~John
P.S.: Sorry if I waxed philosophical (as I am wont), as my passion does not wane for this train of interlocution.