
The Importance of Designing First and On $System Weenies

Posted: Wed Feb 28, 2007 12:05 pm
by Crazed123
The Importance of Designing First
This is a polemic, written because I see many people doing very stupid things.

A majority of posts to this board have been written by relative beginners asking questions about hardware and low-level programming. By itself, this is not a bad thing, but the developers posting these questions seem to give no consideration to how they will fit their new hardware-banging knowledge into a coherent operating system.

Too many people here, though not so many of the "regulars", seem to plunge into operating-system development without giving much real thought to the design of their future operating system. Those who give it some thought usually jump to the conclusion that, even if they do not duplicate the actual design of any Unix variety, they will design their system to implement POSIX. Or DOS. But with real-time scheduling! And messages! And security! And... multi-processing! And distributed computing! And...

And now they've designed a useless system whose desired features won't fit with its desired user-land interface.

I only ask you all: give some thought to designing your operating system before you code! You don't have to create a final design. Most real designs are not final until you've built several layers of libraries and applications on them. But begin your system with one coherent goal in mind, and design everything around that goal.

This means you should not make POSIX the native system-call interface to a micro-kernel. If you must, implement a compatibility layer in user-land, because POSIX simply doesn't suit micro-kernels.
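To make that concrete, here is a minimal sketch of what such a user-land compatibility layer could look like: POSIX read() becomes an ordinary library call that forwards the request to a file-server process over the kernel's IPC primitive, so the kernel itself never has to understand POSIX semantics. Every name here (ipc_call, FS_SERVER, the request/reply structs) is invented purely for illustration and is not any particular kernel's API.

[code]
/* Hypothetical user-land POSIX layer on a micro-kernel: read() is just
 * a message exchange with a file server.  All names are invented. */
#include <stddef.h>
#include <sys/types.h>

struct fs_request {
    int    op;      /* e.g. FS_READ */
    int    fd;      /* file descriptor as the file server knows it */
    size_t len;     /* number of bytes requested */
};

struct fs_reply {
    ssize_t result; /* bytes read, or negative error code */
};

/* Assumed kernel primitive: synchronous request/reply message passing. */
extern int ipc_call(int server, const void *req, size_t req_len,
                    void *reply, size_t reply_len, void *buf, size_t buf_len);

#define FS_SERVER 2
#define FS_READ   1

ssize_t read(int fd, void *buf, size_t count)
{
    struct fs_request req = { FS_READ, fd, count };
    struct fs_reply   rep;

    /* The "system call" is reconstructed entirely in user-land. */
    if (ipc_call(FS_SERVER, &req, sizeof req, &rep, sizeof rep, buf, count) < 0)
        return -1;
    return rep.result;
}
[/code]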

This means you should not plan elaborate IPC facilities for a macrokernel. Aside from networking (which certain distributed systems might call IPC), it isn't actually needed. Instead, concentrate on providing elegant abstractions with which user applications can work.

If you disagree with any of my design recommendations, great! You've actually put the time and thought in to refute my criticisms of what you were going to do. Most people don't even read up on previous designs, much less challenge their own design to themselves.

If your design isn't particularly unique or cool, fine. It doesn't need to be if you put the effort in to make it a coherent design that won't show a million corner-cases to every generalization as soon as you start coding.

On $System Weenies
That was a polemic. This is closer to a rant, but the topics are related.

I hate $System Weenies of all stripes. These are the people who go beyond merely copying system-call interfaces from an existing system. They copy the whole design. Not because they humbly want to learn from it (the reason many copy Unix), but because they think it's the best design under the sun. When someone mentions using an original design, they wonder, "Why would you ever want to do that?"

They also usually have an "enemy system" that they like to hate. For Unix Weenies, it's Windows. For Windows Weenies (the few that exist), it's Linux. For Amiga Weenies, it's everybody. For VMS Weenies, it's Unix. For Andrew Tanenbaum, it's everything but Minix. Make even the slightest criticism of their $System and they will immediately brand you a weenie for their enemy system.

You people know who you are. Hopefully you can be made to understand how little you really know.

This relates to the previous polemic because not designing is the chief sin of $System Weenies. It's OK to like a preexisting system. Nobody's claiming that X or Y wasn't as well-engineered as the Weenies say it was. But all systems have their design flaws, desired features the hardware couldn't support, and kludge-like tradeoffs made for speed. Get over it!

Posted: Wed Feb 28, 2007 1:04 pm
by salil_bhagurkar
Well said!

Posted: Wed Feb 28, 2007 2:01 pm
by Crazed123
Your post history reveals no actual design to whatever you've been doing. Do you have one?

Posted: Wed Feb 28, 2007 3:48 pm
by os64dev
Bla, bla, bla :wink: . Not everyone is capable of designing it correctly and giving it much thought; then again, it is also not in the interest of many people to do so. Myself, I'd like to learn the inner workings of the x86, specifically the 64-bit long mode stuff, and osdev is the area where you have the most control over the process. Who cares about design at this stage? Just plunge in and get knowledgeable about the platform, and if in the end I am still interested in OS devving I'll start over and do a re-design with the improved knowledge. At that point your remarks are valid.

Posted: Wed Feb 28, 2007 4:09 pm
by Colonel Kernel
Crazed123 wrote:Your post history reveals no actual design to whatever you've been doing. Do you have one?
Heh... Are you the design police now? :)

I also agree with your polemic/rant, especially when it comes to $System weenies talking down new or interesting design ideas just because of the source (*cough* Singularity *cough*). I'm generally disappointed with the lack of maturity that tends to ruin many otherwise interesting threads in short order...

Posted: Wed Feb 28, 2007 4:14 pm
by Crazed123
Colonel Kernel wrote:
Crazed123 wrote:Your post history reveals no actual design to whatever you've been doing. Do you have one?
Heh... Are you the design police now? :)
Naw, but it did kind of seem like he wrote "well said" without actually reading.

Posted: Wed Feb 28, 2007 5:05 pm
by GLneo
*wipes tear from eye* =D>, not really but it's good for a rant...

[edit]seems there's a flame war...[/edit]

Posted: Thu Mar 01, 2007 12:42 am
by distantvoices
huh?

wuzzgoinon?

I daresay that's a bold but true statement which you've posted here, crazed123. Most of the people here who are strolling on and off, ranting about what they wanna do all the while asking for hardware nitty gritty, don't seem to have an accurate understanding of the meaning of "design, develop, implement" - well, they'd even stumble over things like "facade pattern", "singleton pattern" or "composite pattern"... whatever, it all runs under "design patterns". I reckon software development knowledge is a prerequisite for os development. You canna pull the osdev stunt without proper knowledge about software development.

I have pulled that stunt. Well. My business is software development, after all, and I'm studying for a master's degree at a university of applied science. In short, I know my stuff.

Posted: Thu Mar 01, 2007 1:05 am
by Solar
I also agree with most of what you said. Luckily I'm the one with the "every OS sucks" blues, or I might have felt addressed by the "Amiga Weenie" part. :twisted:

As for criticizing "original designs"... I tend to do that when I get the impression that the person who wrote that "original design" did so because he couldn't be bothered with reading a bit about the existing ones. Some "original designs" here come from people who did similar stuff before and are known for being able to read technical docs. But at the same time, there are "original designs" coming from people who wrote their "Hello World" four months ago...

When I post a design concept here, I want people to pry it apart, because it gives me valuable feedback on weaknesses I might have overlooked. I consider it a trait of experienced developers not to take personal offense when an idea of theirs is shot down.

Posted: Thu Mar 01, 2007 3:40 am
by Brendan
Hi,

I think there's a generic problem with the "design then implement" approach - it's extremely difficult to create a good design without experience, and extremely difficult to get experience without doing implementation first.

IMHO a better approach is "research, design, implement, analyze, retry", where the design itself is refined over several iterations, and certain iterations are considered research. For example, there's nothing wrong with doing half a kernel just to get more experience with SMP, doing a linear memory manager and nothing else, or creating some code to compare hardware task switching and software task switching to see which works better for you, and then changing your design according to what you learnt doing it.

The thing is that "research, design, implement, analyze, retry" mostly occurs naturally. The newest OS developers will learn enough from their first OS to do a better design for their second OS, without realising it beforehand.

For this reason, I think it's important not to discourage new OS developers - they are gaining valuable experience, just like everyone else....
Solar wrote:When I post a design concept here, I want people to pry it apart, because it gives me valuable feedback on weaknesses I might have overlooked. I consider it a trait of experienced developers not to take personal offense when an idea of theirs is shot down.
I want people to pick apart my concepts too. For replying to other people's posts I try to judge how much experience they already have and decide how much to pick apart their design based on this. For example, I'll pick apart posts by Solar or Candy as much as I can (which isn't much usually), while using some restraint for newer developers to avoid overly discouraging them. If someone feels I've picked apart their post too much, then it's because I assumed they have more experience than they do - it's not a reason to be upset.... ;)


Cheers,

Brendan

Posted: Thu Mar 01, 2007 9:04 am
by Crazed123
Brendan wrote: IMHO a better approach is "research, design, implement, analyze, retry", where the design itself is refined over several iterations, and certain iterations are considered research. For example, there's nothing wrong with doing half a kernel just to get more experience with SMP, doing a linear memory manager and nothing else, or creating some code to compare hardware task switching and software task switching to see which works better for you, and then changing your design according to what you learnt doing it.
I agree with the principle but disagree with the application. IMVHO, that way of doing things is better suited to subsystems of operating systems rather than entire operating systems. Would you want to do an entire working kernel kludged together with no design as "research", only to go back and rewrite from scratch using what you learned?

Posted: Thu Mar 01, 2007 9:16 am
by Combuster
Well, I do, and I did. This being the fourth time.

The first time I got stuck in real mode and learned not to mess around with interrupts and such.
The second time I found myself hardcoding page tables with no way of fixing it.
The third time I wasn't careful enough and ran into a chicken-and-egg problem with my memory manager, which I then had to rewrite into what became the fourth attempt.

After some successes I found myself straying from my principles with lots of chrome and spinning cubes (\:D/) and decided to start over again, this time fixing the whole memory issue once and for all, as well as adding address-space support. That was two months ago.

In the meantime I've put up a more thorough CPU detection algorithm, kicked the second half of my multiprocessor system into slavery, and ended up with a thread about processor speed on which I still didn't get any useful answers.

I do design things in advance, but even there: practice makes perfect 8)

Posted: Thu Mar 01, 2007 10:34 am
by salil_bhagurkar
Even if I may not have a design, I appreciated whatever you said... That's the reason I said 'well said'... No sarcasm intended... And I don't go on replying to posts that don't interest me...

Posted: Thu Mar 01, 2007 6:35 pm
by Brendan
Hi,
Crazed123 wrote:I agree with the principle but disagree with the application. IMVHO, that way of doing things is better suited to subsystems of operating systems rather than entire operating systems. Would you want to do an entire working kernel kludged together with no design as "research", only to go back and rewrite from scratch using what you learned?
I have - both unintentionally and intentionally.

For example, I thought my last kernel would be my final kernel, but in the end it just wasn't flexible enough. This was the first kernel I wrote that supported SMP (and hyper-threading and NUMA), and I learnt that all the re-entrancy locking, load balancing, performance tweaks, etc. just add too much complexity to a micro-kernel. Basically there were too many dependencies between different parts, and this made it extremely hard to make major changes to pieces of the kernel (i.e. it was unmaintainable in the long term).

After I realised that kernel wasn't going to be my final kernel, I used the code as the basis for research into emulation. The idea here was to find out about the performance issues involved with building an emulator where different processes emulate different CPUs in the same virtual computer. The performance is very dependent on IPC due to the interactions between virtual CPUs.

I learnt a lot about emulators doing this (both interpreting, like Bochs, and dynamic translation), and also learnt that the way I was handling message buffers sucked - the kernel was checking 32 MB of "message buffer space" for pages that could be freed every time a message was sent or received, which added a relatively large amount of overhead to the IPC. I optimised it a little (so it kept track of the number of pages mapped into the message buffer space at all times), which helped a lot, but the main problem was that the message buffers were just too large. I also found that having a pair of message buffers would make writing applications, etc. easier, as you could keep one message in one buffer while you sent/received other messages using the other buffer.
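As a rough illustration of that optimisation (not the actual code): keep a running count of pages mapped into the message-buffer area, have the page-fault handler bump it whenever it maps a page into the region, and skip the full scan whenever the count is zero. All addresses and helper functions below are hypothetical.

[code]
/* Hypothetical sketch: track how many pages are mapped into the message
 * buffer area so the per-message cleanup can skip scanning the whole
 * 32 MB region when nothing is mapped. */
#include <stdint.h>

#define PAGE_SIZE        4096u
#define MSG_BUFFER_BASE  0xE0000000u            /* invented address   */
#define MSG_BUFFER_SIZE  (32u * 1024 * 1024)    /* 32 MB buffer area  */

static unsigned int mapped_pages;               /* pages currently mapped */

/* Assumed helpers from the paging code. */
extern int  page_is_mapped(uintptr_t vaddr);
extern void page_unmap(uintptr_t vaddr);

/* Called by the page fault handler when it maps a page into the area. */
void msgbuf_page_mapped(void)
{
    mapped_pages++;
}

/* Called after a message has been sent or received. */
void msgbuf_free_pages(void)
{
    uintptr_t vaddr;

    if (mapped_pages == 0)
        return;             /* fast path: nothing to scan, nothing to free */

    for (vaddr = MSG_BUFFER_BASE;
         vaddr < MSG_BUFFER_BASE + MSG_BUFFER_SIZE && mapped_pages > 0;
         vaddr += PAGE_SIZE) {
        if (page_is_mapped(vaddr)) {
            page_unmap(vaddr);
            mapped_pages--;
        }
    }
}
[/code]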

After this I did some disposable code - partly as an experiment into how hard it is to build an extremely modular OS, but also to get some experience with long mode. It turns out that extremely modular is about twice as hard (but IMHO it's definitely worth it due to the huge amount of flexibility it creates).

My current kernel is intended as my final kernel. With some luck it actually might be... :)

I will say one thing though - when I do complete my final kernel (whether it's the current kernel or a later one) it'll make Bill, Steve and Linus cry. :D


Cheers,

Brendan

Posted: Thu Mar 08, 2007 12:16 pm
by mystran
The problem with doing an original design is of course that it typically takes quite a bit of time to get it very far. :D

I am currently in the process of writing a native x86 compiler for a Lisp dialect similar to Scheme. I intend to first bootstrap it normally, then make it compile JIT, then transform it such that it can "bootstrap on the fly", effectively replacing more or less all of the system, including any runtime, on the fly (that is, without restart, relying on the garbage collector to get rid of old code as well as data). At that point I intend to move it to work on top of the bare machine.

My current iteration of the compiler (written in Scheme) is able to compile only trivial programs, but the resulting code is able to handle much of the intended semantics with a really minimal runtime. It doesn't assemble on-the-fly yet, requiring the GNU toolchain for assembly and linking, but the way the system is designed, it will be much easier to fix those things once it's been bootstrapped.

It is also able to avoid heap allocation as long as no "large" (non-immediate) objects are created, closure creation and receiving variable arguments in "rest" lists currently being the only things that implicitly allocate memory. Notably, none of the following requires a heap to even be present:

- variable arguments into optional parameters
- multiple return values
- fully general tail-recursion

In reality the current system can't be started without a heap of some sort, but it is possible to write non-trivial code that can survive without allocations after startup. This in turn is a useful property to have in order to implement low-level stuff like the garbage collector in the language itself.

The next thing to add to the compiler will be separate compilation of libraries, so that I can start implementing the rest of the runtime system in the language itself... including said garbage collector...

But why?

During my last iteration of OS development, I came to several conclusions:
- the proper way to do what I want, is to use event-driven programming with asynchronous messaging
- programming on top of truly event-driven system with something like C is next to impossible for normal human beings
- if I'm going to need a more suitable language anyway, why constrain the system by using a kernel written in something like C?
- if the only unsafe features of said high-level language are a set of unsafe primitives, and if said high-level language is always compiled from either source-code or some sort of intermediate bytecode, then it's almost trivial to establish system security (as long as we assume or prove the final-phase compiler correct) without hardware mechanisms
- there is no reason why said hardware mechanisms couldn't be used to run untrusted native code in a safe way, while the rest of the system enjoys the benefits of language-based security (!!!)

While some of this stuff might not be "original" in the sense that much of it has been done in one form or another on some system or another, it is original enough that no one should expect anything really working anytime soon. :D

But at least I'm progressing...