Candy wrote:
- Educate your users before allowing them to use the system. Even though it's very hard to push through, educating users (or at least allowing them to be educated without spending another load of cash) will increase the usability of your system, without people installing all sorts of weird software that only claims to protect you (consider the number of people who install virus scanners, firewalls, privacy tools etc. only to assume that they are automatically used and updated...).
While I generally can't fault anyone for recommending more education, in this case I find it misplaced. The current systems are so bad that users are already forced to learn far more about them than they need to know about, say, their cars, just to use them on the most basic level. 'Computer literacy' is often just a code phrase for 'forcing them to do it a certain way', and in practice is an excuse for rotten system design.
- Do not publish your system unless you test it thoroughly. Disable compiling of programs using buggy functions (yes, you too, POSIX/ISO!) such as strcpy and strcat.
I can't disagree with this part.
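To make it concrete, here is a minimal sketch (the buffer size and the snprintf alternative are my own illustration, not anything from Candy's post) of why the unbounded calls are dangerous and what a bounded replacement looks like:

    // Sketch: why strcpy/strcat are considered buggy by design.
    #include <stdio.h>
    #include <string.h>

    void risky(const char *input) {
        char buf[16];
        // strcpy copies until it hits a '\0' in input; if input is longer than
        // 15 characters the write runs past the end of buf -- the classic overflow.
        strcpy(buf, input);
        // strcat has the same flaw: it appends with no idea how large buf is.
        strcat(buf, "!");
        printf("%s\n", buf);
    }

    void safer(const char *input) {
        char buf[16];
        // snprintf never writes more than sizeof buf bytes and always
        // NUL-terminates, so the worst case is truncation, not corruption.
        snprintf(buf, sizeof buf, "%s!", input);
        printf("%s\n", buf);
    }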
Encourage use of a type-safe language (Java, VB).
Type-safety, in the usual sense of static typing, is considerably overrated as a debugging tool, and often requires decidedly unsafe workarounds to accomplish certain goals. I would consider languages like Scheme, Smalltalk, or Python - all of which are dynamically typed, type being a property of the actual objects rather than of the variables they are bound to - to be more secure than either VB or Java, for example. Note that Smalltalk is type-safe in the sense of only allowing operations on objects that are specified in the class definition, even for polymorphic objects. Methods are bound to the objects at runtime, and only methods which are part of the object's actual class will run.
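As a rough sketch of the kind of workaround I have in mind (the names are invented for the example): code that has to treat unrelated types uniformly in a statically typed language tends to fall back on untyped pointers and casts that the compiler can no longer check, whereas in Scheme, Smalltalk, or Python the object carries its own type and a bad operation fails with a defined error at the call site.

    // Sketch of a common static-typing escape hatch: a "generic" node
    // that erases the type, leaving correctness entirely up to the programmer.
    #include <stdio.h>

    struct Node {
        void *payload;   // "any type" -- the static checker no longer helps here
        Node *next;
    };

    int main() {
        int answer = 42;
        Node n = { &answer, nullptr };

        // The cast back is taken on faith: name the wrong type here and the
        // compiler still accepts it; the error only appears at run time, if at all.
        int *p = static_cast<int *>(n.payload);
        printf("%d\n", *p);
        return 0;
    }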
If not possible, encourage use of a language that allows a programmer to shield himself once, after which s/he is permanently shielded (C++).
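A rough sketch of what that 'shield yourself once' idea looks like in practice (my own example, using std::string as the shield): the unchecked buffer handling is written exactly once, behind a class invariant, and every caller gets the safety for free.

    // Sketch: bounds management lives in one place (std::string),
    // so the calling code never touches a fixed-size buffer at all.
    #include <stdio.h>
    #include <string>

    std::string greet(const std::string &name) {
        std::string out = "Hello, ";
        out += name;   // the string grows as needed; bounds are handled internally
        out += "!";
        return out;
    }

    int main() {
        // Arbitrarily long input is handled the same way as short input.
        std::string who(1000, 'x');
        printf("%zu characters, no overflow\n", greet(who).size());
        return 0;
    }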
- Design your system according to a new design. Both of the old traditional designs are provably bad (Windows because of the API, Linux because it still supports the same flaws that have been in the stdlib for over 30 years now). Consider the implications any of your choices might have on any other function or use. Do not consider your OS designed when you've thought about it for 3 weeks or something similar. Your design isn't tested until you've tested your implementation and found it to be good.
The obvious counter-argument is that old designs are stable ones whose strengths and weaknesses are well-known, while a truly new OS design would have to be tested and analyzed from scratch. The weaknesses of this argument should be obvious, however: while the designs are old, the major implementations are all relatively young and are undergoing constant modification.
- Design your OS on a good basis. Do not use a filesystem such as FAT or an operating base of BIOS functions, if only because they are very buggy and you cannot be sure that they actually work correctly. Use only code that you can debug. Things you cannot debug can only cause you trouble.
Again, I have to agree wholeheartedly with this.
- You do want your OS to be widespread and available, but not as the "main" OS. Just keep normal competition levels (as soon as they're restored in this market segment) so all OSes get an equal share of users.
Actually, the pattern of a market dominated by one system, with a handful of less common ones, is one which recurs in the computer field on all levels (e.g., almost all word-processing is done with MS Word; nearly all C programming is done using either VC++ or gcc), and seems to be endemic to the market. It even appears among open-source users (how much of a market share do NetBSD and SkyOS have?). As ESR and others have said, market domination by Microsoft (and before them, IBM) is a symptom of the problem, not the problem itself.