Atlantis Project
Posted: Wed Dec 20, 2006 6:34 pm
Since I can't sleep and this board doesn't have a post size limit... I don't expect anybody to read this, but if you do, please drop me a line with a comment.
The Atlantis Project.
Introduction
There is too much bad programming in this world. Most software today is at least 80% based on existing software - and that includes the "all-new version" and "rewritten" projects. Open source seems to do this less than corporate software, but still, loads of stuff is running on old code.
Using old code is not a bad thing per se. If it does what it should do, and does it well, it is a good and sound decision. Older code has a number of advantages: it was tested a long time ago, it has more real-world users and it actually runs in an existing product. It also has a number of disadvantages: it is written against an older architecture that was set up to get it to work, not so much to make it efficient, fast or reliable. In some cases it is written in older or esoteric languages that no longer have software support, and in some cases not even hardware support exists. The problem and the blessing of old code is that it works. A working product should not be replaced by something that does not work as well, which makes a lot of people hesitant to remove and replace parts of existing systems. This leads to a long series of products that do pretty much the same thing, each with slightly fewer bugs and a lot more code. Since adding code around errors is easier to test than rewriting, it's cheaper.
Older code is also written by less experienced programmers. They wrote code without regard for testability, stability, future-proofing or anything else. These things were written back then with only one goal in mind - to get it to work. As soon as it works and doesn't appear to have any bugs, freeze the code and never touch it again. Less experienced programmers tend to prefer lots of simple code over a bit of advanced code. This has its reasons: simple code can be maintained by any fool; you can't make errors in advanced bits of code that show up everywhere, since there is no advanced code; and years of experience with the actual layout of the code teach you the flaws and pitfalls that you learn to implicitly avoid. It also sustains the current concept of seniority: people who have worked with the code for years are more valuable, not just because they happen to know what's where, but because they're familiar with the pitfalls and flaws that you don't just see.
A more advanced design at the lowest level, in more than one way, leads to an eventually simpler and thereby cheaper product. A few examples of the problem I'm trying to illustrate:
NTSC video works at 29.97 frames per second (rounded; the exact rate is 30000/1001). This is to compensate for a problem that engineers faced when color was added in the 1950s. The solution seemed to work best at the time it was made. People do not dare to change this to 30.00 in newer systems that have no real reason to run at 29.97 frames, since it might break compatibility with existing devices. The NTSC standard itself requires that you be just that bit below 30.00 frames. Nobody had the balls to step up and say it should be changed. New HDTV systems work at 59.94 frames per second. They're fully digital and cannot suffer from the problem that caused the frame rate. Purely for compatibility, the 59.94 (29.97 * 2) rate is kept. Carrying this quirk into new systems means keeping track of the exact offset compared to the 29.97 clock and adjusting it every so often, so that the clock appears correct when compared to the oldest systems. An oddity that once had a reason is now kept as a standard because nobody dares challenge it.
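To make that bookkeeping concrete, here is a minimal sketch in plain C - nothing device-specific, just the arithmetic: how far a naive counter that assumes exactly 30.00 frames per second drifts from the real 30000/1001 clock over an hour, and how many frame numbers SMPTE drop-frame timecode skips to paper over the difference.

```c
/* Sketch of the 29.97 bookkeeping described above: drift between a nominal
 * 30.00 fps counter and the real NTSC clock (30000/1001 fps), and the number
 * of frame numbers that drop-frame timecode skips per hour to hide it. */
#include <stdio.h>

int main(void)
{
    const double ntsc_fps    = 30000.0 / 1001.0;  /* ~29.97002997 */
    const double nominal_fps = 30.0;
    const double seconds     = 3600.0;            /* one hour */

    double real_frames    = ntsc_fps * seconds;     /* frames actually shown */
    double counted_frames = nominal_fps * seconds;  /* frames a 30.00 counter claims */
    double drift_frames   = counted_frames - real_frames;

    printf("real frames in an hour   : %.3f\n", real_frames);      /* ~107892.108 */
    printf("counted at 30.00 fps     : %.0f\n", counted_frames);   /* 108000 */
    printf("drift after one hour     : %.3f frames (%.3f s)\n",
           drift_frames, drift_frames / nominal_fps);              /* ~3.6 seconds */

    /* SMPTE drop-frame timecode hides this by skipping frame numbers 0 and 1
     * at the start of every minute, except minutes divisible by 10:
     * 2 * (60 - 6) = 108 skipped numbers per hour, close to the ~107.9 drift. */
    int skipped_per_hour = 2 * (60 - 6);
    printf("numbers dropped per hour : %d\n", skipped_per_hour);
    return 0;
}
```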
The x86 processor series traces back to a pretty nice 4-bit microprocessor, the Intel 4004. It was extended to 8-bit and then 16-bit fairly cleanly, since the processor wasn't widely used yet and didn't have to compete with itself - each version was clearly better than the previous one. The 16-bit version got a load of public attention, and that's when the sh*t happened. When they wanted to make a 32-bit one, they didn't dare break compatibility - what if people wouldn't adopt it? They'd lose a load of money... Keeping the old behaviour as the default mode would work better; that would keep everybody following the product. When they were nearing the 64-bit barrier, Intel decided to ignore the rest of the world and try the bold way: a genuinely new architecture (Itanium) that removed all the crap that had snuck into this one. The new product flopped, since word got around that it was slow at emulating the old one. It could not come close to outperforming the old one on the old one's turf, so nobody dared touch it on its own turf. I've read through the architecture - it's so much better that if they offered me the chance to use one of those machines as a platform I'd switch without thinking. Instead, the AMD64 architecture prevailed, since it was fully compatible. Thanks to AMD64 we're stuck with a computer that disables an address line (the A20 gate), has two types of address space with completely incompatible instructions, variable instruction lengths with odd decodings, complex logic on the chip to decode this crap into normal instructions, and a load of legacy software that has never run reliably.
Windows 2.0 was a nice product, insofar as people actually started to use it. To make a third long story short, the Windows 2.0 API is still pretty much available in Windows XP and still making things terrible there. I'm going to bet $5 on it being exactly the same in Windows Vista - you wouldn't want to break compatibility, now would you?
The problem is that all of these things need to cooperate somehow. You can't fix a problem in an interface between two components if you can't fix the other side of the interface - and especially, all instances of the interface. The more users you have, the less likely you are to change the interface. Right now, TV is TV and there are very, very few countries or people looking into switching to another TV standard. When TV was just coming up, some TV standards took only a few months (!) to be phased in and out. Right now, just hinting at "hey, let's switch all transmission to HD" is very likely to get an angry comment from half the country you live in, plus about a dozen countries around it if you're in Europe. You can't switch the interface. You cannot even improve on it.
That's not a problem, now is it? If you just get the interface right the first time, you're all set. But people work on products to get them to work, and as soon as they work somewhat, people accept that as a standard. That means that unless products are created using engineering methods only (and I'm calling them engineering methods; this has no relation to actually being an engineer), you're going to get crap interfaces and a load of bugs and associated hacks. And the result works well enough for so many things that nobody considers changing it.
I'm setting up the Atlantis Project to stop this idiocy once and for all. It looks overly ambitious at first, but if you think about it, it's not impossible.
The project.
The Atlantis Project intends to redefine the interfaces commonly used between devices to something at a current level. All of our currently used interfaces date from somewhere between 1960 and 1980, with hacks added to make them appear current (your USB stick speaks SCSI commands (1979), your hard disk interface descends from ST-506 (1980), your operating system is based on QDOS (1980) or Multics (1965)). All of these technologies have additions that keep them current and running at now-normal speeds. They are all designed around reusing existing code, which makes their interfaces very irksome to use.
The project is about redefining the basics of the computer:
- The processor
- The operating system
- The language basis that is used for development
- The user interface
- The methods of programming
- The user experience.
What we see around us are old beasts that require teams of between ten and thousands of people to be kept up to date. In most cases this is not because of the inherent complexity of the systems; it's because the systems require interaction with things you can't define. Making an operating system can be done by one person in about a year, full-time. Getting the interfaces entirely correct will take him about a year more, making it two years. The problem is that you can't communicate with any existing product.
Developing an x86 OS takes a lot of time, not because the OS is that hard to make but because the system is not intended for running an operating system. The system is intended, first and foremost, for running 16-bit applications on an OS that half the people reading this can only reminisce about. Before you even get to the OS proper, months of development have passed - not only is the interface not intended for OS development, it's actively aimed at being compatible with existing implementations and nothing else. If the existing implementations work with the new devices or BIOSes, nobody cares whether they will work with your OS loader or your driver. It might be a standard, official or not, but people will not test against it. Making a driver for all types of hardware on x86 is nigh impossible; the interfaces are neither testable nor defined.
There are devices, applications and companies that do conform to standards that have been set up more recently. These include Adobe, who use GIL (the Generic Image Library) for processing images and Boost for a number of internal operations. That includes Microsoft (yes, even them), who offer a clean interface called .NET for developing applications, which promotes not tying them to a given hardware or software interface. That includes SanDisk, who co-created the SD card standard, which does not rely on emulating a hard disk controller that's decades old.
There are also devices, applications and companies that will not convert and/or conform to a new standard. These are the companies that are too far entrenched between other companies to change anything. This includes Microsoft (yes, it's on both lists; it's that big), in that it can't kick out FAT support, and most of the storage companies, which don't kick out SCSI or ATA but instead add yet another layer of required supporting protocol.
The OS will not be changed by these people, and the processor will not be changed. Moreover, Microsoft, the only heavyweight that could actually change something substantial, will not change the way applications are linked and supported (mostly because it's their main source of income). For actual development on a groundbreaking level, we're stuck in a deadlock.
I'm drifting off again.
What I do intend to change:
- The processor interface should be optimized for running instructions, and as many of them as it can cope with. The processor should not have to perform tasks that can reasonably be transferred to software.
- The operating system should offer a small, lightweight interface that allows userland applications to access devices according to operating modes that are relevant to them. This is pretty hard to differentiate from Unix, but the main point of difference would be that the ioctl function would not be allowed to exist as such (see the first sketch after this list). There are a number of minor changes too.
- Traditional libraries should provide code-base support (making it easier for you to write code). Libraries should not include conversion functions of any kind.
- Applications should be written as a user interface separate from the actual processing, as far as the two can be abstracted from each other. You should be able to write a user interface for a product whose full function you don't understand. You should, most of all, be able to choose your UI.
- Processing should be defined as a number of functions with descriptions of what they do, in such a way that a library can load a function, or a predefined combination of functions, based on its description (see the second sketch after this list). The input types, change types and output types of an application should not lie with the application (what the programmer thought of) but with the user (what the user has installed software for).
- User interface design must be tied down. It's great to see an interface that blinks on all sides and looks purple from this angle and blue from that angle, but in the end you're not on the computer for its own sake. You're on the computer to do something. The user interface can look blue and yellow, but if the OK button is always in the same place you can do your job faster. The UI should be developed to let the user work the way the user desires, not the way the programmers or developers of an application thought somebody might use it.
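First sketch, for the ioctl point: a hypothetical example in C. None of these types or functions exist in any real OS; they're made up purely to show what "device access through typed operating modes, no catch-all ioctl" could look like, with real signatures the compiler can check instead of a request number plus a void pointer.

```c
/* Hypothetical only: a device advertising a small, fixed set of typed
 * operations instead of ioctl(fd, MAGIC_REQUEST, void *whatever). */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef enum { BLOCKDEV_MODE_READONLY, BLOCKDEV_MODE_READWRITE } blockdev_mode;

typedef struct blockdev blockdev;
struct blockdev {
    /* Every operation has a concrete, checkable signature. */
    int (*read_blocks)(blockdev *dev, uint64_t first, size_t count, void *out);
    int (*write_blocks)(blockdev *dev, uint64_t first, size_t count, const void *in);
    int (*set_mode)(blockdev *dev, blockdev_mode mode);
    uint64_t block_count;
    uint32_t block_size;
    void *driver_state;
};

/* A toy in-memory "driver" so the sketch actually runs. */
static int ram_read(blockdev *dev, uint64_t first, size_t count, void *out) {
    (void)dev; (void)first; (void)count; (void)out; return 0;
}
static int ram_set_mode(blockdev *dev, blockdev_mode mode) {
    (void)dev; printf("mode set to %d\n", (int)mode); return 0;
}

int main(void) {
    blockdev dev = { ram_read, NULL, ram_set_mode, 1024, 512, NULL };
    dev.set_mode(&dev, BLOCKDEV_MODE_READONLY);   /* typed call, no magic numbers */
    return 0;
}
```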
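Second sketch, for the "processing as described functions" point, again with made-up names: a tiny registry where each function carries a description of its input and output types, and the caller picks one by description instead of calling it by name. In the real idea the table would be filled from whatever software the user has installed, not compiled into the application.

```c
/* Hypothetical only: selecting a processing function by what it does
 * (its declared input/output types), not by a hard-coded function name. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *input_type;    /* e.g. "image/png"  */
    const char *output_type;   /* e.g. "image/jpeg" */
    int (*run)(const void *in, void *out);
} processing_fn;

static int png_to_jpeg_stub(const void *in, void *out) {
    (void)in; (void)out;
    puts("pretend we converted a PNG to a JPEG");
    return 0;
}

/* In practice this table would come from the user's installed software. */
static processing_fn registry[] = {
    { "image/png", "image/jpeg", png_to_jpeg_stub },
};

static processing_fn *find_processor(const char *in, const char *out) {
    for (size_t i = 0; i < sizeof registry / sizeof registry[0]; i++)
        if (!strcmp(registry[i].input_type, in) && !strcmp(registry[i].output_type, out))
            return &registry[i];
    return NULL;
}

int main(void) {
    processing_fn *fn = find_processor("image/png", "image/jpeg");
    if (fn)
        fn->run(NULL, NULL);   /* the application never named the function */
    return 0;
}
```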
To be continued; I think I'm going to give sleeping another shot.