Atlantis Project
Since I can't sleep and this board doesn't have a post size limit... I don't expect anybody to read this, but if you do, please drop me a line with your comments.
The Atlantis Project.
Introduction
There is too much bad programming in this world. Most current software is based, for at least 80%, on existing software - and that includes the "all-new version" and "rewritten" projects. Open source seems to do this less than corporate software, but still, loads of stuff is built on old code.
Using old code is not a bad thing per se. If it does what it should do well, reusing it is a good and sound decision. Older code has a number of advantages: people tested it a long time ago, it has more real-world users and it actually runs in an existing product. It also has a number of disadvantages: it was written on an older architecture, with the goal of getting it to work rather than making it efficient, fast or reliable. In some cases it is written in older or esoteric languages that no longer have software support; in some cases not even hardware support exists. The problem and the blessing with old code is that it works. A working product should not be replaced by something that does not work as well, which makes a lot of people hesitant to remove and replace parts of existing systems. This leads to a long series of products that do pretty much the same thing, each with slightly fewer bugs and a lot more code. Since adding code to work around errors is easier to test than rewriting, it's cheaper.
Older code was also written by less experienced programmers. They wrote code without regard for testability, stability, future-proofing or anything else. These things were written back then with only one goal in mind - to get them to work. As soon as it works and doesn't appear to have any bugs, freeze the code and never touch it again. Less experienced programmers also tend to prefer lots of simple code over a bit of advanced code. This has its reasons: simple code can be maintained by any fool; you can't make errors in advanced bits of code that show up everywhere, since there is no advanced code; and years of experience with the actual layout of the code teach you the flaws and pitfalls, which you learn to avoid implicitly. It also props up the current concept of seniority: people who have worked with the code for years are more valuable, not just because they happen to know what's where, but because they're familiar with the pitfalls and flaws that you don't just see.
A more advanced design at the lowest level leads, in more than one way, to an eventually simpler and thereby cheaper product. A few examples of the problem I'm trying to illustrate:
NTSC video works at 29.97 frames per second (30000/1001, rounded). This compensates for a problem that engineers faced in the 1950s, and the solution seemed to work best at the time it was made. People do not dare change this to 30.00 in newer systems that have no real reason to work at 29.97 frames, since it might break compatibility with existing devices. The NTSC standard itself requires that you be just that bit below 30.00 frames per second. Nobody had the balls to step up and say it should be changed. New HDTV systems in NTSC countries work at 59.94 frames per second. They're fully digital and cannot suffer from the problem that caused the frame rate. Purely for compatibility, the 59.94 (29.97 * 2) rate is kept. Adding this quirk to new systems means keeping track of the exact offset compared to the 29.97 clock and adjusting it every so often, so that the clock appears correct when compared to the oldest systems. An oddity that once had a reason is now kept as a standard because nobody dares challenge it.
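To make that "keeping track of the exact offset" bookkeeping concrete, here's a small back-of-the-envelope sketch (C++, purely illustrative, not taken from any real video system; the figures are the standard drop-frame numbers) of how far a naive 30 fps frame counter drifts from the true 30000/1001 clock, and how the usual drop-frame correction cancels it:

```cpp
// A nominal 30 fps frame counter drifts against the true 30000/1001 clock;
// the classic fix (SMPTE drop-frame timecode) skips two frame numbers every
// minute except every tenth minute. Illustration only.
#include <cstdio>

int main() {
    const double true_fps = 30000.0 / 1001.0;   // ~29.97003 frames per second
    const double seconds  = 3600.0;             // one hour of video

    double real_frames    = true_fps * seconds; // frames actually displayed
    double nominal_frames = 30.0 * seconds;     // what a 30 fps label counter says
    std::printf("drift after one hour: %.1f frame numbers\n",
                nominal_frames - real_frames);  // ~107.9

    // Drop-frame correction: drop 2 frame numbers per minute,
    // except in minutes divisible by 10 -> 2 * 54 = 108 per hour.
    int dropped = 0;
    for (int minute = 1; minute <= 60; ++minute)
        if (minute % 10 != 0)
            dropped += 2;
    std::printf("frame numbers dropped per hour: %d\n", dropped); // 108
    return 0;
}
```

That whole dance exists only so new, fully digital gear can pretend to agree with a 1950s clock.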
The x86 processor series started off as a pretty nice 4-bit microprocessor. It was extended to 8-bit and 16-bit fairly thoroughly, since the processor wasn't widely used yet and didn't need to compete with itself - it was clearly better than using the older type. The 16-bit version got a load of public attention, and that's when the sh*t happened. When they wanted to make a 32-bit one, they didn't dare break compatibility - what if people wouldn't adopt it? They'd make a huge loss... Keeping the old product as the default mode seemed safer; that would keep everybody following the product line. When they were nearing the 64-bit barrier, Intel decided to ignore the world for once and try the bold way: a genuinely new architecture that removed all the crap that had snuck into this one. The new product flopped - the word was that it was slow at emulating the old one. It could not come close to outperforming the old one on the old one's turf, so nobody dared touch it on its own turf. I've read through the architecture - it's so much better that if they offered me one of those machines as a platform I'd switch without a second thought. Instead, the AMD64 architecture prevailed - since it was fully compatible. Thanks to AMD64 we're stuck with a computer that disables an address line, has two types of address space with completely incompatible instructions, variable instruction lengths with odd decodings, complex logic on the chip to decode this crap into normal internal instructions, and a load of legacy software that has never run reliably.
Windows 2.0 was a nice product, insofar as people actually started to use it. To make a third long story short, the Windows 2.0 API is still pretty much available and still making things terrible in Windows XP. I'll bet $5 on it being exactly the same in Windows Vista - you wouldn't want to break compatibility, now would you?
The problem is that all of these things need to cooperate somehow. You can't fix a problem in an interface between two components if you can't fix the other side of the interface - and, especially, all instances of the interface. The more users you have, the less likely you are to change the interface. Right now, TV is TV, and there are very few countries or people looking into switching to another TV standard. When TV was just coming up, some TV standards took only a few months (!) to be phased in and out. Right now, just hinting at "hey, let's switch all transmission to HD" is very likely to get an angry reaction from half the country you live in, plus about a dozen countries around it if you're in Europe. You can't switch the interface. You cannot even improve on it.
That's not a problem, now is it? If you just get the interface right the first time, you're all set. But people work on products to get them to work, and as soon as they work somewhat, people accept that as a standard. That means that unless products are created using engineering methods only (I'm calling them engineering methods; this has no relation to actually being an engineer), you're going to get crap interfaces and a load of bugs and associated hacks. It's good enough for so many things that nobody considers changing it.
I'm setting up the Atlantis Project to stop this idiocy once and for all. It looks overly ambitious at first, but if you think about it, it's not impossible.
The project.
The Atlantis Project intends to redefine the interfaces commonly used between devices, bringing them up to a current level. All the interfaces we use today date from somewhere between 1960 and 1980, with hacks added to make them appear current (your USB stick uses SCSI commands (1979), your hard disk descends from the Seagate ST-506 interface (1980), your operating system is based on QDOS (1980) or Multics (1965)). All of these technologies have things bolted on to keep them current and running at now-normal speeds. They are all designed around reusing existing code, which makes their interfaces very irksome to use.
The project is about redefining the basics of the computer:
- The processor
- The operating system
- The language basis that is used for development
- The user interface
- The methods of programming
- The user experience.
What we see around us are old beasts that require teams of anywhere from ten to thousands of people to keep up to date. In most cases this is not because of the inherent complexity of the systems; it's because the systems require interaction with things you can't define. Making an operating system can be done by one person in about a year, full-time. Getting the interfaces entirely correct will take about a year more, making it two years. The problem is that you can't communicate with any existing product.
Developing an x86 OS takes a lot of time, not because the OS is that hard to make but because the system is not intended for running an operating system. The system is intended, first and foremost, for running 16-bit applications on an OS that half the people reading this can only reminisce about. Before you even get to the OS proper, months of development go by - not just because the interface isn't intended for OS development, but because it's actively aimed at being compatible with existing implementations and nothing else. If the existing implementations work with the new devices or BIOSes, nobody cares whether they will work with your OS loader or driver. It might be a standard, official or not, but people will not test against it. Making a driver for all types of hardware on x86 is nigh impossible; the interfaces are not testable and not defined.
There are devices, applications and companies that do conform to standards that have been set up more recently. These include Adobe, who use GIL (their Generic Image Library) for processing images and Boost for a number of internal operations. That includes Microsoft (yes, even them), who offer a clean interface called .NET for developing applications, which promotes not tying them to a given hardware or software interface. That includes SanDisk, who created the SD card standard, which does not rely on emulating a decades-old hard disk controller.
There are also devices, applications and companies that will not convert and/or conform to a new standard. These are the companies that are too far entrenched between other companies to change anything. This includes Microsoft too (it's that big), in that it can't kick out FAT support, and most of the storage companies, which don't kick out SCSI or ATA but instead add yet another layer of required supporting protocol.
The OS will not be changed by these people, and the processor will not be changed. Moreover, Microsoft, the only heavyweight that could actually change something in terms of content, will not change the way applications are linked and supported (mostly because that is their main source of income). For actual development on a groundbreaking level, we're stuck in a deadlock.
I'm drifting off again.
What I do intend to change:
- The processor interface should be optimized for running instructions, as many as it can cope with. The processor should not have to perform tasks that can reasonably be transferred to software.
- The operating system should offer a small, lightweight interface that allows userland applications to access devices according to operating modes that are relevant to them. This is pretty hard to differentiate from Unix, but the main point of difference would be that the ioctl function would not be allowed to exist as such (see the sketch after this list). There are a number of minor changes too.
- Traditional libraries should give code base support (making it easier for you to write code). Libraries should not include conversion functions of any kind.
- Applications should be written as a user interface separate from the actual processing they do, as far as that can be abstracted away. You should be able to write a user interface for a product whose full function you don't understand. You should, most of all, be able to choose your UI.
- Processing should be defined as a number of functions with descriptions of what they do, in such a way that a library can load a function, or a predefined combination of functions, based on its description. The input types, change types and output types of an application should not lie with the application (what the programmer thought of) but with the user (what the user has installed software for).
- User interface design must be tied down. It's great to see an interface that blinks on all sides and looks purple from this angle and blue from that angle, but in the end you're not on the computer for its own sake. You're on the computer to do something. The user interface can look blue and yellow, but if the OK button is always in the same place, you can do your job faster. The UI should be developed to allow users to work the way they want to, not the way the programmers or developers of an application thought somebody might use it.
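To make the ioctl point above a bit more tangible: a minimal C++ sketch of what "devices expose only the operating modes relevant to them" could look like. The names (BlockDevice, Display, read_blocks and so on) are placeholders I'm making up for illustration, not a defined Atlantis interface.

```cpp
#include <cstddef>
#include <cstdint>
#include <span>

// Instead of one untyped ioctl(fd, request, argp) escape hatch, each device
// class exposes a small, typed interface with only the operations that make
// sense for it. Names are illustrative placeholders.

struct BlockDevice {
    virtual ~BlockDevice() = default;
    virtual std::size_t   block_size() const  = 0;
    virtual std::uint64_t block_count() const = 0;
    // Whole-block transfers; no hidden per-handle seek state to get out of sync.
    virtual bool read_blocks(std::uint64_t first_block,
                             std::span<std::byte> out) = 0;
    virtual bool write_blocks(std::uint64_t first_block,
                              std::span<const std::byte> in) = 0;
};

struct Display {
    virtual ~Display() = default;
    virtual int width() const  = 0;
    virtual int height() const = 0;
    // Mode changes are explicit, typed calls - not magic ioctl request numbers.
    virtual bool set_mode(int width, int height, int bits_per_pixel) = 0;
};

// A userland application asks the OS for the interface it actually needs, so a
// program holding only a BlockDevice can never issue a display mode switch.
```

The intent is that the capability sits in the type of the handle you hold, instead of being multiplexed through one untyped request channel.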
To be continued, think I'm going to give sleeping another shot.
Regarding the OS, it seems that we have pretty similar ideas, especially about the specialized device interfaces (no more ioctl) and the strict separation of application logic and UI.
Another cool idea I could think of (IIRC you had already mentioned something like this on MT) is a new way of sandboxing applications and handling permissions, basically: A user or application should only see and have access to resources it needs to accomplish its task. When there's no permission to access a resource, it should simply be hidden away from the application's view (no more unified root directory that contains all files and objects).
I'm looking forward to discussing all this in more detail, btw!
cheers
Joe
I've sometimes wondered vaguely about the same thing. I usually started with a completely new language and compiler, then the operating system, and built up from there. Then I got annoyed at compensating for the x86 architecture, and so on.
One of the problems is that if you reject backwards compatibility, you have to start from the very beginning again. For instance, if you reject ANSI and VT100 serial terminals as not good enough, then you have to question ASCII, then the Latin character set, then you have to question the English language, and then you end up designing a new language for people to speak.
There is no easy point at which to enter this cycle. It's all or nothing.
I remember once seeing on Mega-tokyo someone trying to create a perfect operating system. I don't think it's possible, because to create the perfect operating system, you need to change people's minds, change languages, and so on.
In the end, we all want things to be better, but everybody's afraid of change. However good and noble our design concepts may be, people try to avoid the change.
Flocking behaviour @ work.
We would need some sort of new holocaust to get everybody into a new way of thinking. Something like aliens saying that we should use MIPS or ARM and be taken away if we don't...
The only way to change gradually is to provide a persistent presence of modern technology that does not get sucked into the masses. AtlantisOS might be such a project. The problem is mainly to pull it along long enough that others start helping you drag it before you collapse under its weight.
All I can do is hope you and your followers have enough stamina to last long enough for the next generation to take over.
And maybe one man might not have enough strength and we are hopelessly lost until we are joined together into the "Community OS Project", whenever that might happen.
What's wrong with complying with dated standards? POSIX for example?
And for hardware, sometimes it's a good thing when a new piece of hardware uses a common or modified chipset, especially if the manufacturer isn't so willing to publish documentation for people like us OS developers.
Your theories are flawed man..
First off, I'm amazed anybody read that.
Brynet-Inc wrote: What's wrong with complying with dated standards? POSIX for example? And for hardware, sometimes it's a good thing when a new piece of hardware uses a common or modified chipset, especially if the manufacturer isn't so willing to publish documentation for people like us OS developers. Your theories are flawed man..
Well... comply with POSIX then, the original one. You can't access files beyond 64k because seek uses a U16. Oh wait, we have lseek: 4 GB. And newer versions introduce llseek, with a U64, to let you seek even further. llseek can't handle ZFS (if I read it correctly as a 128-bit filesystem), so you need lllseek. See the point?
Not just that, you're seeking in a file that's essentially random-access. I recall some other standard that had just read(), write() and rewind(), where the last would rewind the entire file. I can imagine why the interface was set up as such, but doing this on a flash chip is just silly.
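To put rough numbers on that seek / lseek / llseek escalation (just a quick sketch; real POSIX offsets are signed types, so halve these in practice):

```cpp
// Each wider offset type only buys a bigger ceiling, it never removes it.
#include <cstdio>
#include <cstdint>

int main() {
    std::printf("16-bit offset ceiling: %llu bytes (64 KiB)\n",
                (unsigned long long)UINT16_MAX + 1);
    std::printf("32-bit offset ceiling: %llu bytes (4 GiB)\n",
                (unsigned long long)UINT32_MAX + 1);
    // 64-bit: 2^64 bytes = 16 EiB - still a ceiling, just a further-away one.
    std::printf("64-bit offset ceiling: 2^64 bytes (16 EiB)\n");
    return 0;
}
```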
When you want data from an SD card, you clock a command out to it serially over SPI, wait a few cycles and clock in the data.
When you want data from a USB stick, you detect where it is, inform the hubs along the way about the connection, determine its endpoints, detect the type, connect to it using USB bulk transfers, send it a SCSI identify-device command that returns loads of data that's completely irrelevant for a flash device but that must be sent along anyway, determine whether it's a valid request, send a SCSI read command, disconnect from the SCSI bus per the protocol, reconnect for the incoming data, download the data and disconnect.
They're in the end the exact same flash chip. Can you tell why I would prefer the first? No, it's not just because SCSI is an older technology.
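For the SD-over-SPI path, here's a condensed sketch of what "clock a command out, wait, clock the data in" amounts to. spi_transfer() is a stand-in for whatever byte-exchange primitive the host controller provides (my placeholder, not a real API); chip-select handling, timeouts and the SDSC-vs-SDHC addressing difference are left out.

```cpp
#include <cstdint>
#include <cstddef>

// Placeholder for the host's SPI byte-exchange primitive: sends one byte,
// returns the byte clocked in at the same time. Not a real library call.
std::uint8_t spi_transfer(std::uint8_t out);

// Read one 512-byte block from an SD card in SPI mode (CMD17, READ_SINGLE_BLOCK).
// Condensed illustration of the protocol described above: command, wait, data.
bool sd_read_block(std::uint32_t block_addr, std::uint8_t (&buf)[512]) {
    // 6-byte command frame: index, 32-bit argument, dummy CRC + stop bit
    // (the CRC is ignored in SPI mode after initialization).
    const std::uint8_t cmd[6] = {
        static_cast<std::uint8_t>(0x40 | 17),
        static_cast<std::uint8_t>(block_addr >> 24),
        static_cast<std::uint8_t>(block_addr >> 16),
        static_cast<std::uint8_t>(block_addr >> 8),
        static_cast<std::uint8_t>(block_addr),
        0x01,
    };
    for (std::uint8_t b : cmd) spi_transfer(b);

    // Wait for the R1 response (the card keeps the line at 0xFF while busy).
    std::uint8_t r1 = 0xFF;
    for (int i = 0; i < 8 && (r1 & 0x80); ++i) r1 = spi_transfer(0xFF);
    if (r1 != 0x00) return false;               // command rejected

    // Wait for the data start token (0xFE), then clock in the block + CRC.
    while (spi_transfer(0xFF) != 0xFE) { /* a real driver would time out */ }
    for (std::size_t i = 0; i < 512; ++i) buf[i] = spi_transfer(0xFF);
    spi_transfer(0xFF); spi_transfer(0xFF);     // discard 16-bit data CRC
    return true;
}
```

Compare that with the USB mass-storage paragraph above and you see the point: same flash chip, one command frame versus a whole protocol stack.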
Combuster wrote: All I can do is hope you and your followers have enough stamina to last long enough for the next generation to take over.
It's just me actually.
Yayyak wrote: One of the problems is if you reject backwards compatibility, you have to start from the very beginning again. For instance, if you reject ANSI and VT100 serial terminals as not good enough, then you have to question ASCII, then the Latin character set, then you have to question the English language, and then you end up designing a new language for people to speak.
The only real answer is to choose all of them and let the end user decide. If your idea really is better, you will see people changing over. There are some interfaces I'm not going to redefine, simply because they're not bad. One of those is Unicode, another is C++. Those are standards that were created with the other pieces in mind (Unicode doesn't prefer little-endian or big-endian, it disfavors both equally; C++ was designed not too long ago, is currently (imo) the most versatile language, and is still being worked on very actively). Both are unencumbered by patents and can be applied as freely as possible.
JoeKayzA wrote: Regarding the OS, it seems that we have pretty similar ideas, especially about the specialized device interfaces (no more ioctl) and the strict separation of application logic and UI.
The separation is one of the points I've been working on for a while, and it's starting to take shape with a few underlying metaphors for which I have nice C++-ish wrappers. The ideas outlined below are backed up by a preliminary C++ version in my archives and a working implementation elsewhere.
Modules that take input and produce output, in a generic sense. I'm still working on the interface, but most of it is done. One part that required a bit of attention was settings, which are now more or less defined. Each module has an input type, an output type and a number of settings. It defines the settings and their types using an object tree that's embedded into the module and that can be serialized to a file format that serializes trees (such as XML). The modules can be used by anything that understands the input and/or output type, if only at a basic level. We've had a number of threads in the past where I've brought up this concept, most met with disbelief that it's possible and/or that I can really make it.
For this concept I can tell you that I've made a working version with 33 example implementations of a module (with a few more coming, but I don't have the time to work on them), which combine in arbitrary ways specified by a script. It has helped me think through the concepts behind it and the performance considerations. It is, in total, around 15000 lines of newly written code. It took me, all together, about a month and a half to write, not least because the code is mostly self-explanatory and it's pretty hard to make a mistake in it without it being noticed. I cannot give the product out, as it was made under contract with my employer for internal use.
For settings, a dynamic variable type is used that can more or less be seen as a variable that can be assigned an expression (as opposed to the value resulting from evaluating the expression) and that informs anyone interested that it has changed. You can also link a number of them together so that they are always kept identical. This allows you to use the modules not just for input and output but for keeping "live" networks of processing that can do anything you like. You could wrap Ghostscript in a module that transforms PostScript files into printer-typed documents. If you added a module that transformed X into PostScript, you could request X to be printed directly. You don't make an image editor that edits jpg, bmp and gif; you make an image editor that edits images. You make import and export modules that import bmp to image, export image to jpg and so on. Add png support and all the programs on your computer that could handle images can now also handle png images.
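Since this always meets disbelief: here is a minimal C++ sketch of the two pieces described above - a setting that notifies observers and can be linked to its peers, and a module that declares a typed input, a typed output and a settings tree. The names are mine and this is not the interface from the implementation I mentioned, just an illustration that the mechanism is small.

```cpp
#include <cstddef>
#include <functional>
#include <map>
#include <string>
#include <vector>

// A setting that notifies observers when it changes and can be linked to
// other settings so the linked group always holds the same value.
class Setting {
public:
    explicit Setting(std::string value = "") : value_(std::move(value)) {}

    void observe(std::function<void(const std::string&)> cb) {
        observers_.push_back(std::move(cb));
    }
    void link(Setting& other) {
        links_.push_back(&other);
        other.links_.push_back(this);
    }
    void set(const std::string& v) {
        if (v == value_) return;               // stops ping-pong between links
        value_ = v;
        for (auto& cb : observers_) cb(value_);
        for (Setting* s : links_) s->set(v);   // keep linked settings identical
    }
    const std::string& get() const { return value_; }

private:
    std::string value_;
    std::vector<std::function<void(const std::string&)>> observers_;
    std::vector<Setting*> links_;
};

// A processing module: typed input, typed output, a named settings tree.
// Anything that understands the declared types can chain modules together.
struct Module {
    virtual ~Module() = default;
    virtual std::string input_type() const = 0;    // e.g. "image"
    virtual std::string output_type() const = 0;   // e.g. "image/png"
    std::map<std::string, Setting> settings;       // serializable to XML etc.
    virtual std::vector<std::byte> process(const std::vector<std::byte>& in) = 0;
};
```

With this shape, "add png support" just means registering one more module whose output type is a png image; nothing that already consumes images has to change.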
End of ramble, pretty much cut short
JoeKayzA wrote: Another cool idea I could think of (IIRC you had already mentioned something like this on MT) is a new way of sandboxing applications and handling permissions, basically: a user or application should only see and have access to the resources it needs to accomplish its task. When there's no permission to access a resource, it should simply be hidden from the application's view (no more unified root directory that contains all files and objects).
It's called a capability-based system and isn't exactly new. I was going for the strongly limited file system, since it also helps the user not lose track of a load of files. Any user file is in your home directory and has timestamps, history tracking and the like. Files in the directory of a program have version numbers and no timestamps. Your program can access its own files and, optionally, your files - no others. You can explicitly give it other permissions, but the program cannot give them to itself, and you can only grant them through a fixed and limited number of interfaces, none of which respond to program input. In short, you're the owner of the computer, not the program.
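A minimal sketch of that ownership model (all names are made up for illustration, none of this is a defined Atlantis interface): a program never sees a global root, only the capabilities handed to it, and only a user-facing grant interface - never the program itself - can add to them.

```cpp
#include <map>
#include <optional>
#include <string>

// What a program is allowed to do with one directory subtree.
enum class Access { ReadOnly, ReadWrite };

// The only filesystem view a program ever receives. There is no global root:
// paths resolve only against the capabilities in this table.
class FileCapabilities {
public:
    std::optional<Access> lookup(const std::string& path) const {
        for (const auto& [prefix, access] : grants_)
            if (path.rfind(prefix, 0) == 0)    // path starts with a granted prefix
                return access;
        return std::nullopt;                   // everything else is invisible
    }

private:
    friend class UserGrantConsole;             // only the user side can add grants
    std::map<std::string, Access> grants_;
};

// Stand-in for the fixed, user-driven grant interfaces (a file picker, a
// settings panel). Programs cannot reach this; it reacts to user input only.
class UserGrantConsole {
public:
    static void grant(FileCapabilities& caps, const std::string& prefix, Access a) {
        caps.grants_[prefix] = a;
    }
};

// Typical startup grants: the program's own directory plus, optionally,
// the user's files - and nothing else.
// UserGrantConsole::grant(caps, "/programs/editor/", Access::ReadOnly);
// UserGrantConsole::grant(caps, "/home/joe/",        Access::ReadWrite);
```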
I didn't come up with the capability idea - Pype, for instance, thought of it first. It's also a fairly generic concept that's just not widely used.
I'm mainly struggling with getting it all up right now. I don't have much free time and I expect it not to increase any time soon. The ideas are there and they're workable. They can be implemented. All it takes, really, is a lot of time and motivation.