Speed : Assembly OS vs Linux

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Speed : Assembly OS vs Linux

Post by Brendan »

Hi,
mallard wrote:This time including such gems as "1960's Unix" (Unix development didn't start until 1970, or late 1969 at the very earliest). :roll:
From the History of Unix wikipedia page:
"On the PDP-7, in 1969, a team of Bell Labs researchers led by Thompson and Ritchie, including Rudd Canaday, developed a hierarchical file system, the concepts of computer processes and device files, a command-line interpreter, and some small utility programs.[2] The resulting system, much smaller than the envisioned Multics system, was to become Unix."
mallard wrote:The idea that a language whose primary appeal is its portability and simplicity would include such things as a GUI library (if it had, you'd never have heard of "C" and instead would be ranting about whatever took its place) is simply absurd.
Sure, just like nobody has ever heard of Visual Basic or Java. If they cared about portability, they would've added a GUI library just to keep the language modern. They didn't.
mallard wrote:Many attempts have been made to replace plain text as a format for source code, some of which were even moderately successful (e.g. many BASIC implementations store code pre-tokenised), but time and time again, the advantages of plain text (no special tools required to read it, free choice of editor/IDE, easy to write code-generation tools, etc.) have won out. The overhead of parsing is only reducing as processing capacity increases.
This is only true because people are too lazy to do anything right (imagine if databases and spreadsheets used plain text because developers were too lazy to implement special tools). As long as the file format is open (and mine will be), nothing prevents multiple editors/IDEs; and if you look at existing IDEs you'll see they all parse the source anyway, and most build their own internal abstract syntax tree (for features like syntax highlighting and intelligent code completion), so not doing things right just makes things harder for IDE developers. For code generation tools it's actually easier to deal with abstract syntax trees than with text, especially when inserting into existing source code.
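
To make that last point concrete, here is a toy sketch in C (the node layout and the statement strings are invented for illustration, and this is nothing like Brendan's actual file format): a code-generation tool working on an AST inserts a statement with a single pointer update, whereas a text-based tool has to re-parse or splice strings and hope the surrounding formatting survives.

Code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy AST: a function body as a linked list of statement nodes.
 * Purely illustrative -- not any real editor's or OS's source format. */
typedef struct Stmt {
    char text[64];        /* stand-in for real node data (kind, operands, ...) */
    struct Stmt *next;
} Stmt;

static Stmt *make_stmt(const char *text)
{
    Stmt *s = calloc(1, sizeof(*s));
    strncpy(s->text, text, sizeof(s->text) - 1);
    return s;
}

/* Inserting a statement is a pointer operation; no re-parsing, no string
 * splicing, no risk of breaking the surrounding formatting. */
static void insert_after(Stmt *pos, Stmt *new_stmt)
{
    new_stmt->next = pos->next;
    pos->next = new_stmt;
}

int main(void)
{
    Stmt *read_stmt = make_stmt("x = read_sensor();");
    Stmt *log_stmt  = make_stmt("log(x);");
    read_stmt->next = log_stmt;

    /* A code-generation tool inserts a bounds check between the two. */
    insert_after(read_stmt, make_stmt("x = clamp(x, 0, 100);"));

    for (Stmt *s = read_stmt; s != NULL; s = s->next)
        printf("%s\n", s->text);
    return 0;
}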
mallard wrote:"Shared libraries are misguided and stupid." Really? If nothing else, the ability to easily fix security issues in all affected programs is a massively useful thing in a modern environment. "very few libraries are actually shared" is utterly false. Sure, some are used more than others and there are often application-specific "libraries", but it doesn't take much exploring with tools like "ldd" or "Dependency Walker" to see how crucial shared libraries are to modern systems.
The ability for the same security vulnerability to affect a large number of different processes isn't a good thing. Things like DLL injection and cache timing attacks (those that rely on the same physical pages being used for the shared library) are not a good thing either. Working software breaking because you installed a new application (which came with a slightly newer version of the library) is also not a good thing.

To save memory, you need 2 or more different executables using the same parts of the same shared library at the same time; and even then you still might not save memory (as large amounts of bloat may have been removable through link time optimisation).

In rare cases where something actually does reduce memory usage in practice; it still doesn't justify the disadvantages and probably only indicates that the language's standard library is "overly minimal" (e.g. doesn't cover important things most software needs).
mallard wrote:Executables as bytecode is actually one of his better "ideas". Of course, with Java, .Net and LLVM, it's been done for over a decade. Whether the compiler is "built into the OS" or not is fairly irrelevant. Something that complex and security-sensitive should never run in kernelspace, so it's just a matter of packaging.
That idea is almost 50 years old (O-code) if not older.
mallard wrote:While the "C" ecosystem is far from perfect (I agree with quite a few of Brendan's points, particularly on the obsolete, insecure functions in the standard library), some of these "imperfections" are exactly what's made it popular (simplicity and portability, enough left "undefined" to allow performant code on virtually any platform). If C had never been invented, we'd almost certainly be using something pretty similar.
C definitely deserves a prominent place in history.
mallard wrote:I'm increasingly convinced that if Brendan's super-OS ever leaves the imaginary phase ("planning"), it'll be the 20xx's equivalent of MULTICS...
I hope so - if people see my OS and those concepts make their way into later OSs (just like MULTICS concepts made their way into Unix) I'd consider that a very significant victory.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
embryo

Re: Speed : Assembly OS vs Linux

Post by embryo »

Brendan wrote:..and to be honest, I can't really imagine what it is that I want, or explain it properly.

It's like I'm in some sort of twisted science fiction movie. Some historically important people were meant to make breakthroughs that would've revolutionised programming 20 years ago, and we're meant to be using nice shiny/advanced tools now and joking about the crusty old stuff our forefathers used. However; some nasty time travellers went back in time and killed those important people before they could make their discoveries, and nobody realises what the time travellers have done, so we're still using these old tools. In the back of my mind I know something has gone horribly wrong, and I'm trying to imagine the tools that we would've been using if the time travellers didn't screw everything up; but I don't have all the pieces of the puzzle and struggle to imagine something that nobody (in this corrupted reality) ever got a chance to see. :)
The time travellers killed the people who were thinking about how to fight the complexity problem. Basically, a human being is unable to cope with more than 7 simple things simultaneously; there's just not enough cache memory in the human brain for more than 7 things. To solve the complexity problem we need a tool that keeps a task's representation within that range of 1 to 7 things. It should be able to represent a very complex domain (like an OS with rich functionality) in a few objects/pictures/sentences, and every aspect of the domain should also be representable in a few objects/pictures/sentences. Anything you can imagine about a complex domain should be representable in some very simple manner that is acceptable to our weak brains.

To represent everything in a simple manner, such a tool would have to filter out all the irrelevant things and show just the few most important ones. But to create such a tool, its creator would have to work comfortably with constructions like 100 nested ifs, or something similar. And if we remember that an ordinary human can manage no more than about 7 nested ifs (no more than 7 simple things), then we can see what the complexity problem actually is: it can only be solved by building a tool that is impossible to build without having solved the problem first.

But there is a way out of this circle. The way is not easy and requires a lot of time and effort, mixed with the constant invention of some very tricky algorithms. In short, it is about really deep automatic optimization of a program. The long story is about all the steps required to create such a tool. And there was no need for time travellers to kill anybody: nobody has yet accomplished the task of writing the long story, simply because the story really is long and really does require a lot of inventions.

And I like Brendan's thrust into the area of the complexity problem. But there have been plenty of efforts in the wrong direction before: a lot of work was wasted before the periodic table was discovered, many people spent their time hunting for a mythical aether before physicists constructed the edifice of modern physics, and nobody knew about the Americas before humanity discovered that the earth is spherical. So today there is no good theory, and everybody who tries to create the magic tool must first discover their own "periodic table".
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Speed : Assembly OS vs Linux

Post by Rusky »

The concept of shared libraries is extremely important for security reasons, even if the actual implementation as dynamic linking is not. There will always be code that doesn't belong in any particular language's standard library, doesn't belong in the kernel, and needs to be updated without the intervention of every app developer on the platform.

Take networking encryption, for example. It needs to be used from multiple languages, but you definitely don't want to maintain multiple implementations. It doesn't belong in the kernel, but you definitely need to be able to patch it without waiting on the whims of every networked app you ever use.

This means static linking is not even an option. A message-passing server process would work, but that inhibits the optimizer even more than dynamic linking and is essentially the same thing anyway, given a standard interface and well-written servers that don't assume they're singletons.

That leaves install-time linking. If the OS can re-link applications whenever libraries need to be updated, you get both the performance of static linking and the security of dynamic linking. Brendan's install-time compilation from bytecode is actually a really good way to do this, and it solves several other problems at the same time.
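
As a rough sketch of the install-time linking idea (the paths, names and the use of a stock cc/LTO toolchain are assumptions for illustration, not Brendan's or anyone's actual design): the OS keeps each application's objects or bytecode around, and whenever a library is updated it walks a dependency manifest and re-links every affected application, so users get freshly optimised, effectively statically linked binaries without any app developer being involved.

Code:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical manifest entry the OS would maintain at install time. */
struct app {
    const char *name;     /* installed application               */
    const char *objects;  /* its retained object/bytecode files   */
};

int main(void)
{
    static const struct app apps[] = {
        { "mailer",  "/apps/mailer/*.o"  },   /* assumed paths */
        { "browser", "/apps/browser/*.o" },
    };
    char cmd[512];

    /* Triggered whenever /libs/libtls_assumed.a is updated. */
    for (size_t i = 0; i < sizeof(apps) / sizeof(apps[0]); i++) {
        /* -flto lets the toolchain re-optimise across the app/library
         * boundary, recovering most of the benefit of static linking. */
        snprintf(cmd, sizeof(cmd),
                 "cc -O2 -flto -o /apps/%s/%s.new %s /libs/libtls_assumed.a",
                 apps[i].name, apps[i].name, apps[i].objects);
        printf("relinking %s:\n  %s\n", apps[i].name, cmd);
        if (system(cmd) != 0)
            fprintf(stderr, "  relink failed; keeping the old binary\n");
    }
    return 0;
}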
rdos
Member
Posts: 3276
Joined: Wed Oct 01, 2008 1:55 pm

Re: Speed : Assembly OS vs Linux

Post by rdos »

Rusky wrote:The concept of shared libraries is extremely important for security reasons, even if the actual implementation as dynamic linking is not. There will always be code that doesn't belong in any particular language's standard library, doesn't belong in the kernel, and needs to be updated without the intervention of every app developer on the platform.
I decided that I would NOT support dynamic linking of the C runtime or of any OS layer. I do support DLLs, but they are used in sane ways.

I find shared libraries to be a huge security risk because by manipulating them you can affect multiple applications that you might not have any idea how to modify if they were statically linked instead.
Rusky wrote: Take networking encryption, for example. It needs to be used from multiple languages, but you definitely don't want to maintain multiple implementations. It doesn't belong in the kernel, but you definitely need to be able to patch it without waiting on the whims of every networked app you ever use.
I have pondered implementing secure sockets, and I would do it all in the kernel, with nothing in user space. That way applications are unaware of how it works, don't need to link any support code, and cannot interfere with security.
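
From the application's point of view, an all-in-kernel design could be as small as this sketch (the SO_SECURE_ASSUMED option and its semantics are invented for illustration and are not RDOS's actual interface; for comparison, Linux's kTLS goes part of the way in this direction but still leaves the handshake in user space):

Code:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Hypothetical socket option meaning "run TLS for this connection in the
 * kernel"; the name and value are invented for this sketch. */
#define SO_SECURE_ASSUMED 0x7001

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(443) };
    inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);   /* documentation address */

    /* The application never links a TLS library and never sees key material;
     * the kernel performs the handshake and encrypts/decrypts transparently. */
    int on = 1;
    setsockopt(s, SOL_SOCKET, SO_SECURE_ASSUMED, &on, sizeof(on));

    if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
        const char req[] = "GET / HTTP/1.0\r\n\r\n";
        write(s, req, sizeof(req) - 1);   /* plaintext here, ciphertext on the wire */
    }
    close(s);
    return 0;
}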
Rusky wrote: This means static linking is not even an option. A message-passing server process would work, but that inhibits the optimizer even more than dynamic linking and is essentially the same thing anyway, given a standard interface and well-written servers that don't assume they're singletons.
I wouldn't even consider implementing this with server processes. There's no reason to, as you can handle it in the calling thread in kernel space.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Speed : Assembly OS vs Linux

Post by Rusky »

rdos wrote:I decided that I would NOT support dynamic linking of the C runtime or of any OS layer. I do support DLLs, but they are used in sane ways.

I find shared libraries to be a huge security risk because by manipulating them you can affect multiple applications that you might not have any idea how to modify if they were statically linked instead.
If your implementation of shared libraries allows applications to affect each other, you've done something horribly wrong. Only privileged entities should be able to update shared libraries, and only if they maintain ABI compatibility with what they're replacing. This is how you fix security holes without rebuilding every application, the same way you might fix a security hole in the kernel (which is effectively dynamically linked to everything).
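
For concreteness, here is what "maintaining ABI compatibility" typically means for a C shared library (the library and function names below are invented): a new version may add symbols, but must not change the layout of types that callers allocate or the signatures and behaviour of existing entry points.

Code:

/* libtls_assumed.h -- hypothetical shared library interface */

/* Callers allocate this themselves, so its size and layout are part of
 * the ABI and must not change between versions. */
typedef struct {
    int key_bits;
    int mode;
} cipher_params;

/* v1 entry point: signature and semantics are frozen once shipped. */
int encrypt_buffer(const unsigned char *in, unsigned long len,
                   unsigned char *out, const cipher_params *params);

/* v2, ABI-compatible: a new symbol is added and the old one is untouched,
 * so existing binaries keep working without a rebuild. */
int encrypt_buffer_ex(const unsigned char *in, unsigned long len,
                      unsigned char *out, const cipher_params *params,
                      unsigned flags);

/* v2, ABI-BREAKING changes (each would force relinking every application):
 *   - adding a field to cipher_params (its size/offsets change under callers)
 *   - changing 'len' in encrypt_buffer() to a different-width type
 *   - reinterpreting an existing 'mode' value                               */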
SpyderTL
Member
Posts: 1074
Joined: Sun Sep 19, 2010 10:05 pm

Re: Speed : Assembly OS vs Linux

Post by SpyderTL »

You can also require all of your libraries to be digitally signed.
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Speed : Assembly OS vs Linux

Post by Brendan »

Hi,
Rusky wrote:If your implementation of shared libraries allows applications to affect each other, you've done something horribly wrong.
Imagine a process that monitors whether the shared library's cache lines are loaded into cache or not (e.g. flush a cache line, then see how long it takes to access it, to determine whether the line was loaded in between the flush and the access). If the "cache line monitoring" process is affected by another process that's using the same shared library (and thereby causing the shared library's cache lines to be loaded into cache), then you've created a timing side channel that allows information to leak from one process to another.

So; if your implementation of shared libraries allows applications to affect each other (e.g. has cache timing side channels because the shared library's cache lines are shared), you've done something horribly wrong.
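
A minimal sketch of the probe Brendan describes, assuming x86 and the GCC/Clang intrinsics in <x86intrin.h> (the probed symbol, the sleep interval and the ~100-cycle threshold are placeholders; in a real attack the probe address would be a function inside the library mapped into both the spy and the victim):

Code:

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtsc */

/* Stand-in for an address inside the shared library's code; in a real
 * attack this would be e.g. a function in the victim's shared libc. */
static unsigned char probe_target;

static uint64_t time_reload(const unsigned char *p)
{
    _mm_mfence();
    uint64_t start = __rdtsc();
    (void)*(volatile const unsigned char *)p;   /* reload the flushed line */
    _mm_mfence();
    return __rdtsc() - start;
}

int main(void)
{
    for (int i = 0; i < 10; i++) {
        _mm_clflush(&probe_target);   /* evict the line from every cache level */
        _mm_mfence();
        usleep(1000);                 /* window in which the victim may run    */

        uint64_t dt = time_reload(&probe_target);
        /* A fast reload means someone else pulled the line back into cache. */
        printf("reload took %llu cycles -> %s\n", (unsigned long long)dt,
               dt < 100 ? "line was touched (leak)" : "line stayed cold");
    }
    return 0;
}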
Rusky wrote:Only privileged entities should be able to update shared libraries, and only if they maintain ABI compatibility with what they're replacing. This is how you fix security holes without rebuilding every application, the same way you might fix a security hole in the kernel (which is effectively dynamically linked to everything).
Who is this "privileged entity"? Is it an end user (root/admin/superuser) who is unable to tell the difference between "safe new version of library" and "trojan new version of library"; or is it a potentially untrusted third party (e.g. the certificate authority in a digital signature scheme, an unknown repository maintainer, etc); or is it the OS developer themselves (who has taken on the responsibility of doing a security audit on every version of every third party shared library)?

How do you guarantee that ABI/API compatibility was maintained? For example, if 2 independent groups of application developers add their own new features to a shared library, and both groups guarantee that their new version of the shared library has ABI/API compatibility with the old version; then how do you guarantee that both new versions are compatible with each other?


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Speed : Assembly OS vs Linux

Post by Rusky »

Local side channel attacks are a pretty high bar. There's lots of useful room in the security space for just isolating processes from directly modifying the code or data of shared libraries, which is the only thing I was claiming to be horribly wrong.

The same goes for updating shared libraries. In the vast majority of cases you already trust the distributor of your operating system, as well as the distributor of (say) your networking encryption library. This trust includes both distributing what they claim, and keeping it working.

Obviously, if you do need to worry about local side channel attacks or malicious system updates, there are steps you can take. Shared libraries can be isolated without giving up their security benefits just by asking the OS to load a separate copy, or by install-time linking, or by static linking. Your organization can build its own libraries (if it has access to the source), do its own security audits, control its own distribution channels, etc.

None of this diminishes the usefulness of shared libraries in the vast majority of cases.
rdos
Member
Posts: 3276
Joined: Wed Oct 01, 2008 1:55 pm

Re: Speed : Assembly OS vs Linux

Post by rdos »

Rusky wrote:
rdos wrote:I decided that I would NOT support dynamic linking of the C runtime or of any OS layer. I do support DLLs, but they are used in sane ways.

I find shared libraries to be a huge security risk because by manipulating them you can affect multiple applications that you might not have any idea how to modify if they were statically linked instead.
If your implementation of shared libraries allows applications to affect each other, you've done something horribly wrong. Only privileged entities should be able to update shared libraries, and only if they maintain ABI compatibility with what they're replacing. This is how you fix security holes without rebuilding every application, the same way you might fix a security hole in the kernel (which is effectively dynamically linked to everything).
In the distribution of RDOS, everything is controlled, so there is no need for separate control of shared libraries. We have a single shared library, because we need some code that changes infrequently, but that is the only reason for it. We also use DLLs for language independence (resource-only DLLs).

Besides, if you share libc, which is enormously large for a small application, you would load much more code (and thus start times will suffer) than if you statically link the few functions that are actually used. The only case when this dynamic linking of libc is useful is in incredibly bloated designs like Windows or Linux where the OS itself starts a huge number of processes without user intervention.
gerryg400
Member
Posts: 1801
Joined: Thu Mar 25, 2010 11:26 pm
Location: Melbourne, Australia

Re: Speed : Assembly OS vs Linux

Post by gerryg400 »

rdos wrote:Besides, if you share libc, which is enormously large for a small application, you would load much more code (and thus start times will suffer) than if you statically link the few functions that are actually used. The only case when this dynamic linking of libc is useful is in incredibly bloated designs like Windows or Linux where the OS itself starts a huge number of processes without user intervention.
Long load times are easily mitigated on MMU platforms by actually 'sharing' the shared libraries and demand loading pages as required.
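
A userspace sketch of the mechanism gerryg400 is referring to (the library path is just an example; adjust for your system): setting up the mapping costs almost nothing, each page is read from disk only when first touched, and read-only pages are backed by the same physical frames in every process that maps the same file, which is how dynamic loaders map library code.

Code:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    const char *path = "/usr/lib/libc.so.6";   /* example path only */

    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    /* Creating the mapping reads nothing from disk; it only sets up
     * page-table entries that will fault on first access. */
    unsigned char *lib = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (lib == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touching one byte demand-loads just that page, and the kernel backs it
     * with the same page-cache frame shared by every other process mapping
     * this file. */
    printf("first byte of %s: 0x%02x\n", path, lib[0]);

    munmap(lib, st.st_size);
    close(fd);
    return 0;
}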
If a trainstation is where trains stop, what is a workstation ?