Re: Can you run your apps on each other's operating systems?
Posted: Tue Nov 02, 2010 10:41 pm
Some of us are more worried about getting our own apps to run before we try to run anyone else's.
The Place to Start for Operating System Developers
https://f.osdev.org/
DavidCooper wrote: A unified API doesn't need to do everything the best way, just so long as it does everything in reasonable, low-complexity ways

DavidCooper is right, because POSIX, although I don't always agree with its interfaces, is the #1 choice for people implementing an OS API.
KotuxGuy wrote: DavidCooper is right, because POSIX, although I don't always agree with its interfaces, is the #1 choice for people implementing an OS API.

Not to mention it's imprecise in several areas, plain wrong in others (the handling of leap seconds is one that immediately springs to my mind), and not as much of a "standard" as you would like (e.g., Linux not adhering to POSIX in some points - one thing I remember is EFBIG vs. EOVERFLOW in open()).
It's (relatively speaking) simple, and widespread (think about all the GNU tools at your disposal, which include bash and gcc).
Solar wrote: Yes, POSIX compatibility gives you a huge software base to draw upon. It also severely limits your design decisions - you can't help but have your OS become "yet another Unix flavor", with comparatively little leeway to make a difference.

Exactly - especially if you use it as your primary (native) API. I find it more useful to design a new, elegant API, and then provide POSIX compliance through a library.
Solar wrote: Then again, if you cannot run Firefox and OpenOffice, people aren't going to use your OS anyway...

Nope, you are talking about desktop operating systems specifically. If your uptime is measured in decades and it is a rock-solid system with seamless support for clustering, then lots of people are going to use it.
Merlin wrote: some of us are more worried about getting our own apps to run before we try to run others.

I can understand why now - I've obviously cheated by avoiding most of the complexity, so it's only now that I'm beginning to get a proper understanding of how difficult things have become for everyone else. I have a single 4GB address space; a system which allows co-operative multitasking simply by putting thread details into a carousel (any number of entries per app, and apps can add more whenever they want, along with codes indicating how often they need to be called to keep the app running smoothly); and a memory allocation service which simply hands out memory to apps according to what they ask for on loading and recovers it all when they close. (An app that generates massive data files can decide for itself which data to shove onto the hard disk when it isn't using it, so it shouldn't need to ask for more memory once it's going.)

The reality is that machines have so much memory available these days that it's only in servers that you're going to need complex ways of cramming as much stuff in as possible to minimise costs. If someone's using a machine for personal use, there's oodles of room available for any number of apps even if there's no paging system - unless of course they want to do something like video editing, in which case it'll probably use everything the machine's got anyway and they'll have to close almost all the other open apps to stop them getting in the way (though an OS could have a means of hibernating an app to clear it out of memory temporarily without losing your place - maybe most do already, but I wouldn't know).

Anyway, why program for functionality that you don't really need these days? If you find you're running out of space, you can probably shove more memory chips into your machine to cure it, and it won't be long before you won't be able to buy a machine with less than 4GB in it to begin with, so why tie yourself up in knots designing something to work in tiny bits of temporarily assigned memory when there's so much of it available even now?
DavidCooper wrote: One thing I've done with my OS which may sound unwise is completely ignore every memory protection feature available...

{OK guys, lock and load...}
DavidCooper wrote: if you write and debug your code properly, it simply doesn't trash other memory locations. If it's firing off rogue bytes into space, running it through a debugging program should pick that up.

So far, so good. It worked for AmigaOS, for example: bad apps got a bad rep, and no-one used them. Good apps were very thoroughly tested. You didn't get careless with your pointers because there was no protection. But no-one considered the Amiga fit for "serious" apps (like servers...) either, in no small part because of that very "feature".
DavidCooper wrote: If your code doesn't contain viruses, you should be able to register it as virus-free so that people can check that they're getting stuff that won't mess up their machine (by running a simple program to check that the program file is identical to the registered one).

...this opens a completely new can of worms: spoofing the service that checks registrations, spoofing checksums, man-in-the-middle attacks on verification requests - all kinds of malicious stuff hackers will love to exploit.
DavidCooper wrote: In registering a program as virus free, you'd have to hand over a lot of money which you'd get back after a month if no virus is found in your program, while any antivirus company that finds a virus in it during that month would claim the money as a reward (and the police would be after you in addition).

Good-bye hobbyist software development. Well, good-bye commercial software development too - no company is going to take that kind of risk.
DavidCooper wrote: If a program needs to work in a private space in order to keep vital data hidden from snoopers, then the whole computer needs to be that private space, so you need to make sure that there are no snooping programs on your machine in the first place...

Given the amount of software running on today's machines before you even get to see your GUI desktop, that's not feasible.
DavidCooper wrote: Any program that looks at what other programs are doing should be counted as a virus unless the writers can fully account for what it's up to and why, so there should never be any snooping going on inside your machine.

You know "Microsoft-signed/certified drivers"? Ever wonder why only the big corps have their drivers signed at all, and why the version signed is nowhere near the latest version with all the functionality that people buy the hardware for? Most especially for gfx boards, but it's true for virtually every kind of hardware.
DavidCooper wrote: If you take away all the complexity, it actually becomes extremely hard to snoop on another program without directly accessing its memory zone, so it should be easier to recognise a virus than it is with a complex system.

Without memory protection of any kind, you cannot recognise when a piece of software snoops on a memory zone that is not its own, because you don't know which memory zone belongs to whom, or who reads where.
DavidCooper wrote: If you have really secret data on your machine, you'd be a fool to connect it to the net at all, but even so, if you organise things in such a way that a browser never runs any code that comes in from the outside, all your data should be safe anyway...

Without memory protection, I cannot defend against code being injected into my browser from the outside. Oops, there goes my "virus free" deposit again.
DavidCooper wrote: It should be perfectly possible to make a specific machine completely hack-proof if it's only running standard software and components from trusted companies.

No non-trivial system is bug-free.
Solar wrote: No non-trivial system is bug-free.

That's one thing I disagree with. Have you seen seL4? Although I suppose it depends on what you consider trivial... there's no practical way you could secure a whole working system. Still, it's bad to claim that something merely very unlikely is impossible.
Solar wrote: Then again, if you cannot run Firefox and OpenOffice, people aren't going to use your OS anyway...
berkus wrote: ...unless you provide them with some interesting alternatives.

...which is pretty hard to do.
Thomas wrote: hi,
Solar wrote: Then again, if you cannot run Firefox and OpenOffice, people aren't going to use your OS anyway...
Nope, you are talking about desktop operating systems specifically. If your uptime is measured in decades and it is a rock-solid system with seamless support for clustering, then lots of people are going to use it.
--Thomas

I don't think (I said think) any of us are making OSes for servers (or any enterprise stuff, for that matter). Maybe a few, but not many, AFAIK.
Solar wrote: ...this opens a completely new can of worms. Spoofing the service that checks registrations. Spoofing checksums. Man-in-the-middle attacks on verification requests. All kinds of malicious stuff hackers will love to exploit.

How do Microsoft programs update themselves without being spoofed by other sites? If the OS checks a new program out by analysing it in such a way as to derive a coding from it which would be radically different if a single byte in the program was changed, that coding could then be sent in encrypted form to the central verification site (not a spoof version of it with a different address), which would reply in encrypted form.
Solar wrote: Good-bye hobbyist software development. Well, good-bye commercial software development - no company is going to take that kind of risk.

There could be alternatives to putting money in for hobbyists: the most important thing is to know who the software is coming from, so that the police know exactly who to arrest if there's a virus in it. Respectable companies aren't likely to lose any money through this process either, unless they're being sloppy about who puts what into their programs.
Solar wrote: And all a clever hacker has to do is to write malware that makes it look as if something else did the Bad Thing. Not only would his malware spread unrestrained, he'd drive honest vendors into bankruptcy.

It would be even easier to spot it if it behaved like that. When you're testing a program to see how it behaves, you'll see exactly what it's up to - you just run it through a monitor program and see where it's reading from or writing to memory it shouldn't be using. Bear in mind that my goal is to create artificial intelligence to the point where it can disassemble any program and work out exactly what it does in every aspect, without passing on what it learns to anyone unless the software is malicious.
Solar wrote:
DavidCooper wrote: If a program needs to work in a private space in order to keep vital data hidden from snoopers, then the whole computer needs to be that private space, so you need to make sure that there are no snooping programs on your machine in the first place...
Given the amount of software running on today's machines before you even get to see your GUI desktop, not feasible.

It is if every single piece of that software has been properly checked (though you also have to ask yourself why so much stuff has to run before you see your desktop).
Solar wrote: You know "Microsoft-signed/certified drivers"? Ever wonder why only the big corps have their drivers signed at all, and why the version signed is nowhere near the latest version with all the functionality that people buy the hardware for? Most especially for gfx boards, but true for virtually every kind of hardware. Your OS would be running headlong into the same kind of bureaucracy hell.

My eventual plan is for A.I. to write all the device drivers, and the applications too, so all the protection stuff will certainly become redundant in time; but long before that point it will be able to check device drivers written by people and spot faults in them. As it is, there are plenty of people capable of analysing them to look for defects, so the trick would be to offer rewards to people who find viruses and bugs, with the company that wrote the code having to pay out.
Solar wrote: Without memory protection of any kind, you cannot recognise when a software snoops a memory zone that is not its own, because you don't know which memory zone belongs to whom, or who reads where.

Of course you can tell which memory zone belongs to which program, and running programs through a monitor will show up exactly where they're looking. If you were one of the people hoping to cash in by spotting malicious code in programs, you could run the program in a protected memory environment anyway, using that to pick up where it's doing things it shouldn't, and converting its failed calls to the OS (blocked by the protection mechanism) into calls that do work, so you could watch everything it's doing in that regard even when it's running at full speed.

At the moment we have a world in which people download stuff from all over the Web without knowing what they're getting, and ultimately the only real protection they have is if that malicious code has already come to the attention of their antivirus package after being spotted doing damage on other people's machines first. The way to clean that up is to have a central place where you can check what you're getting and know that the person who wrote that code can be traced and prosecuted if it's malicious; almost everything malicious would be spotted very early on by people who specialise in picking up the rewards for finding it (people who would be only too delighted to get their test machines infected).
Solar wrote: Without memory protection, I cannot defend against code being injected into my browser from the outside. Oops, there goes my "virus free" deposit, again.

How is code going to be injected into a browser from the outside? If I could connect a machine with my OS to the Web and wrote a browser for it, you could send it as much malicious code as you like and it would never be run.
Solar wrote: No non-trivial system is bug-free.

Many bugs are harmless and can be left in, but harmful ones show up and can be eliminated. The most harmful would be a bug that does no visible damage but allows a hacker to get into your machine, but I have no doubt that browsers can be designed in such a way that it would be impossible for them to run code they aren't meant to be running.
berkus wrote: By alternatives I meant thinking outside the "browser/office package" box.

Some people (like myself) rarely (if ever) use an office tool, so I guess that could slide...
berkus wrote: Your mindset very much reminds me of the academic folks at the start of the Internet. Why would you need to encrypt the communication channel between machines? What sense is in making MITM-resistant handshakes? We control the hardware, we control the software, it's PROVEN to be virus-free, right?

That would be a valid comparison if I was suggesting that you should allow anything that comes in via the Web to be run on your machine without questioning it, but the whole point of my argument is that we're too trusting of what comes in from the Web at the moment, relying on protection features which don't protect because there are so many other complex routes available for malicious code to exploit - and many of those routes probably only exist to let legitimate programs get things done that the protection mechanisms get in the way of.
It's obviously wrong.
berkus wrote: In the future internets, paranoia is the only way to survive. You should never trust any other nodes, software, hardware etc., unless you're absolutely positive it is safe to do so. And it's usually not.

I have a Windows machine with no antivirus on it which I've never connected to the Web. I run a lot of software on it from a wide range of trusted companies, and it's never suffered any recognisable attack. The real dangers don't come from well-known software companies but from dodgy sites on the Net (and from people who can stick things into your machine directly, though that's another issue). It should be possible to eliminate the danger from the Net completely if your software doesn't just stupidly run any old piece of code that finds its way in.

I see no reason why opening attached files should be a danger either, if the program opening them doesn't run any code in them. Would your OS be opening itself up to infection just by opening a file? I don't think so - you wouldn't design it to do that. But it seems that a complex system stuffed full of security features just opens files and runs embedded code regardless, probably because it's doing so within a protected zone where it hopefully won't be able to do any harm.
berkus wrote: So you're pro-centralisation, and I'm absolutely sure it will not work. I'm pro-decentralisation completely, for the reason that no central authority could be given an absolute right to decide what is right, what is wrong and what I could do with it.

There would be room for alternative authorities to make their own judgements on what's safe, using whatever method they fancy, so you could switch over to using their services instead (or as well) if you felt that good software was being labelled as malicious for political reasons. You would always have the final word on what runs and what doesn't on your machine.