Hi everyone,
I have been a passionate developer since high school, and what I love most is a challenge, low-level programming, and so on. That is why I decided that some day I would start developing a basic OS, so I could push the challenge to the extreme and go beyond my limits. That is how I ended up here, and I will start development soon (after some additional assembler training).
I do not want to create a very powerful or well-known OS or anything like that, just something for myself.
But since I am building my own, I think I should put it to some practical use, e.g. running different kinds of servers.
I also do not want to start a project I will never complete, so I am wondering this: can I assume that a personalized OS gives higher security against different forms of hacking, starting with the fact that it cannot be tested for leaks that would help hack it? (I mean, if the hacker does not have any copy of the OS, he cannot look for leaks on his own network.) Or is there no hope of getting something more secure? If the latter, then I will stick to a standard OS and keep my own for home use only.
Thanks for your answers; I hope that one day I will be able to share my OS experience with you all.
security assumptions for a custom OS
If you are the king of VB, then you are the king of dumbs - M.G.
Re: security assumptions for a custom OS
Always assume that the attacker has your executable files, your source code, and everything except the data that is meant to be secret (private keys, etc.), because you guard that secret data securely. Claiming that your new operating system is more secure because nobody else understands it is called "security through obscurity", and it isn't a valid security solution.
Please note that I like to say that I am hacking on my OS, where hacking means creatively working. With that definition, if you work on your OS, then a hacker is hacking on it.
Are you asking whether you should be lazy and make an insecure operating system? If you never hook it up to the network and you are the only user, then sure, you can skip all that. It does make your system less reliable if your kernel doesn't thoroughly validate its input, though.
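For instance, here is a rough sketch of the kind of validation a kernel could do on a write()-style system call. The names and limits (user_range_ok, USER_SPACE_TOP, MAX_WRITE) are made up for illustration, not taken from any real kernel:

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define USER_SPACE_TOP 0xC0000000UL   /* hypothetical user/kernel split */
#define MAX_WRITE      (1UL << 20)    /* hypothetical per-call limit    */

/* Check that [ptr, ptr + len) lies entirely inside user space. */
static int user_range_ok(const void *ptr, size_t len)
{
    uintptr_t start = (uintptr_t)ptr;
    if (len == 0)
        return 1;
    if (start + len < start)          /* range wraps around the address space */
        return 0;
    return start + len <= USER_SPACE_TOP;
}

/* Hypothetical syscall entry point: never trust fd, buf or len. */
long sys_write(int fd, const void *buf, size_t len)
{
    if (fd < 0)
        return -EBADF;
    if (len > MAX_WRITE)
        return -EINVAL;
    if (!user_range_ok(buf, len))
        return -EFAULT;
    /* ... copy the data out of user space and do the actual write ... */
    return (long)len;
}

int main(void)
{
    char buf[16] = "hello";
    /* A length beyond the per-call limit is rejected before anything is touched. */
    long r = sys_write(1, buf, (size_t)MAX_WRITE + 1);
    printf("oversized write rejected: %ld (expected %d)\n", r, -EINVAL);
    return 0;
}
```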
Please note that today you may well say "I can do without X in my OS", but one day your OS will be pretty complete relative to your original goals, and it may turn out you really want X.
If you want security, write down all the things you don't want to happen and how they could potentially happen (a threat model), and with that in mind, find suitable defences such that they can never happen. If you then prove that things will never go awry, your system is secure by definition. The trouble is making sure that the model of your system (the one you proved secure) actually corresponds to your actual implementation. (There can be buffer overflows and such: things you didn't consider could happen in your model.)
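As a toy illustration of that model-versus-implementation gap (the structure and functions below are invented for this example): the model might simply say "copy the caller-supplied name", which sounds harmless, while the obvious implementation of that sentence overflows a buffer:

```c
#include <stdio.h>
#include <string.h>

struct task {
    char name[16];
    int  privileged;      /* happens to sit right after the buffer */
};

/* What the model says: "copy the caller-supplied name". */
void set_task_name_unsafe(struct task *t, const char *name)
{
    strcpy(t->name, name);            /* overflows 'name' if the input is longer than 15 chars */
}

/* What the implementation must actually do to match the model's intent. */
void set_task_name_safe(struct task *t, const char *name)
{
    strncpy(t->name, name, sizeof t->name - 1);
    t->name[sizeof t->name - 1] = '\0';
}

int main(void)
{
    struct task t = { "", 0 };
    set_task_name_safe(&t, "a-very-long-task-name-from-userspace");
    printf("name=\"%s\" privileged=%d\n", t.name, t.privileged);
    return 0;
}
```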
Re: security assumptions for a custom OS
Hi sortie,
Thanks for your quick reply (and glad to see my English is readable ^^).
I never really considered neglecting security, whatever the purpose of the OS; security is always important to me (even on my own, I can provoke crashes on my OS if it is not secure enough!).
But then I won't make developing the different servers my priority (and I will use a Unix-based OS to run my servers); those are sufficiently secure.
Once again, thanks for the answer and the tip; your point is a good one.
If you are the king of VB, then you are the king of dumbs - M.G.
Re: security assumptions for a custom OS
Before you think about technical barriers against all kinds of attacks and exploits, ask yourself: is it worth it?
Security is all about economics.
axiagame wrote: can I assume that a personalized OS gives higher security against different forms of hacking, starting with the fact that it cannot be tested for leaks that would help hack it?
No. Someone could gather information about your platform (architecture, base library - for example a specific version of newlib, etc.) and in theory could inject OS-independent code, or DDoS you, by trying all sorts of possible exploits and overruns. Doing it without a test bed significantly increases the cost, but it's still possible.
Also note that a mature OS should have eliminated most of these exploits; if you start from scratch, you will very likely face the same ones, and yet you don't have the QA team or large user base to test against them.
Re: security assumptions for a custom OS
Hi,
axiagame wrote: I also do not want to start a project I will never complete, so I am wondering this: can I assume that a personalized OS gives higher security against different forms of hacking, starting with the fact that it cannot be tested for leaks that would help hack it?
Most current OSs are monolithic. This means that a lot of code (with a lot of bugs) is running at the highest privilege level, which creates a significant security risk (people exploiting those bugs). In addition, it means that the kernel has to have a way to start/load code that will run at the highest privilege level, which creates another significant security risk (e.g. people writing "trojan drivers"). For a micro-kernel, all the device drivers, etc. typically run at the lowest privilege level. This mitigates a lot of the risk - if a bug in a driver is exploited, or there's a "trojan driver", then the attacker won't be able to access anywhere near as much. In addition, you can more effectively limit what a driver can access (for example, you could say "keyboard drivers don't have permission to use networking" and make it much harder for someone to write a key-logger).
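As a very rough sketch of that last idea (the capability bits and the driver structure here are invented for illustration, not taken from any real kernel), the "keyboard drivers don't have permission to use networking" policy can be as simple as a bitmask that the kernel checks on every request arriving from a driver process:

```c
#include <stdio.h>

/* Hypothetical capability bits a micro-kernel could grant per driver. */
enum driver_caps {
    CAP_IRQ     = 1 << 0,   /* may register interrupt handlers */
    CAP_PORT_IO = 1 << 1,   /* may access its assigned I/O ports */
    CAP_NETWORK = 1 << 2,   /* may open sockets / send packets */
    CAP_STORAGE = 1 << 3,   /* may issue block-device requests */
};

struct driver {
    const char *name;
    unsigned    caps;       /* granted once, when the driver is loaded */
};

/* Kernel-side check performed on every privileged request from a driver. */
static int driver_may(const struct driver *d, unsigned cap)
{
    return (d->caps & cap) != 0;
}

int main(void)
{
    struct driver keyboard = { "ps2kbd", CAP_IRQ | CAP_PORT_IO };

    /* A compromised keyboard driver trying to phone home is simply refused. */
    if (!driver_may(&keyboard, CAP_NETWORK))
        printf("%s: network access denied\n", keyboard.name);
    return 0;
}
```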
The next step is libraries - most OSs are infested with them. A security hole in one library means that any software using that library has a security hole. It's possible to use "services" instead, where a service runs as a process (in its own separate virtual address space) and software communicates with the service via IPC. For an example, imagine you've got code to compress/decompress data with an accidental security hole, and several applications (web browser, word processor, etc.) that use the compression/decompression code. If the compression/decompression code is implemented as a library, then it can access any/all data (and any/all other resources - files, sockets, etc.) that any of the applications have access to (e.g. possibly including the web browser's built-in "password manager" data). If the compression/decompression code is implemented as a service, then it might only be able to access itself and any data that applications have explicitly sent to it (e.g. not including the web browser's built-in "password manager" data).
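Here is a rough sketch of that library-versus-service difference, using POSIX fork() and a pipe as a stand-in for real IPC; the "decompression" is just a stub, and the only point is that the service process never holds the application's other data:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* Stub standing in for the real (possibly buggy) decompression code. */
static void decompress(const char *in, char *out, size_t outsz)
{
    snprintf(out, outsz, "decompressed(%s)", in);
}

int main(void)
{
    int to_svc[2], from_svc[2];
    if (pipe(to_svc) < 0 || pipe(from_svc) < 0)
        return 1;

    if (fork() == 0) {
        /* Service process: it only ever sees what arrives on the pipe. */
        /* If decompress() is exploited, there is no "password manager" */
        /* data, no files and no sockets of the application in here.    */
        char in[256] = "", out[256];
        close(to_svc[1]);
        close(from_svc[0]);
        read(to_svc[0], in, sizeof in - 1);
        decompress(in, out, sizeof out);
        write(from_svc[1], out, strlen(out) + 1);
        _exit(0);
    }

    /* Application process: it sends only the data it wants processed. */
    char reply[256] = "";
    close(to_svc[0]);
    close(from_svc[1]);
    write(to_svc[1], "archive-data", sizeof "archive-data");
    read(from_svc[0], reply, sizeof reply - 1);
    printf("application got: %s\n", reply);
    wait(NULL);
    return 0;
}
```

A library call would keep the buggy code inside the application's own address space; the pipe boundary above is what limits the damage when the service is compromised.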
Then there's programming languages. Most OSs allow (e.g.) applications and drivers to be written in languages like assembly, C and C++. Programmers need more skill to avoid bugs in these languages, and these languages also make it easier for bugs to go unnoticed. Making people use different languages, and/or making people use some sort of safe/managed code, will improve security.
There's also the delivery method. For example, almost all open source software is much more vulnerable to a type of "man in the middle" attack, where someone downloads the source code, inserts their own malicious code, then provides malicious binaries to the general public. For a simple example, imagine if you went to your local computer shop and told them you want to supply lots of "Ubuntu Install CDs" for free (to promote open source and/or Linux and/or Ubuntu). Everyone that uses your CD is vulnerable to whatever changes you felt like making, and if these people bother to look at the source code (99.999% of people won't) then they'd be looking at the original (unmodified) source code and aren't going to see any of your malicious changes. If you attempt to do something like this with Windows or OS X, everyone is going to know that something is dodgy (e.g. piracy) before they even see the CD. Basically, in both cases you have to trust the supplier of the software, but for open source you also need to trust everyone in between the original supplier and the end user.
Finally, most existing OSs were designed for "large boxes" (desktop, workstation and server systems) where getting physical access is harder; because of this, they aren't designed to prevent access by people with physical access (for example, someone booting a "live" CD and mounting existing file systems to bypass the installed OS's file system checks). The modern trend is towards mobile systems, where it's much easier to (e.g.) steal someone's laptop or smartphone, and the old "getting physical access is harder" mentality is no longer reasonable. This means that you can/should consider doing things like encrypting file systems, so that physical access alone is not enough to gain access to sensitive data.
For the reasons above; it is possible for an OS to be much more secure than existing mainstream OSs.
For your own personal OS (that you and nobody else uses), the problem is that a full OS takes a massive amount of work. The OS would be secure because there are no applications and no device drivers, not because nobody else has the software.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.