Secure? How?
I think that over the past several months I have seen a number of posts claiming that such and such hobby (or not) OS is going to be secure, with little else in the way of details.
That made me think they'd be as secure as Linux, Windows and whatever else is in common use, if not even more secure.
I wonder what exactly the authors making such undoubtedly bold claims mean; basically, how are they going to achieve that level of security?
And, what's more, how many of them have actually studied security and have done things like security design/code reviews, penetration testing and so on, or are planning to do so if they haven't yet?
The reasons I am wondering about, or, to put it straight, not believing such claims, are simple... Few people get security. Fewer get it right. If you're making an OS all by yourself, and it's not something so useless that nobody would care about its security, it will take a lot of time just to make it.
And then, even if it was designed and written with security in mind, you'd have to at the very least thoroughly pen-test it. But it's not just that.
You probably aren't going to write things like TLS/SSL or a web browser from scratch. You'll likely use an open source implementation. And, as in any software of size and value, there will be bugs, security bugs included. We have repeatedly witnessed their discovery in well-tried implementations of both open source and closed source software. People keep finding stuff. Developers keep introducing bugs. What was thought secure yesterday may be insecure today, and you may not even know it. You will basically have to trust whatever you borrow, or accept it as-is and hope for the best, because you yourself won't be able to prove it secure or otherwise.
So, what really makes some think they can make their OS secure?
Do you have some novel design ideas that aren't already found in Linux, Windows, etc.? If so, have you run them through security guys? What do they think? Or what?
- Combuster
- Member
- Posts: 9301
- Joined: Wed Oct 18, 2006 3:45 am
- Libera.chat IRC: [com]buster
- Location: On the balcony, where I can actually keep 1½m distance
- Contact:
Re: Secure? How?
Basically, everything boils down to "Any form of security succumbs to sufficient effort". I have done a few security-related things professionally, and it all comes down to the fact that at some point you can't practically secure it any further without breaking functionality.
Of course, big companies have big risks and therefore big pockets of money to spend on testing, and that gives them an advantage. Such companies also have a lot of history and legacy, and that gives them a disadvantage. Writing code in assembly or C is significantly more error-prone than doing the same thing in a language that lacks direct memory access. Schemes that grant zero privileges by default also shrink the potential attack surface. Choosing a programming language that lets you write code effectively and legibly reduces the number of errors as well.
All these things can be done by new projects and grant them an inherent advantage. However, I have yet to see any newcomer actually consciously decide to write a microkernel in Rust, instead of thinking they can beat the Microsoft developers at their own game.
tl;dr: Wrenchtime.
Re: Secure? How?
alexfru wrote: I wonder what exactly the authors that are making such undoubtedly bold claims mean, basically, how they are going to achieve that level of security.
Have you tried to learn more about those claims? Have you discussed the issue with the people making them? It seems you are just wondering, but haven't actually done much of that.
alexfru wrote: And, what's more, how many of them have actually studied security and have done things like security design/code reviews, penetration testing and so on or are planning to do that if they haven't yet.
First, there are concepts that make an OS more secure. If you learn the concepts, you can judge an OS's security level. Any testing or other huge effort targeted at implementing those concepts is just a matter of time. If there were an OS with really good security concepts employed, it would be valid to think of it as a potentially secure OS. But if you expect an already polished tool with all the time-consuming work done, then it is better to look at Linux, Windows, or maybe some rarely used but heavily invested-in OS.
alexfru wrote: And then, even if it was designed and written with security in mind, at the very least thoroughly pen-test it. But it's not just that.
So your claim is in fact about the lack of time spent on the security side of hobby OSes. But do you think there is really enough motivation for every osdever to spend the required time? You're missing the bigger picture of hobby osdeving.
alexfru wrote: So, what really makes some think they can make their OS secure?
It is about security concepts, not about huge investment in a hobby OS.
alexfru wrote: Do you have some novel design ideas that aren't already found in Linux, Windows, etc? If so, have you run them through security guys, what do they think? Or what?
As already said: if you expect a polished product, then you should look somewhere else for help, not to hobby osdevers.
But you can help the osdev community by participating in the testing process or in running things "through security guys".
-
- Member
- Posts: 283
- Joined: Mon Jan 03, 2011 6:58 pm
Re: Secure? How?
alexfru wrote: ...
You are blatantly saying you think these osdev members "clearly can't know what they are talking about."
Maybe you should explain what security experience you have, so you at least have a leg to stand on...
- Monk
Re: Secure? How?
tjmonk15 wrote: Maybe you should explain what security experience you have, so you atleast have a leg to stand on...
I did a combination of security design/code reviews and fuzzing/pentesting for about 1.5 years at Microsoft, if that helps.
- eryjus
- Member
- Posts: 286
- Joined: Fri Oct 21, 2011 9:47 pm
- Libera.chat IRC: eryjus
- Location: Tustin, CA USA
Re: Secure? How?
alexfru wrote: I did a combination of security design/code reviews and fuzzing/pentesting for about 1.5 years at Microsoft, if it helps you.
Perhaps alexfru would consider adding his knowledge to the wiki content...
Adam
The name is fitting: Century Hobby OS -- At this rate, it's gonna take me that long!
Read about my mistakes and missteps with this iteration: Journal
"Sometimes things just don't make sense until you figure them out." -- Phil Stahlheber
Re: Secure? How?
Hi,
alexfru wrote: I think, in the past several months I have seen a number of posts claiming that such and such hobby (or not) OS is going to be secure with little else for details. That made me think they'd be as secure as Linux, Windows and whatever else is there in common use, if not even more secure. I wonder what exactly the authors that are making such undoubtedly bold claims mean, basically, how they are going to achieve that level of security.
I wonder too; but I don't necessarily doubt their claims.
As far as I can tell, roughly 50% of security vulnerabilities happen because programming languages (e.g. C/C++) are bad, and roughly 75% happen because the programmer was stupid. Note: there is a minimum of 25% overlap here, where both the language and the programmer sucked and it's fair to blame either or both.
By fixing the programming languages you could probably halve this: partly by making things like "array index out of bounds" and integer overflows impossible (and by not keeping severely broken old crud like "strcat()" in libraries), and partly by making things less complex so that programmers are able to focus on getting it right.
By fixing the environment (e.g. by moving to "isolated pieces that are only able to communicate via messaging" to enforce/promote "separation of concerns" and minimise risk, by adding finer-grained permissions for those pieces/processes, by removing the "all-powerful root user" idiocy, by using a micro-kernel instead of stuffing everything and the kitchen sink into CPL=0, etc.) you'd probably halve the number of security problems again.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Secure? How?
I am not so sure blaming the language or the programmer is enough to answer this question. I think in some cases the problem happens because the programmer in question is trying to solve a problem for which he does not have all the information required to produce a correct solution. You can be a good programmer who fully understands your tools (the language and the compiler you are using, for instance) and still make mistakes, not because you wrote faulty code but because the premises were wrong. Call it a logical error if you will.
Fudge - Simplicity, clarity and speed.
http://github.com/Jezze/fudge/
Re: Secure? How?
Jezze wrote: I think in some cases the problem happen because the programmer in question is trying to solve a problem for which he does not have all the information required to make a correct solution.
It is usually called expertise; it can also be called experience, though the latter can miss some important things. Here somebody like alexfru can notice a lack of expertise and point to it. But pointing at general observations that somebody, somewhere, sometimes has inadequate expertise doesn't help much. What we need is a discussion about particular security decisions; we hardly need to discuss the fact that a human can miss something.
- Combuster
- Member
- Posts: 9301
- Joined: Wed Oct 18, 2006 3:45 am
- Libera.chat IRC: [com]buster
- Location: On the balcony, where I can actually keep 1½m distance
- Contact:
Re: Secure? How?
But regardless, as long as the person in question doesn't describe how he secured it, it's just as broken as security through obscurity, i.e. assume the worst. Using "secure" as a buzzword may simply be too common, with security being nothing but a nice-to-have feature that never gets implemented in the end. Anybody who happened to have read through the Snowden news topics can make such a claim without facing any consequence.
Bottom line, without the author in question we know nothing, and general suggestions are the only thing we are able to discuss.
-
- Member
- Posts: 96
- Joined: Sat Mar 15, 2014 3:49 pm
Re: Secure? How?
There are two general attack vectors - against code and against users.
There's been a big shift - for example by Apple with iOS and Google with Android - to try and insulate users by curated app-stores and per-app permission models.
There are still plenty of real 'security' ideas that hobby OSes could explore, especially if you give up the ability to run a wide range of existing software as-is.
For example "capabilities", which can mean either per-action tickets or general process attributes. Worth googling.
And for something completely different, a very old blog post by me about security on the UIQ phone UI (which never reached production) http://williamedwardscoder.tumblr.com/p ... in-android
Re: Secure? How?
willedwards wrote: For example "capabilities", which can mean either per-action tickets or general process attributes.
Yes, that seems like a really good solution. But it requires flexible permission management, not the simplistic approach that Google has chosen in Android. Default settings should be fine for ordinary users, but there must be some means to manage the settings in a preferred way. And the means should be simple, like flexible group actions where the groups are based on patterns, or even a small language that describes them. The granularity of permission checks should be really high, and it seems that Linux-based solutions require very large efforts to achieve such granularity.
Can anybody assess the effort needed to make Linux check every process's access to every resource at high granularity (e.g. per-file permissions for every process, or per-IP permissions per process with download/upload limits/notification/logging)?
-
- Member
- Posts: 96
- Joined: Sat Mar 15, 2014 3:49 pm
Re: Secure? How?
I hope that after we've researched the subject, we will all appreciate the tradeoffs involved in Android's and iOS's "simplistic" model.
Linux supports POSIX capabilities.
Worth also reading up on SELinux.
Generally, these types of 'capability' are not the real kind. The real kind is where every action that needs securing has a unique token ("ticket") associated with it.
See, I did warn you there were two overloads of the word "capability".
For the per-action kind, see this list here, especially calling out L4, EROS and Capsicum.
So there's lots of really real security you can aim for when you are creating a new OS. However, these typically mean throwing out and breaking mainstream apps you might hope to port.
Another website from a Mill veteran is http://www.cap-lore.com/ - enjoy
Re: Secure? How?
I haven't done any research into the security side of things yet but would jailing every app in a semi virtual machine be an option?
The idea would be something like how DOSBox mounts a directory as the root file system and the app thinks what it sees is the whole machine. This way the jailer system could automatically add the shared library directories and parts of the user directory to the jailed file system. If an app requires more access than a standard install, the root user would need to add more directories and devices to the app's permissions.
IPC would be seen as RPC and used through a virtual network device.
All of this relies on file level security which also applies to the devices the app can use. Every app starts with only the bare minimum exposed by default and any damage done would be limited to the user's documents as the rest of the filesystem doesn't exist to the app.
It sounds very simple so I'm sure there's huge holes in my current understanding of security.
"God! Not Unix" - Richard Stallman
Website: venom Dev
OS project: venom OS
Hexadecimal Editor: hexed
-
- Member
- Posts: 307
- Joined: Wed Oct 30, 2013 1:57 pm
- Libera.chat IRC: no92
- Location: Germany
- Contact:
Re: Secure? How?
Wrapping every app into a virtual machine isn't the solution to all security problems. As Brendan recently pointed out, a lot of vulnerabilities in code are caused by flaws in the programming language.
The solution to most problems would be if the processor itself distinguished between code and data. All CPUs known to me are happy to execute something that is actually data. As that's something we can't achieve here on the OSDev forums using protected/long mode, we have to eliminate the other causes: namely, careless programming languages (by adding features like bounds checking) and programmers of non-kernel software using silly/stupid/flawed designs and techniques.