I promised to post links to the Mill Security talk. Unfortunately, this talk was not streamed live, and attendance - we were guests of Google - was restricted, so it didn't get a big announcement up front.
The slides and talk are now online:
http://millcomputing.com/topic/security/
the Mill: Security
Re: the Mill: Security
willedwards wrote:
... it survives and recovers from many detected errors and exploits.
...an instruction format that makes "return-oriented programming" exploits very difficult
...which defends not only against known errors and exploits, but also against unanticipated future failures.

It must be noted, however, that errors and exploits should not be swept under the carpet and "recovered" from in such a way that the error or exploit is still there, but nobody knows about it and it is allowed to go on. In particular, it is better to suffer a DoS than to have a very robust system that withstands all DoS attacks yet still permits things like elevation of privilege or information disclosure. In other words, recovery may not be as simple and easy to do right as it sometimes seems. Just like the catch-all-exceptions construct in C++ (a small sketch follows below), it may sometimes work, but at other times it will just hide the problem.

Very difficult != impossible. Once the architecture has become widespread enough, and of enough interest to the bad kind of hackers, "difficult" may suddenly become insufficient.

How do you deal with unknown unknowns (= unanticipated future failures) in an automatic manner? What's the secret?
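To make the C++ catch-all comparison concrete, here is a minimal hypothetical sketch (nothing Mill-specific, all names invented for illustration) of the failure mode being described: the process stays up, but the error - and whoever triggered it - goes unrecorded.

#include <iostream>
#include <stdexcept>
#include <string>

// Hypothetical sketch of the "catch-all" anti-pattern mentioned above:
// the service keeps running, but the real error is silently swallowed.
void handle_request(const std::string& input) {
    try {
        if (input.size() > 64)
            throw std::length_error("length field exceeds buffer");
        // ... normal processing ...
    } catch (...) {
        // "Recovered": no crash, no DoS -- but also no record of what went
        // wrong, so a probing attacker can keep trying undetected.
    }
}

int main() {
    handle_request(std::string(1000, 'A'));  // hostile-looking input, no visible effect
    std::cout << "still running\n";
}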
Re: the Mill: Security
I'm trying to think how to rephrase the abstract. I think the slides are much better at explaining this, and the abstract just poses questions like yours.
"Recovers" really means "fault".
It really is very difficult to find a non-EBB-entry point that does something meaningful. The instruction encoding is optimized for high entropy but, because instructions must be decoded in fixed time, it is of course still sparse enough that a random string of bits is not a valid instruction and will fault.
If you could make a script that generates fun non-EBB entries that do anything other than fault, it would be a work of wonder. I am also personally playing with the idea of a script that brute-forces each EBB, ensuring that entering it at any point other than the real entry is guaranteed to cause a fault before any write op; it's obviously possible, the question is just how much it hurts the instruction-cache budget. I suspect not very much, and can imagine it being a compiler option or even the default.
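A rough C++ sketch of what such a brute-force check might look like. Everything here is an assumption for illustration: decode_at and the DecodedInst fields are an invented decoder interface, not the real Mill toolchain API, and the walk is a simplified straight-line model that ignores the Mill's actual split-stream bundle encoding and any in-EBB branches.

#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical decoder result -- field names and semantics are assumptions
// made for this sketch, not the real Mill toolchain API.
struct DecodedInst {
    bool   valid;          // does this offset decode to a legal instruction?
    bool   writes_memory;  // would executing it perform a store or other write?
    bool   terminates;     // faults, returns, or otherwise leaves the EBB
    size_t length;         // bytes consumed
};

// Placeholder: a real tool would call the target's decoder here.
DecodedInst decode_at(const std::vector<uint8_t>& ebb, size_t offset) {
    (void)ebb; (void)offset;
    return DecodedInst{false, false, true, 1};
}

// Does execution starting at 'start' fault (or leave the EBB) before any write?
// Simplification: straight-line decode only; in-EBB branches are ignored.
bool faults_before_write_from(const std::vector<uint8_t>& ebb, size_t start) {
    size_t pc = start;
    while (pc < ebb.size()) {
        DecodedInst d = decode_at(ebb, pc);
        if (!d.valid)        return true;   // undecodable -> faults, safe
        if (d.writes_memory) return false;  // reachable write -> usable gadget
        if (d.terminates)    return true;   // faults/exits before any write
        pc += d.length;
    }
    return true;  // fell off the end of the EBB -> faults
}

// Brute-force every non-entry offset of one EBB, as described above.
bool verify_ebb(const std::vector<uint8_t>& ebb) {
    for (size_t off = 1; off < ebb.size(); ++off)  // offset 0 is the real entry
        if (!faults_before_write_from(ebb, off))
            return false;
    return true;
}

Presumably the real cost question is how much padding or reordering the compiler would have to do to make the property hold everywhere, which is the instruction-cache budget concern mentioned above.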
The bigger exploit is, say, trying to rewrite a pointer to an object so that some future invocation of a virtual method will actually execute some target function. There are lots of speed bumps here too, just from the nature of the Mill CPU. As a belt machine, once a load retires it's read-only; you cannot overwrite temporal belt entries the way you can overwrite registers in a spatial addressing model. And with so many of the code pointers hidden behind portals and position independence, it's damn difficult to know where you'd even want to jump to, despite it being a shared-address-space machine.
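For readers who haven't seen that attack class spelled out, here is a minimal hypothetical C++ illustration on a conventional machine: an overflow rewrites a code pointer held in an adjacent field, so a later indirect call runs an attacker-chosen function. The struct layout, names, and the plain function pointer standing in for an object's vtable pointer are all illustrative assumptions.

#include <cstdio>
#include <cstring>

// Hypothetical illustration (conventional machine, not a Mill): an overflow
// rewrites a code pointer in an adjacent field, so a later indirect call
// executes an attacker-chosen function.
struct Handler {
    char buf[16];            // attacker-influenced data
    void (*on_done)();       // code pointer the attacker wants to redirect
};

void benign()     { std::puts("benign handler"); }
void not_benign() { std::puts("attacker-chosen target"); }

int main() {
    Handler h;
    h.on_done = &benign;

    // Deliberately unsafe copy: 16 filler bytes plus the address of
    // not_benign() spill past buf and overwrite on_done.
    char payload[16 + sizeof(void (*)())];
    std::memset(payload, 'A', 16);
    void (*target)() = &not_benign;
    std::memcpy(payload + 16, &target, sizeof target);
    std::memcpy(h.buf, payload, sizeof payload);   // the overflow

    h.on_done();   // the "future invocation" now runs the target function
    return 0;
}

On the Mill, as described above, the address that would replace on_done is much harder to come by (portals, position independence), and values already on the belt cannot be retroactively rewritten the way registers or stack slots can.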
The talk has already been given, so even though we're waiting for the slides to turn up online, I can talk about their contents and the innards of the various aspects of robustness if you just lead me there.
Re: the Mill: Security
It's interesting. While the return address is essentially hidden and can't be overwritten (at all?), this protection doesn't cover function pointers residing in parameters or in structures/arrays (local or on the heap). At first glance, "return-oriented programming" doesn't appear to be entirely impossible if we look beyond the return statement.
Re: the Mill: Security
You could still overwrite a function pointer if you could find a buffer overflow, etc. But because of regions you still can't execute arbitrary code, so unless you can find a function pointer that is called repeatedly even when you change it every time, or a series of function pointers that are called in sequence regardless of where they point (and that are all smashable), ROP still isn't going to work. There's also much finer granularity of protection between services, so even that is less likely.