Secure? How?

Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Secure? How?

Post by Rusky »

Brendan wrote:There is no defence against exploitable hardware bugs like these (at least none that can be implemented before the hardware bugs are known); so for this case "several layers of protection" is just "several layers of pointless bloat that don't prevent exploitable hardware bugs".
Actually, the best mitigation for this sort of hardware bug (besides potentially switching memory types to something better in the future) has been around for a long time- ECC memory. And that one layer of protection is useful against several other attacks (and bugs) as well.
Brendan wrote:This is mostly wishful thinking; because most of these bugs are impossible to detect or prevent (regardless of how much "babysitting bloat" you add). Apple's "goto fail" bug was a perfect example of this.
Just because some (not most) bugs (goto fail) are harder to protect against, does not mean that others (most buffer overflows/null pointer dereferences, heartbleed) are not easier to protect against with various layers that aren't actually bloat (ASLR, W^X, syscall filtering, language-assisted bounds checking, overflow checking). Further, I'm not just talking about runtime protection- like you say, there are language changes that can help here too. Each layer, even if insufficient on its own, makes it less likely that each individual vulnerability will amount to anything.
bewing wrote:OK, but I still disagree with that. Because the practical end result of that is that the OS designers get let off the hook. They have created a braindamaged OS that has a security hole, that can be exploited with a stack smash, say. They add stack smashing prevention to the compiler for userapps and say "look! Problem solved." Except it's not.
No, this doesn't let OS designers off the hook at all- it means more work for them! They have decided to maintain multiple types of protection, in case any one type is exploitable (because they all are in some valid situation or another). Problems in any layer are fixed when they're discovered. The mere possibility of someone being lazy and using a type of protection as a bandaid is completely invalid as an argument against that type of protection.
Brendan wrote:incredibly idiotic compilers like GCC that treat "undefined behaviour" (that it does detect) as an opportunity for optimisation rather than as an error condition.
Off topic, but I've got to say undefined behavior is not so much "detected and exploited" as "assumed never to happen." The detection, when it does happen, is generally treated as a warning (unfortunately).
alexfru
Member
Posts: 1111
Joined: Tue Mar 04, 2014 5:27 am

Re: Secure? How?

Post by alexfru »

Rusky wrote:
Brendan wrote:incredibly idiotic compilers like GCC that treat "undefined behaviour" (that it does detect) as an opportunity for optimisation rather than as an error condition.
Off topic, but I've got to say undefined behavior is not so much "detected and exploited" as "assumed never to happen." The detection, when it does happen, is generally treated as a warning (unfortunately).
I haven't looked at gcc's source code, but I wouldn't be too surprised to find that warning generation and optimization are not closely coupled in it, that is, that the decision to issue a specific warning and the decision to attempt a specific optimization do not affect one another.

And yes, it may look perverse and backwards, but indeed, the approach appears to be "We're not expecting UB and if there is, there is, it's not our fault, write proper code, pal!".

However, I do believe that the UB issue has been taken way too far in modern C(++) compilers. I understand that almost all of that crazy stuff is allowed per the standard, but I think we may want to update the standard and shift the focus from the compiler writer to the programmer. The language is still needlessly user-unfriendly, just as it was 25 years ago, but it doesn't have to stay that way. We do have better warnings now, which is great and helps, but why stop there?
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Secure? How?

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:There is no defence against exploitable hardware bugs like these (at least none that can be implemented before the hardware bugs are known); so for this case "several layers of protection" is just "several layers of pointless bloat that don't prevent exploitable hardware bugs".
Actually, the best mitigation for this sort of hardware bug (besides potentially switching memory types to something better in the future) has been around for a long time- ECC memory. And that one layer of protection is useful against several other attacks (and bugs) as well.
Yes. However, this is one area where Intel really annoys me - they use ECC support for product differentiation, forcing people to buy a server/workstation-class CPU and motherboard (at much higher prices, with very little choice at all for laptops) to get something that should be a minimum standard.
Rusky wrote:
Brendan wrote:This is mostly wishful thinking; because most of these bugs are impossible to detect or prevent (regardless of how much "babysitting bloat" you add). Apple's "goto fail" bug was a perfect example of this.
Just because some (not most) bugs (goto fail) are harder to protect against, does not mean that others (most buffer overflows/null pointer dereferences, heartbleed) are not easier to protect against with various layers that aren't actually bloat (ASLR, W^X, syscall filtering, language-assisted bounds checking, overflow checking). Further, I'm not just talking about runtime protection- like you say, there are language changes that can help here too. Each layer, even if insufficient on its own, makes it less likely that each individual vulnerability will amount to anything.
Yes - it's a compromise, and for some things (e.g. "no execute") the advantages definitely do outweigh the disadvantages. Possibly the hardest part of security is finding ways that people won't intentionally avoid and/or disable for various reasons (too annoying, too complex, too slow, etc).
Rusky wrote:
Brendan wrote:incredibly idiotic compilers like GCC that treat "undefined behaviour" (that it does detect) as an opportunity for optimisation rather than as an error condition.
Off topic, but I've got to say undefined behavior is not so much "detected and exploited" as "assumed never to happen." The detection, when it does happen, is generally treated as a warning (unfortunately).
Either way it's incredibly idiotic. Any software that gets data from the user and assumes it's valid (without thorough validation) was written by an incompetent moron; and this includes compilers (which take input/source code from the user).

Basically, a compiler has 2 responsibilities - correctly generating valid output for valid input; and correctly generating error messages for invalid input. If the latter is not done, then either it's the compiler's fault or the source language's fault.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
alexfru
Member
Posts: 1111
Joined: Tue Mar 04, 2014 5:27 am

Re: Secure? How?

Post by alexfru »

Brendan wrote:Basically, a compiler has 2 responsibilities - correctly generating valid output for valid input; and correctly generating error messages for invalid input.
Kind of. I mean, in an ideal world, sure. But the compiler can't even determine the possible/maximum call depth (because it doesn't know the inputs the compiled code will receive, and/or the code is too complex to "reason" about), so it can't generate an error about an imminent stack overflow.

Or would you draw the line between valid and invalid input somewhere else, like closer to where it currently is?
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Secure? How?

Post by Brendan »

Hi,
alexfru wrote:
Brendan wrote:Basically, a compiler has 2 responsibilities - correctly generating valid output for valid input; and correctly generating error messages for invalid input.
Kind of. I mean, in an ideal world, sure. But the compiler can't even determine the possible/maximum call depth (because it doesn't know the inputs the compiled code will receive, and/or the code is too complex to "reason" about), so it can't generate an error about an imminent stack overflow.

Or would you draw the line between valid and invalid input somewhere else, like closer to where it currently is?
"Valid input" is input that conforms to the rules for that input. For example, if you're designing a web page that asks for an email address then you might track down the rules for email addresses in RFC 822 (and also find the RFC that describes the rules for domain names), and then use the rules as the basis for your input validation.

For a C compiler, valid input is input that conforms to the rules of the C language.

Valid input doesn't mean the code does what the programmer intended. For example, your source code might be perfectly valid C that describes "infinite recursion"; and the compiler's job should be to correctly convert that into target code (that crashes at run time due to running out of stack, because that's what your perfectly valid C asked for).


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
alexfru
Member
Posts: 1111
Joined: Tue Mar 04, 2014 5:27 am

Re: Secure? How?

Post by alexfru »

Brendan wrote:For a C compiler, valid input is input that conforms to the rules of the C language.

Valid input doesn't mean the code does what the programmer intended. For example, your source code might be perfectly valid C that describes "infinite recursion"; and the compiler's job should be to correctly convert that into target code (that crashes at run time due to running out of stack, because that's what your perfectly valid C asked for).
Of course, valid programs aren't the same thing as programs doing exactly what they are intended to. That's clear.

However, this specific example (stack overflow) is, in my understanding, clearly UB, making the program invalid, despite being syntactically correct and irrespective of how close it otherwise comes to what the programmer intended it to do.

[You may be picky about the standard not mentioning call stacks and the associated stack overflows, but I can use a number of other UBs (signed integer overflow, modifying the same object more than once between sequence points, etc.) as examples of the point I'm trying to make.]

The rules of C establish undefined, unspecified and implementation-defined/specific behavior. You probably don't mean that undefined behavior and co. constitute valid input simply because the rules cover/call out specific instances of said behaviors. If you did mean these behaviors were valid on the grounds of simply being documented, your argument would reduce to more or less just checking the syntactic validity of the code being compiled, which the code is already checked for quite well, no problem there.

So, I'm confused as to what you are proposing to do or are unhappy about w.r.t. the C language and C compilers.

The C compiler can neither tell the programmer that their program isn't going to do what's intended nor spot all instances of undefined behavior at compile time. The causes for both are the same (insufficient input, inability to solve such and such math problems), the consequences (something not working as intended) are often the same. The only difference is exactly how the programmer fails to translate their ideas of the intended behavior into C code. In one case it could be just a logical mistake, in the other it is misuse of the language, attempting to do what the language standard says not to.

Again, what do you [suggest we] do about it?
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Secure? How?

Post by Brendan »

Hi,
alexfru wrote:So, I'm confused as to what you are proposing to do or are unhappy about w.r.t. the C language and C compilers.
What I'm suggesting is that:

a) C is bad because it has too many rules that are not enforceable by the compiler

b) GCC is bad because it doesn't detect or report "invalid input" where it could (C's rules that are enforceable by the compiler)

c) the combination of bad language and bad compiler unnecessarily increases the number of bugs and security vulnerabilities users are exposed to for no reason whatsoever

d) this is probably the single largest "root cause" of security vulnerabilities

e) with a better language and better compiler the majority of security vulnerabilities could've been avoided without any significant disadvantages


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Roman
Member
Posts: 568
Joined: Thu Mar 27, 2014 3:57 am
Location: Moscow, Russia

Re: Secure? How?

Post by Roman »

C has a different philosophy: it is a "cross-platform assembler". It is simple, shouldn't have any restrictions, and expects the programmer to think for himself.
"If you don't fail at least 90 percent of the time, you're not aiming high enough."
- Alan Kay
cmdrcoriander
Member
Posts: 29
Joined: Tue Jan 20, 2015 8:33 pm

Re: Secure? How?

Post by cmdrcoriander »

Brendan wrote: a) C is bad because it has too many rules that are not enforceable by the compiler
Agreed... :D
Brendan wrote: b) GCC is bad because it doesn't detect or report "invalid input" where it could (C's rules that are enforceable by the compiler)
Not sure I fully accept this, or that 'GCC treats undefined behaviour as an opportunity for optimization' - isn't it more along the lines of 'GCC can't really tell if there's undefined behaviour in a lot of situations so it assumes it's never there, and then optimizes based on that assumption'?
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Secure? How?

Post by Brendan »

Hi,
cmdrcoriander wrote:
Brendan wrote: a) C is bad because it has too many rules that are not enforceable by the compiler
Agreed... :D
Brendan wrote: b) GCC is bad because it doesn't detect or report "invalid input" where it could (C's rules that are enforceable by the compiler)
Not sure I fully accept this, or that 'GCC treats undefined behaviour as an opportunity for optimization' - isn't it more along the lines of 'GCC can't really tell if there's undefined behaviour in a lot of situations so it assumes it's never there, and then optimizes based on that assumption'?
I say the glass is half full, you complain because you think the glass is half empty. Who cares? The fact remains that the glass is not full in either case.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
bewing
Member
Posts: 1401
Joined: Wed Feb 07, 2007 1:45 pm
Location: Eugene, OR, US

Re: Secure? How?

Post by bewing »

Rusky wrote: in case any one type is exploitable (because they all are in some valid situation or another)
I completely disagree with this. A well designed OS running on well designed hardware will not be the tiniest bit exploitable. Sandboxes are not impossible. They are not even particularly theoretically complex. There is no reason whatsoever why any binary blob in userland should ever be able to gain privilege. The only reason we have exploits at all is that the OSes available now are crap, and the hardware they are running on is crap.

I think people are misguided in putting any focus at all on the details of the binary blob. It's irrelevant. Hardware glitches can cause any opcode run by the CPU to produce any output whatsoever. It shouldn't matter. An ASM program can run any sequence of opcodes at any time -- and it shouldn't matter. Looking at the compiler is a red herring. Looking at the language is a red herring. It's all irrelevant.

Until we have a well-built CPU to program that is intrinsically secure, all this talk about security is a waste of breath. Securing some insecure hardware is a losing battle, and not worth the time spent creating all the terribly clever stratagems.

And nothing comes for free. These compiler & etc. strategies add bloat, or add complexity, or slow the code down, or all of the above. All to fight a battle that's lost even before it begins. Foolishness.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Secure? How?

Post by Rusky »

Talk to me when you have your perfect CPU and operating system designed beyond "not be the tiniest bit exploitable."
alexfru
Member
Posts: 1111
Joined: Tue Mar 04, 2014 5:27 am

Re: Secure? How?

Post by alexfru »

Brendan wrote: a) C is bad because it has too many rules that are not enforceable by the compiler
True. IOW, the compiler can help only so much with broken code.
Brendan wrote: b) GCC is bad because it doesn't detect or report "invalid input" where it could (C's rules that are enforceable by the compiler)
Like what? I know that several years ago it didn't warn about indexing an array with an invalid (too large) index in code like "int a[10]; a[11] = 1;". I also know that it doesn't always spot things like "i = i++;" (a slightly more complex expression will cause it to miss the problem). What else?
Brendan wrote: c) the combination of bad language and bad compiler unnecessarily increases the number of bugs and security vulnerabilities users are exposed to for no reason whatsoever


For a long time I thought it was mainly a problem of teaching/learning the language properly, and of the availability of good books and articles. The language is clearly much less intuitive than others, less well defined than an assembly language / CPU architecture, and it makes math even less humane.

Brendan wrote: d) this is probably the single largest "root cause" of security vulnerabilities


I'm not sure if it's the largest. You have to be pretty much in a paranoid mode when writing or fixing security-sensitive code, and that's not a typical mindset for most software developers. We also tend to overcomplicate things to the point where it becomes hard not to miss an important edge case, and hard not to get overwhelmed by the amount of code we're dealing with. From my experience with Windows code I can tell that missing/insufficient/incorrect checks and validation were among the top issues implementation-wise. Even decent C/C++ coders will, from time to time, forget to check this or that.

Brendan wrote: e) with a better language and better compiler the majority of security vulnerabilities could've been avoided without any significant disadvantages


You'd have to trade some performance to make C well/better defined.
alexfru
Member
Posts: 1111
Joined: Tue Mar 04, 2014 5:27 am

Re: Secure? How?

Post by alexfru »

Roman wrote:C has a different philosophy: it is a "cross-platform assembler". It is simple and shouldn't have any restrictions and expects the programmer to think himself.
Not quite. In an assembler you never have implementation-defined type sizes or undefined behavior (some instructions may have undefined results, or results defined in some interesting way, but that's as far as it goes). In C you have both.

What's more, C is not just an advanced macro language on top of your regular assembler. There's no guarantee that a C compiler will consistently do on every architecture what you'd expect. Let's go for an example.

I remember one peculiar bug. The program was crashing and it had something like this in its code:

Code:

// irrelevant stuff above...
unsigned var, pos;
var = ...;
if (condition1)
  pos = ...;
if (var & (1u << pos)) // crash around here
  printf("blah!\n");
// irrelevant stuff below...
It turned out that the pos variable wasn't always initialized (it wasn't when condition1 was false). And when it wasn't, the program would sometimes crash. If you can't yet see how that's possible, consider a combination of the BT (with a memory operand) and JNC instructions implementing if (var & (1u << pos)). You'd normally just expect (1u << pos) to yield a bogus mask, which would then be applied to var, and printf() would then fire at unexpected times. And oftentimes that is what you'd get with such a bug: wrong output without a crash.

Another odd thing that you'd normally not see in an assembler is reordering of instructions that access memory.

The programmer does have to think, as you write. But they can't bring too many assumptions about C or the compiler, and they clearly can't expect C to behave like an assembler. And they can't think in a vacuum. They need to know precisely what is not OK in C, and how much more harmful it can be in C than in hand-written assembly code (if you used SHL + AND + JZ instead of BT + JNC, there would be no crash).

And there are many more no-no's in C than in assembly. Things aren't as rosy as you make them sound with respect to restrictions. The programmer has to knowingly restrict themselves in order to avoid C-specific traps that are typically nonexistent in assemblers.
alexfru
Member
Posts: 1111
Joined: Tue Mar 04, 2014 5:27 am

Re: Secure? How?

Post by alexfru »

bewing wrote: I completely disagree with this. A well designed OS running on well designed hardware will not be the tiniest bit exploitable. Sandboxes are not impossible. They are not even particularly theoretically complex. There is no reason whatsoever why any binary blob in userland should ever be able to gain privilege. The only reason we have exploits at all is that the OSes available now are crap, and the hardware they are running on is crap.
There are non-hardware privileges as well. If your web browser, a 3rd-party app, your e-mail program or your bank's website doesn't handle sensitive information properly (passwords and the like being sent as clear text), your CPU and your OS can't help much. Your money will be stolen. Your nude photos will leak. Your political, religious and sexual views will find a broader audience than you may want. It may cost you your family, friends and job.
bewing wrote: I think people are misguided in putting any focus at all on the details of the binary blob. It's irrelevant. Hardware glitches can cause any opcode run by the CPU to produce any output whatsoever. It shouldn't matter. An ASM program can run any sequence of opcodes at any time -- and it shouldn't matter. Looking at the compiler is a red herring. Looking at the language is a red herring. It's all irrelevant.
Right, we all are going to die.
bewing wrote: Until we have a well-built CPU to program that is intrinsically secure, all this talk about security is a waste of breath. Securing some insecure hardware is a losing battle, and not worth the time spent creating all the terribly clever stratagems.

And nothing comes for free. These compiler & etc. strategies add bloat, or add complexity, or slow the code down, or all of the above. All to fight a battle that's lost even before it begins. Foolishness.
I'm afraid an "intrinsically secure CPU" is also a red herring.

Suppose I build a PC system that has no known hardware or software defects (meaning all of the potential bugs have been considered in design and implementation, and checked for using code reviews and various testing methods) and all is great. Would you pay 10x-100x more for it?

Are you never going to connect it to any kind of network or the Internet? Because once you connect, you can no longer guarantee the security of the whole system your PC is part of.