Null pointers and pointer safety

goku420
Member
Posts: 51
Joined: Wed Jul 10, 2013 9:11 am

Re: Null pointers and pointer safety

Post by goku420 »

Icee wrote:Static analysis undoubtedly makes software better, but the thread strikes me as an attempt to say that static analysis is sufficient to mark generic binary code "safe", which I tend to doubt.
This is true. There are bugs like https://gcc.gnu.org/bugzilla/show_bug.cgi?id=59659 where perfectly valid code leads to the generation of bad binary code (in that particular bug, GCC unrolled an array initialization, which resulted not only in excessive compilation time but also in an excessive binary size).
Sik
Member
Posts: 251
Joined: Wed Aug 17, 2016 4:55 am

Re: Null pointers and pointer safety

Post by Sik »

Two random thoughts:

1) You'll need to reanalyze the whole program the moment it dynamically loads or unloads any library. Which is a surprisingly common thing to do (especially when the library is optional and may come from another program, e.g. a plug-in or an API to communicate with said software).
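
For illustration, here is a minimal sketch of why that defeats ahead-of-time analysis (POSIX dlopen/dlsym; the plug-in file and symbol names are made up):

Code: Select all

#include <dlfcn.h>   // POSIX dynamic loader
#include <cstdio>

int main() {
    // Optional plug-in: absent on many installations.
    void *handle = dlopen("./plugin.so", RTLD_NOW);
    if (!handle) {
        std::printf("no plug-in found, carrying on without it\n");
        return 0;
    }
    // This code was not part of the binary that was analysed, so any
    // "safe" verdict issued before this point no longer covers the program.
    void (*init)() = reinterpret_cast<void (*)()>(dlsym(handle, "plugin_init"));
    if (init)
        init();
    dlclose(handle);
    return 0;
}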

2) Often programs are not unsafe on their own but in interaction with other programs.

Case in point about the latter: we once had three bots on an IRC channel that did random things, and one of those things was to panic on hearing the word "weegee" (dumb, I know). All was fine until somebody decided to make one of those bots call out the other two whenever it panicked; then one of us said "weegee", the other two promptly repeated the word, the first bot panicked again and called them out again, and... you get the idea. Let's just say they went into full-blown panic and we had to remove the bots just to break the loop.

That was an infinite loop across three different programs running on three different computers, which only happened because, after a bunch of modifications over time, all the worst possible conditions ended up being met. That's... quite the achievement. And while this sounds contrived, it's not really far from the way computers full of software from different vendors tend to lock up: each individual program is fine on its own, but the combination makes the whole thing explode.

Good luck verifying that software is safe when stuff like that can happen.
LtG wrote:But realistically I'm not sure if any of that is worth the effort when designing a new language that doesn't suck would go a long way toward solving that issue _and_ make programming in general a lot saner.
Which again is not gonna matter at all unless you can literally force programmers to use said language and nothing else. Remember, this isn't from the viewpoint of who's making the buggy software, but from the viewpoint of who has to supervise the different software running in the system.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Null pointers and pointer safety

Post by Brendan »

Hi,
Icee wrote:Some replies here are missing an essential thing: in reality you don't get to analyze a snippet of code. You have to decide whether a given program (as in an isolated service) is "safe" or not as a whole. This does not really boil down to proof by composition: a program composed of multiple "safe" pieces is not necessarily "safe". I am putting "safe" in irony quotes here because even giving a formal definition of what "safe" means is a huge task by itself, with PhD theses on the subject popping up every now and then.
Actually, it's the opposite. If you prove each piece is safe on its own then you've proven that the whole program is safe; but even if some pieces are not safe on their own, the whole program can still be safe.

For a simple example:

Code: Select all

#include <stdint.h>

int myArray[256];

int foo(uint8_t a, uint8_t b) {
    return 1000 / (a+b);
}

void bar(uint8_t a, uint8_t b, int c) {
    myArray[a+b] = c;
}

void baz(void) {
    foo(1, 2);
    bar(1, 2, 3);
}
In this case it's easy to prove that "foo()" is unsafe on its own (possible division by zero) and easy to prove that "bar()" is unsafe on its own (potential array index out of bounds); but the whole program is still safe.

Of course these cases can also be a code maintenance disaster, because an unsuspecting programmer could write code that does "foo(0, 0);" or "bar(200, 200, 0);" and make the whole program unsafe. It's like laying little traps for unsuspecting victims: a programmer needs to know the internal implementation details of "foo()" or "bar()" to use them, and can't just treat functions as "opaque black boxes". For larger programs written by a team of people you want to avoid that; which means that in practice you want to insist on "all functions are safe in isolation". Shared libraries (where the internal implementation of the shared library can change in future) are just another example of wanting "all functions are safe in isolation" to avoid a code maintenance disaster.
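
As a hedged sketch (the recovery policy here, returning early or a fallback value, is only one possible choice), "safe in isolation" versions of the functions above might look like this:

Code: Select all

#include <stdint.h>

int myArray[256];

// Safe in isolation: every possible input has defined behaviour.
int foo(uint8_t a, uint8_t b) {
    int divisor = a + b;    // promoted to int, range 0..510
    if (divisor == 0)
        return 0;           // explicit policy instead of division by zero
    return 1000 / divisor;
}

// Safe in isolation: out-of-range indices are rejected here, instead of
// relying on every caller to know that myArray only has 256 elements.
void bar(uint8_t a, uint8_t b, int c) {
    int index = a + b;      // range 0..510
    if (index >= 256)
        return;
    myArray[index] = c;
}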

Note that it can still be possible (using whole program analysis) to prove that a program is safe despite "unsafe in isolation" functions. In this case you still analyse functions in isolation, but instead of merely categorising each function as "proven safe" or "not proven safe" you have to use a much more elaborate/complex scheme where you determine "proven safe under these conditions" (e.g. the "foo()" in the example above would be "proven safe if a+b != 0"). This also means that you need depth first analysis, so that you can merge the conditions for child functions into the parent.

For example:

Code: Select all

void baz(uint8_t a) {
    foo(a, 2);      // Note: "foo()" is proven safe if a+2 != 0,
                    //    therefore this is safe when a != -2,
                    //    but a is unsigned and must be >= 0, so a can never be -2,
                    //    so this must be safe
    bar(a, 2, 0);   // Note: "bar()" is proven safe if a+2 < 256,
                    //    therefore this is safe when a < 254,
                    //    therefore "baz()" is only safe when a < 254
}
This also means that you can't handle recursion (because you can't do "depth first analysis" in that case). However...

For recursion, if you assume that the function is always safe, you can prove whether the function is always safe (or not always safe). For example, this function can be proven to be always safe:

Code: Select all

void baz(uint8_t a) {
    uint8_t temp;
    foo(a, 2);      // Proven always safe
    if(a < 100) {   // Proven always safe
        temp = a+8; // Proven always safe
        baz(temp);  // Safe if "baz()" is always safe
    }
}
For another example, this function cannot be proven to be always safe:

Code: Select all

void baz(uint8_t a) {
    uint8_t temp;
    bar(a, 2, 0);    // Only safe if a < 254
    if(a < 100) {    // Proven always safe
        temp = a+50; // Proven always safe
        baz(temp);   // Safe if "baz()" is always safe
    }
}
What this means is that there's still a chance of false negatives where "recursion and not proven to be always safe" is involved. However...

You can take it a step further, like this:

Code: Select all

void baz(uint8_t a) {
    uint8_t temp;
    bar(a, 2, 0);    // Only safe if a < 254
    if(a < 100) {    // Proven always safe
        temp = a+50; // Proven always safe
        baz(temp);   // Safe if "baz()" is safe
                      //    and "baz()" might be safe if temp < 254
                      //        therefore "baz()" might be safe if a+50 < 254
                      //        therefore "baz()" might be safe if a < 204
                      //    and "baz()" might be safe if temp < 204
                      //        therefore "baz()" might be safe if a+50 < 204
                      //        therefore "baz()" might be safe if a < 154
                      //    and "baz()" might be safe if temp < 154
                      //        therefore "baz()" might be safe if a+50 < 154
                      //        therefore "baz()" might be safe if a < 104
                      //    and "baz()" might be safe if temp < 104
                      //        therefore "baz()" might be safe if a+50 < 104
                      //        therefore "baz()" might be safe if a < 54
                      //    and "baz()" might be safe if temp < 54
                      //        therefore "baz()" might be safe if a+50 < 54
                      //        therefore "baz()" might be safe if a < 4
                      //    and "baz()" might be safe if temp < 4
                      //        therefore "baz()" might be safe if a+50 < 4
                      //        therefore "baz()" might be safe if a < -46
                      //    but a is unsigned and can not be < -46
                      //        therefore "baz()" is proven never safe
    }
}
Now...

Did you notice that we had no reason to care if "baz()" halts or not for any of these examples? This is because, even for the "more complex/advanced than is beneficial" analysis, the halting problem is still completely irrelevant.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
goku420
Member
Posts: 51
Joined: Wed Jul 10, 2013 9:11 am

Re: Null pointers and pointer safety

Post by goku420 »

It's true that a programmer violating the preconditions of a function (breaking the contract) does not make the entire program unsafe. But the API can be rewritten to reduce programmer errors. For example, if foo takes the two coordinates of a point, it could instead be rewritten to take a Point that has an explicit constructor (and, by definition, a class like Point would maintain its invariants). Then there is no way for the programmer to screw up in this case.
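
For illustration, a minimal C++ sketch of that idea; the invariant chosen here (mirroring foo's divide-by-zero precondition) is an assumption made up for the example:

Code: Select all

#include <stdexcept>

// Every Point that exists satisfies the invariant, because the only way
// to construct one is through a constructor that enforces it.
class Point {
public:
    explicit Point(int x, int y) : x_(x), y_(y) {
        if (x + y == 0)     // the precondition foo() used to rely on
            throw std::invalid_argument("Point: x + y must be nonzero");
    }
    int x() const { return x_; }
    int y() const { return y_; }
private:
    int x_, y_;
};

// foo() no longer has a precondition that a caller can violate.
int foo(const Point &p) {
    return 1000 / (p.x() + p.y());
}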

Really, the halting problem comes in from a compiler developer's point of view, when issuing diagnostics for cases like guarded if statements or undefined behavior involving templates: it would be impossible for the compiler to detect the error and issue a diagnostic in the general case.
Icee
Member
Posts: 100
Joined: Wed Jan 08, 2014 8:41 am
Location: Moscow, Russia

Re: Null pointers and pointer safety

Post by Icee »

@Brendan
I certainly agree with your examples. Unfortunately, going from examples to an actual sound verification system is nontrivial, and you still haven't provided any actual ideas on how this can be automated. And you still haven't defined "safe", either.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Null pointers and pointer safety

Post by Brendan »

Hi,
Icee wrote:@Brendan
I certainly agree with your examples. Unfortunately, going from examples to an actual sound verification system is nontrivial, and you still haven't provided any actual ideas on how this can be automated. And you still haven't defined "safe", either.
I've never claimed it'd be easy, and I have provided some ideas (e.g. "each function analysed in isolation, depth first, with a work-around for recursion"). "Safe" is defined by the specification of a language (e.g. for C, it means avoiding anything that the language's specification says causes undefined behaviour), and (until I design my language) it is not something I should define.

Also note that (in general) I am not an advocate of "safe" (e.g. managed languages, etc.) - I'm an assembly language programmer at heart (which probably makes me an advocate of one of the "least safe" languages). The objective of my earlier research was to reduce the time it takes to discover bugs (not safety as such); and my main objective here is/was to help people realise that the "halting problem" is not a relevant problem.

For fun, let's define "halting problem" like this: a false belief that something is not possible, that halts progress towards finding a solution that is possible.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Icee
Member
Posts: 100
Joined: Wed Jan 08, 2014 8:41 am
Location: Moscow, Russia

Re: Null pointers and pointer safety

Post by Icee »

Brendan wrote:[...] and my main objective here is/was to help people realise that "halting problem" is not a relevant problem.
Since we are now speaking about the C language and undefined behavior in C programs, I can point you to the C11 standard, section 6.8.5 paragraph 6.
An iteration statement whose controlling expression is not a constant expression, that performs no input/output operations, does not access volatile objects, and performs no synchronization or atomic operations in its body, controlling expression, or (in the case of a for statement) its expression-3, may be assumed by the implementation to terminate.
Shall static analysis ignore this edge case? If not, we are looking at exactly the halting problem. You might argue that a nonterminating loop is not a problem because it will execute code that is otherwise "safe" forever. This is not necessarily true, because compiler optimizations that expect this criterion to be met may generate something different (maybe not even a loop) in machine code (see this article for a related example).
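
To make the edge case concrete, here is a sketch of a loop that falls under that paragraph (the classic Collatz function; the function name is mine):

Code: Select all

// Non-constant controlling expression, no I/O, no volatile accesses, no
// atomics: per C11 6.8.5p6 the implementation may assume this loop
// terminates. Whether it actually terminates for every n > 0 is the open
// Collatz conjecture (unsigned wrap-around aside), and for n == 0 it
// demonstrably loops forever.
unsigned collatz_steps(unsigned n) {
    unsigned steps = 0;
    while (n != 1) {
        n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
        ++steps;
    }
    return steps;
}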
davidv1992
Member
Posts: 223
Joined: Thu Jul 05, 2007 8:58 am

Re: Null pointers and pointer safety

Post by davidv1992 »

Please do note the "may" in this portion of the standard. Essentially, this is written so that optimization in compilers can be more aggressive. This specific paragraph does not pose any explicit requirement on implementations to actually use such information.

As to a definition of safety, there is actually an accepted definition (at least in the academic literature): a program can be safely executed for an arbitrary (but finite) number of steps without touching memory cells it has not previously allocated, and without accessing memory cells in a manner inconsistent with the type of information they hold. This is very similar to the notion of safety for a typed computational model, where safety is defined as the validity of type judgements on results after any finite number of steps. In fact, with complex enough type systems on top of them, even assembly languages can be augmented with type information that provides enough information to show such safety. This simply takes the form of typing judgements alongside the regular machine code. Regular execution would just discard the typing information, but an executing party could verify it and prove the program safe in that manner. The paper "A Very Modal Model of a Modern, Major, General Type System" by A. W. Appel, P. Melliès, C. D. Richards and J. Vouillon provides an analysis of a type system suitable for use in such situations.

In fact, Java bytecode pretty much takes such an approach to proving the safety of programs. The bytecode provides enough typing information that the runtime can verify the code is safe (barring any bugs in the runtime, of which there have been very many). The design of Java also points out the big problem in creating such a system: it can be very hard to make the functions that interface between verified safe code and code that isn't behave properly. A practical problem with doing this for type systems more flexible than Java's is that the required type systems on the compiled code can become very complex, which makes the process of designing them and proving them correct rather tricky. Even though proof formalization in tools like Coq can help with this, it would remain a massive undertaking.
Icee
Member
Posts: 100
Joined: Wed Jan 08, 2014 8:41 am
Location: Moscow, Russia

Re: Null pointers and pointer safety

Post by Icee »

davidv1992 wrote:Please do note the "may" in this portion of the standard. Essentially, this is written so that optimization in compilers can be more aggressive. This specific paragraph does not pose any explicit requirement on implementations to actually use such information.
True, but in the general case of any code and unspecified compiler "may" means "will", does it not?
davidv1992 wrote:A practical problem with doing this for more flexible type systems than that of java is also that the required type systems on the compiled code can become very complex, making the proces of designing and proving them correct rather tricky. Even though proof formalization in tools like coq can help in this, it would remain a massive undertaking.
I made this point somewhere up the thread, too. Coq is in itself a huge thing to master, and in the context of low-level code verification I'd expect better (= faster achieved) results through Isabelle/HOL, possibly with some assistance from L3/Sail for the actual bytecode/machine code semantics. Since we are (hooray!) giving links to papers, I'd humbly suggest taking a look at this recent CertiKOS paper.
davidv1992
Member
Posts: 223
Joined: Thu Jul 05, 2007 8:58 am

Re: Null pointers and pointer safety

Post by davidv1992 »

Icee wrote:
davidv1992 wrote:Please do note the "may" in this portion of the standard. Essentially, this is written so that optimization in compilers can be more aggressive. This specific paragraph does not pose any explicit requirement on implementations to actually use such information.
True, but in the general case of any code and unspecified compiler "may" means "will", does it not?
No it does not, and that is precisely why it is so important. Standards documents use a very specific definition of "may", namely "may" == "is allowed to, but does not have to". So from the point of view of someone writing code, the code had better do what you want regardless of whether the compiler uses this freedom or not. The reason (in this particular case) that the compiler is given this freedom is to let it remove code that has no effect on the program other than (potentially) making it non-halting, which makes dead code removal a lot more effective in practice.
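
A hedged sketch of the kind of code this freedom targets (the function is contrived, and what a given compiler actually emits depends on flags and version):

Code: Select all

// For odd n this loop never terminates, and it has no side effects.
// Because the implementation may assume the loop terminates, an optimizer
// is allowed to delete it and compile the whole function down to
// "return 1;" -- even for the odd n where the unoptimized version would
// spin forever.
int drain(unsigned n) {
    while (n != 0)
        n -= 2;
    return 1;
}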
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Null pointers and pointer safety

Post by Brendan »

Hi,
Icee wrote:
Brendan wrote:[...] and my main objective here is/was to help people realise that "halting problem" is not a relevant problem.
Because we are now speaking about the C language and undefined behavior in C programs, I can now point you to the C11 standard, section 6.8.5 paragraph 6.
An iteration statement whose controlling expression is not a constant expression, that performs no input/output operations, does not access volatile objects, and performs no synchronization or atomic operations in its body, controlling expression, or (in the case of a for statement) its expression-3, may be assumed by the implementation to terminate.
Shall static analysis ignore this edge case? If not, we are looking at exactly the halting problem.
You're looking at the C standard saying that (under very specific circumstances) the halting problem can be officially ignored ("may be assumed by the implementation to terminate", without caring whether it actually terminates or not).
Icee wrote:You might argue that a nonterminating loop is not a problem because it will execute code that is otherwise "safe" forever.
Yes; and I might also argue that (e.g.) a DHCP server that does terminate is a buggy mess that will break the LAN (but is still safe despite being a buggy mess that terminates); and that (e.g.) a text editor that can be proven to terminate (rather than waiting for the user to explicitly tell it to exit, in a manner that is not predictable) is a buggy mess that may lose the user's unsaved work (but is still safe despite being a buggy mess that terminates); and that (e.g.) every application, GUI, service and device driver falls into the same "buggy mess if it can be proven to terminate" category.
Icee wrote:This is not necessarily true because of compiler optimizations that expect this criterion to be met, generating something different (maybe not even a loop) in machine code (see this article for a related example).
Are there any real examples (ones that don't merely show that a compiler is bad, don't merely show that the C language's specification neither excludes nor requires bad compilers, and that are hopefully concise and not buried in pages of babble)?


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Icee
Member
Posts: 100
Joined: Wed Jan 08, 2014 8:41 am
Location: Moscow, Russia

Re: Null pointers and pointer safety

Post by Icee »

Brendan wrote:You're looking at the C standard saying that (under very specific circumstances) the halting problem can be officially ignored ("may be assumed by the implementation to terminate (without caring if it actually does or doesn't terminate)").
It can be ignored by a compiler, but not by an analysis tool.
Brendan wrote:Are there any real examples (that don't merely show that a compiler is bad, and don't merely show that the specification for the C language neither excludes nor requires bad compilers; which are hopefully concise and not buried in pages of babble)?
I'm afraid I won't be able to come forward with an example that you will accept as such. I guess we both made our points, so I'm going to stop posting to this thread for now.
simeonz
Member
Posts: 360
Joined: Fri Aug 19, 2016 10:28 pm

Re: Null pointers and pointer safety

Post by simeonz »

This topic has overflowed, but static verification triggers my interest every time; it makes me want to read a little more about it. (Superficially.)

Here is an article I found just now. The authors appear to claim that the halting problem is mostly decidable in polynomial time (with probability one). For an analogy, the simplex algorithm is also mostly polynomial, I believe (although the simplex output is continuous in the input and permits perturbation). The way I think about this result is that, although the description of all halting programs is too big to fit (as a program) on a finite portion of a Turing tape, the description of most halting programs is compact enough. This also explains why experience reinforces the intuition that the programs one encounters appear easily verifiable by a compact procedure. Most of them indeed are.

One of the authors mentions the article in this SO post, which is how I found it.

As a side note... specifying side effects of functions by hand would be a serious effort. Input-output specifications could be done, especially with an advanced IDE that suggests properties, but side effects can be a fluctuating mess. On the positive side, as was already mentioned, some constraints are indeed implicit in the language specification.
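
As a rough sketch of what programmer-written input-output specifications might look like (plain runtime assertions here; annotation languages such as ACSL for Frama-C express similar contracts in a statically checkable form):

Code: Select all

#include <cassert>

// The programmer writes the specification down; a verifier could try to
// discharge it statically, and assert() at least checks it on every run.
int divide_1000_by(int divisor) {
    assert(divisor != 0);                       // precondition (input spec)
    int result = 1000 / divisor;
    assert(result >= -1000 && result <= 1000);  // postcondition (output spec)
    return result;
}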
StudlyCaps
Member
Posts: 232
Joined: Mon Jul 25, 2016 6:54 pm
Location: Adelaide, Australia

Re: Null pointers and pointer safety

Post by StudlyCaps »

So I made the point that you can't statically analyse every program and come up with a safe/unsafe result, because this is analogous to the halting problem. This was a direct reply to somebody saying that it was indeed possible to do. I never tried to say that static analysis is not useful, or that you couldn't create a system that (for example) never executes unsafe code; just that a system which correctly identifies all possible inputs as faulting or not is not possible to create.

I get the argument that you can just say that it doesn't matter, and that applications are either verified safe or blocked. I can imagine circumstances where it's more important to never fault than to be able to execute code which an analyser doesn't understand: military or heavy-vehicle control systems, medical equipment, that sort of thing. However, for a general-purpose OS, that's a damn high barrier to entry for application programmers.

The situation I was thinking of when I initially posted is: given a pointer to a page of virtual memory, is it safe to dereference the pointer? That depends on whether the page is mapped, which in turn depends on system state that could have been changed literally anywhere else in the application. So in some cases you would need to comprehensively trace system state on all code paths that lead to the dereference, to determine whether that page was ever mapped, whether it was unmapped, and whether it's mapped right now. This would seem to be considerably more complex than determining the range of values a variable might hold.
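
A minimal sketch of that situation (Linux-flavoured mmap/munmap, assuming 4 KiB pages; error handling trimmed for brevity):

Code: Select all

#include <sys/mman.h>

static int *page;   // global state: which code path touched it last?

void grow() {
    // Map one page of anonymous memory (mmap returns MAP_FAILED on
    // error; the check is omitted to keep the sketch short).
    page = static_cast<int *>(mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
}

void shrink(bool low_memory) {
    if (low_memory)
        munmap(page, 4096);   // the page may now be unmapped...
}

void touch() {
    *page = 42;   // ...so whether this is safe depends on which of the
                  // functions above ran, in what order, on every path here
}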
goku420
Member
Posts: 51
Joined: Wed Jul 10, 2013 9:11 am

Re: Null pointers and pointer safety

Post by goku420 »

StudlyCaps wrote:The situation I was thinking of when I initially posted is: given a pointer to a page of virtual memory, is it safe to dereference the pointer? That depends on whether the page is mapped, which in turn depends on system state that could have been changed literally anywhere else in the application. So in some cases you would need to comprehensively trace system state on all code paths that lead to the dereference, to determine whether that page was ever mapped, whether it was unmapped, and whether it's mapped right now. This would seem to be considerably more complex than determining the range of values a variable might hold.
If the program is in an unstable state (i.e., you invoked undefined behavior), the trace system cannot be trusted: for example, what good is a stack trace if you've corrupted the stack? The overhead of creating a trace system that is unlikely to be affected by trashed memory is not worth the single use case of checking whether a pointer is safe to dereference; that should be handled by defensive coding.

Even in "safe" languages like Java or C#, NullPointerException and checking if an object is not NULL (due to C#'s "as" keyword, analogous to C++'s dynamic_cast) are still frequent despite the fact that pointers are not exposed to the programmer and the heavy-duty built-in debugging tools. What this illustrates to me is that safe pointers are solving the wrong problem.