writing secure software

Programming, for all ages and all languages.
Perica
Member
Posts: 454
Joined: Sat Nov 25, 2006 12:50 am

writing secure software

Post by Perica »

..
Last edited by Perica on Tue Dec 05, 2006 9:33 pm, edited 1 time in total.
Candy
Member
Posts: 3882
Joined: Tue Oct 17, 2006 11:33 pm
Location: Eindhoven

Re:writing secure software

Post by Candy »

Wouldn't say it's normally humanly possible, but it is most certainly technically possible.

If the hardware is impenetrable, with good software, the system is infallible.

There are bugless programs. They are just hard to make, and there aren't many people willing to invest the long hours it takes to actually get there.


The more common ways to get to a secure program are:

1. Make sure your system is internally always consistent (check each function for preconditions/postconditions, and test them too). If you have validated this:
2. Make sure it can handle ANY POSSIBLE external input with a sane response. Make a state machine and ensure that there is no bypass route. Do NOT put in any easter eggs or shortcuts-for-yourself, since they are the things that bite you in the back later on. Also, don't assume that people won't enter something of a size you didn't expect; expect everything possible. Expect a user to type too fast for your keyboard interrupt handler to cope with, expect the keyboard buffer to fill up, expect the network to overflow, collisions to happen, users doing unimaginable things, and somebody accidentally still sending a null pointer in that one place you thought it was impossible. If possible, also check all non-null pointers for validity, etc. (A minimal sketch of point 1 follows the list.)
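
A rough illustration of point 1 in C - just a sketch, and the keyboard-buffer names (kbd_push, KBD_BUF_SIZE and friends) are made up for the example, not taken from any real driver:

Code:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define KBD_BUF_SIZE 64

/* hypothetical keyboard ring buffer (pop side omitted) */
static uint8_t  kbd_buf[KBD_BUF_SIZE];
static unsigned kbd_head, kbd_count;

/* Postcondition: either the scancode is stored and the count grows by
   one, or the buffer was full and the scancode is dropped - never
   silently overwritten. */
static bool kbd_push(uint8_t scancode)
{
    assert(kbd_count <= KBD_BUF_SIZE);   /* internal consistency */

    if (kbd_count == KBD_BUF_SIZE)
        return false;                    /* expect the buffer to fill up */

    kbd_buf[kbd_head] = scancode;
    kbd_head = (kbd_head + 1) % KBD_BUF_SIZE;
    kbd_count++;

    assert(kbd_count >= 1);              /* postcondition holds */
    return true;
}

The caller has to handle the 'false' case, which is exactly the "keyboard buffer fills up" situation from point 2.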

Good luck, short answer.
AGI1122

Re:writing secure software

Post by AGI1122 »

I do think it's possible, but I personally think it depends on the size of the software. If it's just a small hello-world type thing, there aren't many places where it can be exploited, but with big software it's a lot harder, because you never know if a change will affect other areas of the software and cause undesired behaviour that people can take advantage of. Group collaboration is especially dangerous because of the differences in the code, and because in most cases things aren't discussed as well as they should be: usually everybody gets a piece, then they stick them together, which can lead to problems and incompatibilities that might not be apparent until somebody finds a way to hack it.

So while I do think it's possible, I think it's highly unlikely anyone will create a perfect piece of software that does not have any vulnerabilities.

Also, just because nothing has happened to the software doesn't always mean there isn't a vulnerability. A very good example of this is Windows: the only reason so many vulnerabilities are found is that it's so widely used. If you have software that isn't very popular, its vulnerabilities usually are never found or exploited, even though they exist.
distantvoices
Member
Posts: 1600
Joined: Wed Oct 18, 2006 11:59 am
Location: Vienna/Austria
Contact:

Re:writing secure software

Post by distantvoices »

It is possible, but - if you have time constraints to follow and the project head tells you that *it* should have been ready yesterday, and you therefore canna do all the testing and user-input checking you want - you canna create a perfect, unhackable product.

Then it becomes a question of iterative testing and reimplementing/implementing error checking, avoiding buffer/stack overflows and sorta. That's especially important in big software projects, where many people work together.

The worst case, as I say, is a project head putting a dagger to your @$$ and telling you to hurry up with the designing/coding stuff. Why else do you think Windows has that many bugs to resolve afterwards?
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image
zloba

Re:writing secure software

Post by zloba »

humans make mistakes, that is an axiom.
however, humans can also analyze and prove things logically:
(a and (a => b)) => b
the more complex the dependencies, the harder it is to analyze things.

the task is to make it easier to prove correctness, and to reduce the need to re-prove it to only those cases where the code in question changes.

when dependencies in the software become complex - such as 'functions without arguments' whose real arguments are (numerous) global variables of a program - you should expect trouble.
even if at a given moment you verify and test correctness, any change can break it.

software should be broken up into small objects with well-defined behaviour (interface, contract), to be accessed only through that interface. they can guard their internal consistency and correct usage, where possible. they can be verified independently and they stay correct, as long as you don't modify the interface.

consider buffer overflows:

if you use char[] buffers everywhere, you obviously have to worry about the length everywhere you write to a buffer. (plus, you are bound to the specific implementation that you use on the low level all over the place.) that adds to the complexity and greatly increases the chances of making a mistake.

if you use an object (say, ByteBuffer), assuming you have verified its implementation for correctness, there's just no way you can overflow such a buffer in normal use. you can also substitute a different implementation without affecting its use.
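
in C, that might look roughly like this - a minimal sketch, with the ByteBuffer fields and bb_append made up for illustration:

Code:

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* all writes must go through bb_append, never through the raw fields */
typedef struct {
    char  *data;
    size_t cap;   /* total capacity */
    size_t len;   /* bytes used so far; invariant: len <= cap */
} ByteBuffer;

bool bb_append(ByteBuffer *b, const char *src, size_t n)
{
    if (n > b->cap - b->len)   /* the length check lives here, once */
        return false;
    memcpy(b->data + b->len, src, n);
    b->len += n;
    return true;
}

since every write funnels through one verified function, callers cannot overflow the buffer, and you can swap the implementation without touching them.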

every time i hear of yet another buffer overflow in widely used, production software (pick any example), i keep wondering: what is wrong with these people? this is really getting old.
Schol-R-LEA

Re:writing secure software

Post by Schol-R-LEA »

Mu. The question itself is invalid; it does not parse any more sensibly that asking if a program could be 100% tomorrow morning.

Security is not a thing, or even a goal; it is a process. The very meaning of 'secure' is itself dependent on context, and will vary with the needs of the users, the needs of the developers, the environment(s) it runs in, the phase of the moon, etc.

On a theoretical level, Goedel's Theorem and Turing's solution to the Halting Problem imply that there is some way of compromising any program with a finite computational goal (or at least that is my understanding; I lack the math to fully verify it myself). Similarly, IIUC, any reversible transform (such as an encryption algorithm) can be reversed by brute force - for instance, by exhaustively trying keys - if either the original encryption algorithm or the matching decryption algorithm is known. It may not be technically feasible at a given point, but in principle it is always possible.
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany
Contact:

Re:writing secure software

Post by Solar »

Good point, Schol-R-Lea.

Yes, you can write software that doesn't suffer from buffer overflows. I think you can even write a (small) embedded application that doesn't malfunction no matter what the input, because all interfaces are known and can be 100% tested.

But on your average stock hardware? Hell, they can't even write a stable autodetect for PCI hardware in Linux, because scanning for a certain NIC crashes certain SCSI cards. Too many variables, not even speaking of hardware failure.

Every system can fail. Even the "unfailable" Ariane software failed because they reused a software component from the Ariane IV in the Ariane V. That piece of software was proven correct - but only for the input range possible on the Ariane IV. That wasn't a mistake in software, it was a mistake in procedure.

No, writing 100% secure software is not humanly possible. Murphy is watching you, and he was an optimist.
Every good solution is obvious once you've found it.
distantvoices
Member
Posts: 1600
Joined: Wed Oct 18, 2006 11:59 am
Location: Vienna/Austria
Contact:

Re:writing secure software

Post by distantvoices »

Ha! As I said. Have some leader telling the programmers to hurry up, and the programmers get in trouble and reuse olden software - and because of the lack of time to test it properly, it utterly craps out.

*shrugs* And by the way: saying it is not possible prevents you from making it possible. Or as some say:

The ones knowing it canna be done shall not hinder those doing it.

@solar: that's not the OS's or the careful driver programmer's fault, if a SCSI card decides to crap out when the PCI configuration space is scanned for certain device/vendor combos. Well - but I think it is the task of the OS to fetch all the devices upon boot-up and provide a searchable PCI tree, so that PCI devices can be checked for without touching the PCI config space. Hm. After all, it doesn't touch the general validity of your statement.

BTW: as eloquent and fluent as Schol-R-LEA's use of language usually is, his first sentence does not permit itself to be parsed cleanly. The 'that' gets in the way of my lingual neurons. They insist that there should be a 'than', but I might be wrong.
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany
Contact:

Re:writing secure software

Post by Solar »

beyond infinity wrote: saying it is not possible prevents you from making it possible.
I just said it's not humanly possible. Nothing keeps you from trying for a superhuman effort. ;D
beyond infinity wrote: that's not the OS's or the careful driver programmer's fault, if a SCSI card decides to crap out when the PCI configuration space is scanned for certain device/vendor combos.
One: don't ask me what really happens; I just know that even the latest Knoppix CDs crapped out when I still had that NIC and that SCSI controller alongside each other in my system. ;)

Two: that's part of the problem. Hardware is part of the equation. Your driver software might be correct and well-tested. The SCSI controller might be correct and well-tested. Then they switch to a new controller chip, run a full regression test on it, and consider it 100% compatible as far as they can tell - but your driver software craps out because you set a bit that the SCSI engineers never considered.

Too many variables. You don't really control your system, unless you build an embedded system without "open" interfaces.
Every good solution is obvious once you've found it.
distantvoices
Member
Posts: 1600
Joined: Wed Oct 18, 2006 11:59 am
Location: Vienna/Austria
Contact:

Re:writing secure software

Post by distantvoices »

Humanly possible, for God's sake, man! On this world, humans are prone to err in whatever they do, so what's up with that hairsplitting? :D

Now, considering the SCSI card and the NIC:

I myself own a decent aha2940 AU SCSI interface and a vast variety of NICs. The only combo which ever caused trouble was the natsemi NIC and the SCSI interface in certain PCI slots, because one of the two - or the driver of one of the two - didn't grasp IRQ sharing, and that's where the dog is buried most of the time.

This piece of **** of putting one IRQ line on four to five distinct devices can cause quite some headache. So writing a *100%* error-safe driver would consist of a. handling IRQ sharing or b. voodoo magic to predict which bloody PCI slot the f**king card is put in. Humans can lose their nerves. Why shall operating systems not be permitted to experience the same? *g*

But as long as I use my aha2940 AU with any rtl8139 or 3c59x NIC, I have never had any trouble with any drivers/distributions. Not with Knoppix, nor with Gentoo, nor with SuSE.
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image
zloba

Re:writing secure software

Post by zloba »

Solar wrote: Every system can fail. Even the "unfailable" Ariane software failed because they reused a software component from the Ariane IV in the Ariane V. That piece of software was proven correct - but only for the input range possible on the Ariane IV. That wasn't a mistake in software, it was a mistake in procedure.
i say it was a software mistake: if a piece of software assumes some conditions, then those assumptions (preconditions) become part of the interface of that piece of software. when you (mis)use it contrary to the interface, you can't expect it to function correctly.

the fact that, for whatever reason, the precondition wasn't explicitly stated in the first place, wasn't documented properly, and wasn't checked when plugging the software into the new system doesn't make it any less of an error than an uninitialized pointer.

false implies anything, and anything is exactly what happened.

"when you violate preconditions, anything can happen - a yellow horse can fly out of your screen" (c) my CS prof.
Schol-R-LEA

Re:writing secure software

Post by Schol-R-LEA »

beyond infinity wrote: BTW: as eloquent and fluent Schol-R-Lea's use of language usually is, his first sentence does not permit to be parsed cleanly. the 'that' comes into the way of my lingual neurons. They insist that there should be a 'than' but I might be wrong.
Oops. Yes, that ought to be a 'than'. I'm surprised I missed it, but then, the fact that errors can always appear unexpectedly is a part of what we're discussing, isn't it?
mystran

Re:writing secure software

Post by mystran »

Actually, Goedel's theorem does not say that it is impossible to write programs that can't be compromised by some input.

Say, a trivial Turing machine that never does anything but halt is trivially provable to always halt; it doesn't examine its input at all.
Similarly, any program that examines each byte of its input at most N times (for some finite N) will "trivially" halt for all possible finite inputs (see the sketch below).
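
For instance, a sketch in C of such a trivially-halting program (illustrative only) - it looks at each input byte exactly once, so the loop is bounded by the input length:

Code:

#include <stdio.h>

/* examines each input byte exactly once, so it provably halts
   for every finite input */
int main(void)
{
    int c, count = 0;
    while ((c = getchar()) != EOF)   /* one look per byte, no rescanning */
        count++;
    printf("%d bytes\n", count);
    return 0;
}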

The trouble comes from the fact that whether a program halts for a given input is, in the general case, undecidable. That is, it is possible to come up with programs for which no other (possible) program can figure out (in finite time) whether they halt or not. The big question, then, is whether human beings are capable of performing computations beyond those possible for a Turing-equivalent system. If we assume they are not, then it is possible to write programs for which it's impossible for humans to prove whether they halt (on some specific input) or not. And this is why security must be a process.

Dijkstra used to have the idea that it's stupid to write a program and then prove it correct. Instead, you should derive the program from the specification of the problem in such a way that the derivation of the program is the proof. In other words, you write the proof, and get the program as a side product. This way you end up with a program for which it is possible to prove it correct, and you (hopefully) avoid all the common pitfalls.

This is one of the reasons why functional programs are easier to prove correct. I'm not going to claim that it's actually easier to write a correct functional program, but functional programs are definitely easier to reason about (once they've been written), because each function can be reasoned about separately.

Of course, now that we are talking about proofs, let's make it clear that when a scientist says that something has been proven correct, what he really means is "there exists a proof and there is certain confidence that this proof is correct", because even with a proof there's a chance that the proof itself is in error. And with a proof of the proof there's still a (smaller) chance, and so on ad infinitum.

So, whether it's possible to write "correct" software ultimately depends on how you define "correct", since you can never reach full certainty of correctness in anything.
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany
Contact:

Re:writing secure software

Post by Solar »

mystran wrote: Dijkstra used to have the idea that it's stupid to write a program and then prove it correct. Instead, you should derive the program from the specification of the problem in such a way that the derivation of the program is the proof. In other words, you write the proof, and get the program as a side product.
Provided the specification is correct in the first place. 8)
Every good solution is obvious once you've found it.
dh

Re:writing secure software

Post by dh »

only if the software:
o Needed a password for EVERYTHING
o No internet connection AT ALL (no outside connection of any kind)!
o HEAVY user control and monitoring
o Integrated into the OS
o 24/7 security on all terminals

currently... maybe the US Pentagon fulfills even half of these (MAYBE).
here in Canada, we don't really care for "super security" unless dealing with money ;)