
executable validation issues

Posted: Thu May 13, 2004 1:45 am
by Candy
On the subject of protecting a user from malicious applications, as Windows does with its VeriSign-signed apps (a scheme which is both hackable and little used): what about including an app-verify function that asks a special appserver at the vendor's site to verify the integrity of the executable (requesting the description for a given hash of the executable; if it matches, the server reports that the app is what it claims to be), and sandboxing the app (which can be turned off) when it doesn't match?

Is this a plausible and workable idea? Aside from the need for a working internet connection, would it be functional?

As an aside, using the domain of an application in the checking would also make it possible to include whois info on the domain, such as the actual company standing behind it, or a monitoring company that offers the hosting (and can thus also be held accountable for malpractice such as aiding trojan distribution).
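The flow Candy proposes can be sketched in a few lines. This is a toy Python sketch, not anything from the thread: the appserver's hash directory is mocked as a local dict, and all names are made up; a real client would query the server over a secure channel.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hash an executable image; the server's directory is keyed on this."""
    return hashlib.sha256(data).hexdigest()

# Mocked appserver directory: hash of the genuine binary -> its description.
# In the real scheme this lookup happens on the vendor's appserver.
KNOWN_HASHES = {
    sha256_of(b"genuine mediaplayer build 1.0"): "MediaPlayer 1.0",
}

def decide(executable: bytes) -> str:
    """Run normally when the hash is known, sandbox otherwise."""
    description = KNOWN_HASHES.get(sha256_of(executable))
    if description is not None:
        return f"run normally ({description})"
    return "run sandboxed (hash unknown)"

print(decide(b"genuine mediaplayer build 1.0"))  # run normally (MediaPlayer 1.0)
print(decide(b"tampered binary"))                # run sandboxed (hash unknown)
```

The sandbox fallback matters: an unknown hash doesn't block the program, it just runs with fewer privileges, which is the "can be turned off" part of the proposal.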

Re:executable validation issues

Posted: Thu May 13, 2004 4:00 am
by Brendan
Hi,
Candy wrote: On the subject of protecting a user from malicious applications, as Windows does with its VeriSign-signed apps (a scheme which is both hackable and little used): what about including an app-verify function that asks a special appserver at the vendor's site to verify the integrity of the executable (requesting the description for a given hash of the executable; if it matches, the server reports that the app is what it claims to be), and sandboxing the app (which can be turned off) when it doesn't match?

Is this a plausible and workable idea? Aside from the need for a working internet connection, would it be functional?

As an aside, using the domain of an application in the checking would also make it possible to include whois info on the domain, such as the actual company standing behind it, or a monitoring company that offers the hosting (and can thus also be held accountable for malpractice such as aiding trojan distribution).
I can't see why it couldn't work, but I can see a few practical problems (aside from the need for users to have an internet connection).

How would you make it un-hackable? The internet isn't secure, and you would be relying on every computer between the end user and your appserver.

Also, the internet isn't the same for everyone due to locality - your appserver would need to be reachable from anywhere without too many time-outs, which could end up costing you a lot for a really good internet connection (with high enough bandwidth) for the appserver. In addition, internet servers tend to go down every now and then; when a moderately important internet server is down, lots of your users' applications may end up sandboxed. In the USA the internet is probably more stable than it is here (Australia), and it's probably a lot worse in some countries. You could have several mirrors for your appserver (but then you'd have several points of vulnerability). What if the appserver itself goes down?

What happens when you want to change the domain name and/or IP address of the appserver? If you use the domain of an application for checking, what happens when the application writer's domain changes?

Are you able to afford the cost of the appserver's internet connection and the time it would take to maintain the appserver software itself (and the server/s it's running on)? How much are you charging for a copy of your OS? How will your OS gain market share if you don't provide it for free?

Why are MS's VeriSign-signed apps hackable, and why doesn't anyone use them? If MS can't do it successfully, can it be done successfully?

What things can malicious applications do, and why can't the OS prevent these malicious activities without the appserver and sandboxing?

Please don't take these questions/comments too seriously :)


Cheers,

Brendan

Re:executable validation issues

Posted: Thu May 13, 2004 4:29 am
by Pype.Clicker
- Is it desirable that the application hash be checked every time the application is run? Wouldn't it be better for the OS to ensure the application is unmodifiable (or to checksum applications only on request, or by night, etc.)?

- Let's say NoTrojan(inc) offers the hash directory as well as the application checker. The server could use a private key to sign hashes, and the application checker would use the server's public key (shipped together with the checker) to make sure a hash truly comes from the server. That way, even a corrupted network couldn't cheat the system.

- How can one ensure that the application is from a *real* trusted source without enforcing a BigBrother watching every developer? What prevents Ha><0r from registering a trojan version of MediaPlayer under the name "|\/|icrosofl", which most users will read as "Microsoft" without noticing the subtle difference?

Eventually, it will still be up to the end-user to decide whether (s)he puts trust in the software or not ...
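Pype.Clicker's signed-hash directory can be illustrated with a short sketch. One loud caveat: Python's standard library has no asymmetric primitives, so HMAC with a shared key stands in here for the real scheme (the server signs with a private key, the checker verifies with a shipped public key); the verification flow has the same shape, and `SERVER_KEY` is a made-up demo key.

```python
import hashlib
import hmac

# Stand-in for the server's private key; a real deployment would use
# an asymmetric key pair (e.g. Ed25519) instead of a shared secret.
SERVER_KEY = b"demo-signing-key"

def sign_entry(app_hash: str) -> str:
    """What the hash directory's server would do to each entry."""
    return hmac.new(SERVER_KEY, app_hash.encode(), hashlib.sha256).hexdigest()

def verify_entry(app_hash: str, signature: str) -> bool:
    """What the shipped application checker would do on receipt."""
    return hmac.compare_digest(sign_entry(app_hash), signature)

app_hash = hashlib.sha256(b"some executable").hexdigest()
good_sig = sign_entry(app_hash)

print(verify_entry(app_hash, good_sig))  # True: entry really from the server
print(verify_entry(app_hash, "f" * 64))  # False: forged entry is rejected
```

Because only signed entries are trusted, a man-in-the-middle on the "corrupted network" cannot substitute its own hash-to-description mapping without the signing key.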

Re:executable validation issues

Posted: Thu May 13, 2004 4:44 am
by Solar
Wouldn't work.

1) All you can prove is the identity of a program - neither its intent nor its correctness. To be un-hackable, you'd need a system quite like TCPA / Palladium (i.e., hardware-supported universal identity verification of your system).

2) You would force every GPL / freeware / shareware / PD programmer either to maintain a server of his/her own, or to pay a company bucks for doing so - instead of dumping the stuff on an upload server and being done with it. Cost kills hobbyists.

3) If the company supplying the hash directory is to be held liable for trojans etc. hashed by them, they will charge significant fees, since they have to insure themselves against fraud. They can't verify the intent or correctness of the software either.

4) In the end, you still have that popup window "do you really want to run this application by 'X'?". And as we all know, users even click on e-mail attachments from untrusted sources.

You'd come up with huge infrastructure requirements, shutting out hobbyist software development as well as people without an internet connection (who are much more numerous than you might imagine), with little to no additional benefit for the user. Congrats, you just reinvented TCPA. ;-)

Re:executable validation issues

Posted: Thu May 13, 2004 5:00 am
by Candy
Brendan wrote: How would you make it un-hackable? The internet isn't secure, and you would be relying on every computer between the end user and your appserver.
By allowing the user's system to pass a hash to the server system, with the connection made only over a secure channel (my idea of secure channels is that they live not at the application level (libssl or so) but at the socket level).
Also, the internet isn't the same for everyone due to locality - your appserver would need to be reachable from anywhere without too many time-outs, which could end up costing you a lot for a really good internet connection (with high enough bandwidth) for the appserver. In addition, internet servers tend to go down every now and then; when a moderately important internet server is down, lots of your users' applications may end up sandboxed. In the USA the internet is probably more stable than it is here (Australia), and it's probably a lot worse in some countries. You could have several mirrors for your appserver (but then you'd have several points of vulnerability). What if the appserver itself goes down?
If any of these points occurs, the system would break. The appserver would have multiple mirrors, each application developer has their own appserver, and if one breaks, that means a temporary interruption in the user's ability to install an application without the sandbox. The user can of course rely on the application being correct (when you buy a CD that worked the previous three times but cannot connect now), or always sandbox it (against haxor people signing their trojans).
What happens when you want to change the domain name and/or IP address of the appserver? If you use the domain of an application for checking, what happens when the application writer's domain changes?
Two different categories: internet publishers (including share/freeware publishers) and established software companies. The first usually allows download only, so you can update it instantly. The second has an established name & domain name, and if they go down the entire economy will too, so that's not a point of failure for me to solve :)
Are you able to afford the cost of the appserver's internet connection and the time it would take to maintain the appserver software itself (and the server/s it's running on)? How much are you charging for a copy of your OS? How will your OS gain market share if you don't provide it for free?
The idea behind the appserver is that it is a way for app developers to bestow another level of trust on the user and to offer upgrades to paying users. My idea is also that I want the users to pay for the upgrades. If nobody wants a new version it won't be developed.
Why are MS's VeriSign-signed apps hackable, and why doesn't anyone use them? If MS can't do it successfully, can it be done successfully?
That was a guess. I'm not sure whether they are hackable, but by definition the check must be performed locally rather than against VeriSign (because an internet connection may be lacking).
What things can malicious applications do, and why can't the OS prevent these malicious activities without the appserver and sandboxing?
They can delete all sorts of files, wreck your system, or use it for their own purposes. In a sandbox you're notified of what they try to do, and can thus object to your "new mp3 player" sending 25000 mails.
Pype.Clicker wrote: - Is it desirable that the application hash be checked every time the application is run? Wouldn't it be better for the OS to ensure the application is unmodifiable (or to checksum applications only on request, or by night, etc.)?
No. Users can modify their own applications all they like; they paid for them / installed them on their system. The intent of a user installing software is not for me to enforce. Bad people will do bad things; they don't need the right or an easy way to do it.
followed up...

Re:executable validation issues

Posted: Thu May 13, 2004 5:08 am
by Candy
- Let's say NoTrojan(inc) offers the hash directory as well as the application checker. The server could use a private key to sign hashes, and the application checker would use the server's public key (shipped together with the checker) to make sure a hash truly comes from the server. That way, even a corrupted network couldn't cheat the system.
That's indeed a good plan. The hash could well be signed with a private key.
- How can one ensure that the application is from a *real* trusted source without enforcing a BigBrother watching every developer? What prevents Ha><0r from registering a trojan version of MediaPlayer under the name "|\/|icrosofl", which most users will read as "Microsoft" without noticing the subtle difference?
You can't. Using a monospace font for such security-relevant strings would solve some issues, but not all (think of the Unicode Turkish dotless ı instead of an i). The user will still be responsible for what he runs.
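The lookalike-name problem can be demonstrated concretely. A hedged sketch: Unicode normalization plus case folding catches some spoofed registrations (full-width letters, ligatures), but not pure ASCII art like "|\/|icrosofl", which is why it is only a partial defence.

```python
import unicodedata

def normalized(name: str) -> str:
    """Fold lookalike forms together before comparing publisher names."""
    return unicodedata.normalize("NFKC", name).casefold()

genuine = "Microsoft"
fullwidth_spoof = "\uFF2Dicrosoft"   # full-width 'M': NFKC maps it to 'M'
ascii_art_spoof = "|\\/|icrosofl"    # pure ASCII art: normalization can't help

# A new registration that collides with an existing name after
# normalization can be flagged; the ASCII-art spoof sails through.
print(normalized(fullwidth_spoof) == normalized(genuine))  # True  (caught)
print(normalized(ascii_art_spoof) == normalized(genuine))  # False (missed)
```

The Turkish dotless ı mentioned above also survives case folding unchanged, so it is another case this check misses - supporting the conclusion that the user stays responsible in the end.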
Eventually, it will still be up to the end-user to decide if (s)he put trust in the software or not ...
But if you can help the user by pointing out that the patch they just downloaded does not match the application they're trying to patch, that makes it a lot easier to judge what it'll do.
Solar wrote: 1) All you can prove is the identity of a program - neither its intent nor its correctness. To be un-hackable, you'd need a system quite like TCPA / Palladium (i.e., hardware-supported universal identity verification of your system).
If the identity of a program matches the program it's trying to patch, you can be assured it's not a trojan pretending to be an Office 2004 beta (see also macnews).
2) You would force every GPL / freeware / shareware / PD programmer to either maintain a server of his/her own, or to pay a company bucks for doing so - instead of dumping the stuff on an upload server and be done with it. Cost kills hobbyists.
No. The only thing that actually differentiates this from the default upload & forget is that you can get some assurance from the developer. Lazier developers can upload the stuff to some server and leave a weak sort of certificate there saying that it is the same file as advertised on the site, that N users considered it good and X users considered it bad. If you don't run a pricey server, the only difference is that the user knows you don't have a pricey server, and therefore cannot check whether you signed it. Point is, they probably don't care. Big companies want signed software; if you download an ISO you want an MD5 sum to check its validity. Most people don't care and burn it nonetheless. Nothing wrong with that, but the user doesn't need to know. It's only advice backed up by some cryptography. If there's one thing I'm not going to do, it's force the user.
3) If the company supplying the hash directory is to be held liable for trojans etc. hashed by them, they will charge significant fees, since they have to insure themselves against fraud. They can't verify the intent or correctness of the software either.
See previous point, don't require hash.
4) In the end, you still have that popup window "do you really want to run this application by 'X'?". And as we all know, users even click on e-mail attachments from untrusted sources.
That's true. The only real difference is that by default you would get a sandbox for those apps, and not for the others. You can sandbox them all; it just makes them slower, that's all.
You'd come up with huge infrastructure requirements, shutting out hobbyist software development as well as people without an internet connection (who are much more numerous than you might imagine), with little to no additional benefit for the user. Congrats, you just reinvented TCPA. ;-)
I strongly disagree :). I'm not in for a fritz chip to make it unhackable, I'm not in for closing users out of a possibility because of what some THING says etc. TCPA has the intent of securing the system (and the programs) against malpractice of the user. This idea has the intent of securing the system (and the user) against malpractice of a program.

Re:executable validation issues

Posted: Thu May 13, 2004 6:12 am
by Solar
My concept was to provide as secure a "sandbox" as possible for any application, identity proven or not. No read/write outside of the application's installation directory and the dedicated data directory assigned by the user. No system interaction by the application except for well-defined service queries provided by the OS. (I.e., app X can access features of app Y only through OS services, never directly.) Clean and strong versioning control of shared resources, so an uninstall of software is always "clean" even if other applications have been installed afterwards. (Yes, that means keeping multiple versions of the same library.)

Unidentified (uninstalled) executables get access to generic system services only, and cannot query the system for the presence of any additional services (as provided by third-party software) - keeping viruses and trojans limited in their damage potential.
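Solar's two-tier model can be sketched as a simple capability table. This is an illustrative sketch only; the service names are invented.

```python
# Every executable gets the generic services; only installed
# (identified) applications get the extra grants recorded at
# install time.

GENERIC_SERVICES = {"gui.window", "fs.own_dir"}

INSTALLED_GRANTS = {
    "mediaplayer": {"audio.play", "fs.media_dir"},
}

def allowed_services(app_name: str) -> set:
    # Unknown executables fall through to the empty grant set, so they
    # cannot even learn which third-party services exist.
    return GENERIC_SERVICES | INSTALLED_GRANTS.get(app_name, set())

print(sorted(allowed_services("mediaplayer")))
print(sorted(allowed_services("unknown_trojan")))  # generic services only
```

The key design choice is that the table is consulted by the OS on every service query, so an unidentified binary never sees a service it wasn't granted, rather than being trusted to police itself.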

Re:executable validation issues

Posted: Thu May 13, 2004 6:37 am
by Candy
Solar wrote: My concept was to provide as secure a "sandbox" as possible for any application, identity proven or not. No read/write outside of the application's installation directory and the dedicated data directory assigned by the user. No system interaction by the application except for well-defined service queries provided by the OS. (I.e., app X can access features of app Y only through OS services, never directly.) Clean and strong versioning control of shared resources, so an uninstall of software is always "clean" even if other applications have been installed afterwards. (Yes, that means keeping multiple versions of the same library.)
As noble as the goal may be, lots of users don't give a damn about OS security but want their word processor to run a little faster. Scientists want their own apps not to be sandboxed, because they should be able to access all data, not just a limited set. A shell should be able to look around on a system, not just at the files it is permitted to see.

This was an attempt at making it possible for users to open up security on a file, where I'd help them by giving them information about who made the program, whether they still say it's safe to use (bug reports auto-forwarded to the user), whether you need an update (or can buy an upgrade), and so on. Closing down your system shuts out apps that you want in. Not closing it down lets malicious programs onto the inside, where you don't want them.

The only solution I see is sandboxing applications by default, and allowing users to use some form of "install" where they give that program the permission to access their system without a lot of sandboxing checks / glue. If you want a secure system, sandbox all but the most elementary services. If you want a fast system, disable the sandboxing. If you want both, sandbox the untrusted apps, and allow access to the services you use that must access other files.
Unidentified (uninstalled) executables get access to generic system services only, and cannot query the system for the presence of any additional services (as provided by third-party software) - keeping viruses and trojans limited in their damage potential.
<installs trojan> - what's the use of that? People are not going to get any smarter. Educate your users (make them smart) or limit the potential damage (fallback support). I'm thinking of doing both; neither has a big problem imo.

Re:executable validation issues

Posted: Thu May 13, 2004 8:15 am
by Schol-R-LEA
I don't know how relevant this is, but I can tell you how Xanadu was supposed to handle most of these issues, in an overall sense. All these issues were addressed on some level, or so Ted claims; I don't know what all the 'solutions' they had were, nor how workable they were. I may have some of the details wrong, as well. Anyway...

First off, Xanadu worked at the level of data shards, for both data and executables; it would completely replace the file system, so that everything would be part of a single hypertext 'docuverse'. A document would be a series of links to all the shards that composed it, and each shard could be linked to independently, in whole or in part; the same shard could be shared by an unlimited number of documents. Anyone could create a link to a shard that was visible to them. Links were bi-directional, so if someone created a link to a shard, it would be visible to anyone else linking to that shard. Both shards and links would have a single logical address, which would remain the same regardless of where they were physically. The addressing system was designed so that a link to a part of a shard would be an extended address based on the original's address, indicating the part desired (as a range of bytes). Links also carried fields for verification, encryption (whether it is, and how), ownership, and publication status (who could view it and when, whether some or all viewers would be charged a royalty, and at what rates).

To access a document or run a program, the system would collect all the desired shards and compose the document as needed. If the document was to be changed, only the links would actually be altered, and any new data would be stored as a new shard.

If a shard (or part of one) is to be accessed over the network, a request is put out for a copy of it. The request would first attempt to route to the machine it originally came from, but along the way it would check the data cache and transfer logs on each machine in the path, and use the copy it could access quickest. Only the required data would be sent, along with its appropriate control data. As it is routed, a copy would be cached on each machine in the path (yes, these caches would have to be huge ring buffers, and even then a busy server might flush hundreds of megs a day out of its cache), along with a log of where it came from and where it was sent (again, very large ring buffers). Obviously, some kinds of data would have to be sent in their entirety (programs, certain kinds of image formats, etc.), but it was assumed at the time that these would be in the minority, and that applications would be made up of many smaller programs working together.

I've got to get going, so I'll leave it there. I can expect all kinds of questions and objections, and I'm honestly not sure how to answer all of them.

Re:executable validation issues

Posted: Thu May 13, 2004 8:11 pm
by xfyj
mind if i sum up all of the above ?
(-

well, (sometimes) on a single machine there is more than one user; the os, feeling it should protect one from another, creates all sorts of su scripts, but lets them send unsolicited email messages

on the application level, a single app can resemble the apocalypse!
nothing prevents it from eradicating the entire user space of the lucky one who attempted its installation, acting on his behalf, assuming his identity on public and private channels, and generally failing to keep a low profile

it should never have been allowed to bother other apps! the resources granted should have been limited - not the entire disk; a single folder would suffice. it wishes to read another application's dlls ? only if the latter consents .. thinks merrily about "upgrading" them ? over my dead body!

in the memory management business, one goes to great lengths to keep a stack from being corrupted; nothing like that can be observed in more folkloric microworlds, like filesystems. all kinds of fauna are allowed to thrive and prosper, leading to misty and treacherous jungles

what does winrar imply with shell integration ? why does mozilla try to load my netscape profile, or call acrobat reader whenever encountering a pdf ? an executable loading during bootstrap !? it's like making online purchases .. the site i am buying from was not to be trusted with my card credentials; it would have to send a payment request to my bank, which i had better accept (after logging in to my bank account) if i ever wanted to see the product with my own eyes - what do you think the client/server model is for ?

applications should come in packages with suspicious md5 signatures which nevertheless do not compromise the stability or security of the rest, or refuse to uninstall!

Re:executable validation issues

Posted: Thu May 13, 2004 8:53 pm
by mystran
I think that better than trying to validate a program would be to limit what a program can do. We are already seeing this in Windows firewalls, which require the user to explicitly confirm net access for individual programs.

The real problem, I think, is that in current systems there is a single user-id by which privileges are granted or denied, and every program started by a user has the privileges of that user.

I think it is not unreasonable to limit the default privileges of a program to read/write access to its private installation directory, plus the ability to display a window on screen. No general monitoring, no screen-capturing, no general network access..

To get additional privileges, the system could show a pop-up when the application requests them. These should be grantable on a fine-grained basis: allow this program to access this network host? (On the other hand, it MUST be possible to allow a web server general network access.) No program would need permanent access to document folders, as system-provided open/save dialogs could let the program ask the user to open/save a file, with the dialog itself acting as the grant confirmation.

One important thing with such a scheme would be to distinguish between a program and its plugins. I mean, if IE has full net access, and IE loads a spyware plugin, the plugin should NOT have full net access simply because it was loaded by IE!

So it seems even per-process privileges are not enough..
well.. whatever.. I still don't know my point.. (see 64-bit OSDEV)
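mystran's closing point - that per-process privileges fail once plugins are loaded - can be sketched by attaching grants to the code module making the call rather than to the host process. All module and privilege names here are hypothetical.

```python
# Grants keyed on the calling module, not the host process: a plugin
# loaded into the browser checks against its own (empty) entry, so it
# does not inherit the browser's network access.

MODULE_GRANTS = {
    "browser.exe": {"net.any", "gui.window"},
    "spyplugin.dll": set(),  # the plugin carries its own grant set
}

def module_may(module: str, privilege: str) -> bool:
    return privilege in MODULE_GRANTS.get(module, set())

print(module_may("browser.exe", "net.any"))    # True
print(module_may("spyplugin.dll", "net.any"))  # False, even inside the browser
```

In a real kernel this requires knowing which module issued a given system call (e.g. by inspecting the return address or loading plugins into separate protection domains), which is exactly why "per-process is not enough" is a hard observation.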

Re:executable validation issues

Posted: Fri May 14, 2004 4:02 am
by Pype.Clicker
imvho, the solution to software "trustability" is a system in which each (application, user) pair comes with a dedicated security policy. As mystran said, an application should receive *some* of the user's credentials (the main shell naturally having the full credentials ;), but the calling application as well as the system should be able to restrict the privileges the callee will have.

It's still the same problem: if your word-processing application has *no* right to access word documents *but* those you explicitly gave it through a system dialog box, and no chance at all to open an MP3 file (simply because it did not request the 'MP3-reader' property at installation time), you significantly reduce the amount of harm a program could possibly do, even if a trojan is involved...
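Pype.Clicker's policy idea reduces to an intersection of credential sets: the callee gets at most what the user holds, what the caller delegates, and what the application requested at install time. A minimal sketch, with made-up privilege names:

```python
# Effective privileges as a set intersection of the three sources
# named above.  Privilege names are invented for illustration.

def effective_privs(user: set, caller: set, requested: set) -> set:
    return user & caller & requested

user_privs = {"doc.read", "doc.write", "mp3.read"}
shell_privs = user_privs                    # the shell holds the full credential
word_requested = {"doc.read", "doc.write"}  # never asked for 'MP3-reader'

print(sorted(effective_privs(user_privs, shell_privs, word_requested)))
# ['doc.read', 'doc.write'] - the word processor cannot touch MP3s
```

Because the result can only shrink at each delegation step, a trojan riding inside the word processor is confined to whatever the word processor itself legitimately requested.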