Developer keys and digitally signed code

linguofreak
Member
Posts: 510
Joined: Wed Mar 09, 2011 3:55 am

Developer keys and digitally signed code

Post by linguofreak »

Antti wrote:
Brendan wrote:Note: For my OS, every developer gets a key, and every process they write is digitally signed with the developer's key. By signing their code they take full responsibility for that process. If one of their processes does anything malicious, the developer's key is revoked and the OS refuses to execute any of the code they've ever written. There are no excuses.
This may be a little too strict a policy. What if there were a good software company writing applications (definitely non-malware), but for some reason the company is acquired by another company that likes to add malware to its products? The obvious solution is to say that a new key is required. However, it may be difficult to identify when a new key is required if everything is technically the same. This "no excuses" policy may be used against itself, i.e. to destroy a whole history of good products.

That is why "no excuses" should be reconsidered.
"No excuses" should be reconsidered because the system administrator should have ultimate say on what keys get whitelisted or blacklisted. It's fine for the OS to provide recommendations based on behavior that the developer has observed, but there should always be a way for the administrator to override those recommendations in either direction.
Last edited by Antti on Tue Aug 04, 2015 6:20 am, edited 1 time in total.
Reason: Topic title edited
Antti
Member
Posts: 923
Joined: Thu Jul 05, 2012 5:12 am
Location: Finland

Re: Developer keys and digitally signed code

Post by Antti »

Created a new topic from the massive "Concise Way to Describe Colour Spaces" thread.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Developer keys and digitally signed code

Post by Brendan »

Hi,
Antti wrote:
Brendan wrote:Note: For my OS, every developer gets a key, and every process they write is digitally signed with the developer's key. By signing their code they take full responsibility for that process. If one of their processes does anything malicious, the developer's key is revoked and the OS refuses to execute any of the code they've ever written. There are no excuses.
This may be a little too strict a policy. What if there were a good software company writing applications (definitely non-malware), but for some reason the company is acquired by another company that likes to add malware to its products? The obvious solution is to say that a new key is required. However, it may be difficult to identify when a new key is required if everything is technically the same. This "no excuses" policy may be used against itself, i.e. to destroy a whole history of good products.

That is why "no excuses" should be reconsidered.
Very few developers write software and then release it. They used to, but the Internet changed that. Now; they release a half-finished version, then a patch, then an update, then... To cope, an OS needs a good/seamless auto-update facility that updates software in the background without any end-user hassles or end-user involvement.

If a good software company is acquired by another company that likes to add malware to their products; then the old (good) software gets auto-updated in the background and becomes new (bad) software without any end-user involvement.

Now think about "no excuses". Imagine you're a company that's considering putting malware in one new product; but you know that if you do, all of your software will be effectively wiped off of every computer in the world within 5 days, and after that it'll be impossible to make any profit. Will you decide to put malware in one new product, or not?

Basically; it's a deterrent intended to make sure that nobody ever tries to put malware in their software to begin with.

The real problem is making sure that innocent developers don't get blamed/punished; which means that the OS has to make it impossible for someone to inject malware into someone else's software. For my OS this shouldn't be hard - an executable can't be modified before it's executed (because digital signatures prevent that) and there's no easy way to modify an executable while it's being executed (because processes live in their own isolated space that almost nothing else can touch).
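
To make that concrete; a rough sketch of the exec-time check, in Python for brevity (this uses the third-party "cryptography" package; the detached-signature layout, the revocation-list file and all of the paths here are invented for illustration, not the actual design):

Code:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Hypothetical layout: "app" is the executable, "app.sig" a detached
# RSA signature, "app.key" the developer's PEM public key, and
# "revoked_keys.txt" holds one hex SHA-256 key fingerprint per line.

def key_fingerprint(public_key):
    der = public_key.public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo)
    h = hashes.Hash(hashes.SHA256())
    h.update(der)
    return h.finalize().hex()

def may_execute(exe_path):
    with open(exe_path + ".key", "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    # Revoked developer => refuse everything they've ever signed.
    with open("revoked_keys.txt") as f:
        revoked = {line.strip() for line in f if line.strip()}
    if key_fingerprint(public_key) in revoked:
        return False
    with open(exe_path, "rb") as f:
        code = f.read()
    with open(exe_path + ".sig", "rb") as f:
        signature = f.read()
    try:
        # Any tampering with the executable after signing fails here.
        public_key.verify(signature, code,
                          padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False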
linguofreak wrote:"No excuses" should be reconsidered because the system administrator should have ultimate say on what keys get whitelisted or blacklisted. It's fine for the OS to provide recommendations based on behavior that the developer has observed, but there should always be a way for the administrator to override those recommendations in either direction.
Imagine a piece of hardware (a device) that's able to determine if (e.g.) a person requesting an account (user name, password) and certain permissions (e.g. access to files belonging to a group) should be granted access or not. Like all devices, this hardware would have a device driver and would be controlled (via. the device driver) by the OS. This hardware is called "the administrator". The device is unusual because it's made out of flesh and not electronics, and the device driver it uses ("admin tool") is also unusual due to the "electronics to flesh" interface the device uses; but it's still just a peripheral controlled by the OS. If there is no "administrator" (e.g. the device was faulty and hasn't been replaced by the system owner yet) then the OS keeps going without the functionality that the device provides. Of course for high reliability I'd recommend using redundant devices (e.g. 2 or more administrators so that if one device fails there's no loss of functionality).


Cheers,

Brendan
Last edited by Antti on Tue Aug 04, 2015 6:23 am, edited 1 time in total.
Reason: Topic title edited
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Antti
Member
Posts: 923
Joined: Thu Jul 05, 2012 5:12 am
Location: Finland

Re: Developer keys and digitally signed code

Post by Antti »

Brendan wrote:Imagine you're a company that's considering putting malware in one new product; but you know that if you do, all of your software will be effectively wiped off of every computer in the world within 5 days, and after that it'll be impossible to make any profit. Will you decide to put malware in one new product, or not?
What if the only purpose of acquiring the company is to wipe their software off? I know this may be a little bit far-fetched, but I am sure there are plausible reasons for doing it. An extremely powerful denial-of-service attack?

Sounds like seeing it from a hobbyist's point of view. Are you personally taking full responsibility for your work, or is your employer taking it? I am afraid there are many problems with personal developer keys if you are an employee. For hobbyists this all works well if there is an application written by a trusted "Antti" and signed by him. Things get complicated if an application is written by a (currently trusted) "OSDev Corporation" and there are several developers working on it. A single person may be trustworthy during his/her whole career, but that is not necessarily the case with a company.

The other problem is that programmers working for an employer cannot always write applications to the quality they personally would want. There are always resource issues and other things that prevent them from doing so. Mixing their work and hobby, assuming that a single person has one key, is likely to cause some serious problems. Perhaps high-profile developers are able to choose whom they work for, but it is not always so. Some developers are forced to work to make a living and are not always in a position to choose software projects that meet their personal quality standards. However, they may write excellent software in their spare time.
Antti
Member
Posts: 923
Joined: Thu Jul 05, 2012 5:12 am
Location: Finland

Re: Developer keys and digitally signed code

Post by Antti »

I apologize for the negative tone. Some parts of my previous message were written before you replied, so the discussion flow is not very smooth. All in all, I think the idea is good, but I am worried about having such strong control over the user's computer.
bluemoon
Member
Posts: 1761
Joined: Wed Dec 01, 2010 3:41 am
Location: Hong Kong

Re: Developer keys and digitally signed code

Post by bluemoon »

Brendan wrote:The real problem is making sure that innocent developers don't get blamed/punished; which means that the OS has to make it impossible for someone to inject malware into someone else's software. For my OS this shouldn't be hard - an executable can't be modified before it's executed (because digital signatures prevent that) and there's no easy way to modify an executable while it's being executed (because processes live in their own isolated space that almost nothing else can touch).
I wouldn't be so sure. History shows that signatures are not unbreakable. Even in the ideal case where there are no exploits or bugs in your system and algorithms: do you use 1024 bits or 2048 bits for the signature? What if it becomes inadequate a few years later, and how do you handle "legacy" software that was signed with insecure methods?
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Developer keys and digitally signed code

Post by Brendan »

Hi,
Antti wrote:
Brendan wrote:Imagine you're a company that's considering putting malware in one new product; but you know that if you do, all of your software will be effectively wiped off of every computer in the world within 5 days, and after that it'll be impossible to make any profit. Will you decide to put malware in one new product, or not?
What if the only purpose of acquiring the company is to wipe their software off? I know this may be a little bit far-fetched, but I am sure there are plausible reasons for doing it. An extremely powerful denial-of-service attack?
In this case, it's still an act of self-sabotage because everyone will know the name of the company that did it.

From my perspective it's more important to avoid vendor lock-in (via. open specifications for file formats and messaging protocols) so that users can switch to a competing product if/when something like this actually happens.
Antti wrote:Sounds like seeing it from a hobbyist's point of view. Are you personally taking full responsibility for your work, or is your employer taking it? I am afraid there are many problems with personal developer keys if you are an employee. For hobbyists this all works well if there is an application written by a trusted "Antti" and signed by him. Things get complicated if an application is written by a (currently trusted) "OSDev Corporation" and there are several developers working on it. A single person may be trustworthy during his/her whole career, but that is not necessarily the case with a company.
The key would belong to whoever/whatever releases the software - the company and not any individual developer working for the company. It's the company's responsibility to ensure that no individual puts malicious code in their product.
Antti wrote:The other problem is that programmers working for an employer cannot always write applications to the quality they personally would want. There are always resource issues and other things that prevent them from doing so. Mixing their work and hobby, assuming that a single person has one key, is likely to cause some serious problems. Perhaps high-profile developers are able to choose whom they work for, but it is not always so. Some developers are forced to work to make a living and are not always in a position to choose software projects that meet their personal quality standards. However, they may write excellent software in their spare time.
A key won't be revoked just because of plain old bugs and/or poor quality; and "intentionally malicious" (e.g. virus, trojan) doesn't happen by accident.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Antti
Member
Posts: 923
Joined: Thu Jul 05, 2012 5:12 am
Location: Finland

Re: Developer keys and digitally signed code

Post by Antti »

Brendan wrote:From my perspective it's more important to avoid vendor lock-in (via. open specifications for file formats and messaging protocols) so that users can switch to a competing product if/when something like this actually happens.
That is exactly what could be the reason for wiping off the old product. Making it possible to wipe off a working application is a scary thought. If I write something today, I would like it to be judged by what it is and not by something else I could do in the future. For example, I could trust you to write a mission-critical application today. It would be foolish for me to assume you will still be trustworthy ten years from now. Of course, it is very likely that you will stay trustworthy, but do you think I should count on that?
Brendan wrote:The key would belong to whoever/whatever releases the software - the company and not any individual developer working for the company. It's the company's responsibility to ensure that no individual puts malicious code in their product.
Sounds better. The discussion is starting to make sense after a misunderstanding of mine. In general, criticizing ideas without suggesting anything to solve the problems (or even without understanding them correctly) feels very much like stepping out of my comfort zone. Nevertheless, it is something I did. Something like this facial expression from Doom:
[Attachment: Doom.png]
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Developer keys and digitally signed code

Post by Brendan »

Hi,
bluemoon wrote:
Brendan wrote:The real problem is making sure that innocent developers don't get blamed/punished; which means that the OS has to make it impossible for someone to inject malware into someone else's software. For my OS this shouldn't be hard - an executable can't be modified before it's executed (because digital signatures prevent that) and there's no easy way to modify an executable while it's being executed (because processes live in their own isolated space that almost nothing else can touch).
I wouldn't be so sure. History shows that signatures are not unbreakable. Even in the ideal case where there are no exploits or bugs in your system and algorithms: do you use 1024 bits or 2048 bits for the signature? What if it becomes inadequate a few years later, and how do you handle "legacy" software that was signed with insecure methods?
That vulnerability doesn't break the signature (and to me seems to be the result of a large amount of stupidity). From this article about it:
computerworld.com wrote:"It's a problem in the way Android handles APKs that have duplicate file names inside," Oliva Fora said Tuesday via email. "The entry which is verified for signature is the second one inside the APK, and the entry which ends up being installed is the first one inside the APK -- the injected one that can contain the malicious payload and is not checked for signature at all."
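The underlying mistake is easy to reproduce with any ZIP-based container; a toy demonstration in Python (file names and payloads invented):

Code:

import io, zipfile

# Two entries with the same name: a verifier that walks every entry
# and an installer that looks entries up by name can disagree about
# which bytes "app.bin" refers to.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("app.bin", b"first entry: unsigned payload")
    zf.writestr("app.bin", b"second entry: the signed code")
    # (zipfile warns about the duplicate name, but allows it)

with zipfile.ZipFile(buf) as zf:
    print([i.filename for i in zf.infolist()])  # both entries are there
    # CPython's name lookup happens to resolve to the *last* entry;
    # Android's extractor took the *first* - either way, two components
    # picking different entries for the same name is the hole.
    print(zf.read("app.bin"))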
I'm using 2048-bit RSA. If it becomes inadequate in a few years then it'll become inadequate for everything (not just my OS).

Much more likely is that it'll go from "virtually impossible to break" to "extremely difficult to break" slowly (over 10 years); and I'll be able to add support for something stronger and give software developers 2 years to switch before removing support for the older keys.
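
A sketch of how those signatures might be tagged so the verifier can accept both schemes during the transition window and reject the old one after its sunset date (the scheme names, record format and dates are all invented for illustration):

Code:

from datetime import date

# Each signature carries the scheme it was made with; old schemes get
# a sunset date once a stronger replacement is available.
SUNSET = {
    "rsa2048-sha256": date(2017, 8, 1),  # hypothetical cut-off
    "rsa4096-sha512": None,              # no sunset scheduled
}

def acceptable(sig_record, today=None):
    today = today or date.today()
    if sig_record["scheme"] not in SUNSET:
        return False                     # unknown scheme: reject
    cutoff = SUNSET[sig_record["scheme"]]
    return cutoff is None or today < cutoff

print(acceptable({"scheme": "rsa2048-sha256"}, date(2016, 1, 1)))  # True
print(acceptable({"scheme": "rsa2048-sha256"}, date(2018, 1, 1)))  # False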


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Antti
Member
Posts: 923
Joined: Thu Jul 05, 2012 5:12 am
Location: Finland

Re: Developer keys and digitally signed code

Post by Antti »

A minor addition to my previous post. It may be hard to convince customers that products are reliable if the author is able to self-destruct the whole product line. Are customers buying a subscription-like license? Although that seems to be common today, I would not recommend it for an alternative OS. I recommend exactly the opposite: assure people that their computers are safe and that their verified applications will just keep running no matter what the author does. Of course, this all could and should be optional.

That is the deviation from the "no excuses" rule that I was suggesting.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Developer keys and digitally signed code

Post by Brendan »

Hi,
Antti wrote:A minor addition to my previous post. It may be hard to convince customers that products are reliable if the author is able to self-destruct the whole product line. Are customers buying a subscription-like license? Although that seems to be common today, I would not recommend it for an alternative OS. I recommend exactly the opposite: assure people that their computers are safe and that their verified applications will just keep running no matter what the author does. Of course, this all could and should be optional.

That is the deviation from the "no excuses" rule that I was suggesting.
Software has been licensed (not purchased) for as long as I can remember. It doesn't make much difference if you get a 1 month licence each time you pay a monthly fee; or if you get a "forever" licence by paying once.

If a company wants to deliberately self-destruct, then they can do that anyway (e.g. just release a normal update that happens to "self-uninstall" and make that the last/only available version). However; if a company deliberately self-destructs (using any method) then any customers who have paid for the right to use their software can probably sue the company (depending on the licence, etc).

Now; imagine a piece of software that does everything right and passes all tests, and has been very reliable for 10 years, but is actually a trojan with some back-door trigger or time delay that has remained undetected for those 10 years. If you find out a company has been putting malicious code into their products recently; what reason do you have to trust older software they've provided, and should they be able to continue to sell the "potentially also malicious" older version?

Now think about "trust" - why does anyone trust any company? The only reason I trust any company (that I can think of) is the knowledge that if a company betrays its customers it will damage the company more than it will damage me.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Developer keys and digitally signed code

Post by Rusky »

Brendan wrote:Now; imagine a piece of software that does everything right and passes all tests, and has been very reliable for 10 years, but is actually a trojan with some back-door trigger or time delay that has remained undetected for those 10 years.
Sounds like a pretty special case to me.

Old, working software should not stop working (on the same hardware and OS version) under any circumstances, and for that reason updates need to be possible to opt out of and to roll back.
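
A minimal sketch of what opt-out and rollback could look like on the update side (the package-store layout and all names are made up):

Code:

import shutil
from pathlib import Path

STORE = Path("/var/app-store")  # hypothetical per-app package store

def update(app, new_pkg_dir):
    slot = STORE / app
    if (slot / "hold").exists():           # user opted out of updates
        return "held"
    current, previous = slot / "current", slot / "previous"
    if current.exists():
        if previous.exists():
            shutil.rmtree(previous)
        current.rename(previous)           # keep one generation back
    shutil.copytree(new_pkg_dir, current)  # still signature-checked at exec
    return "updated"

def rollback(app):
    slot = STORE / app
    if not (slot / "previous").exists():
        raise RuntimeError("nothing to roll back to")
    if (slot / "current").exists():
        shutil.rmtree(slot / "current")
    (slot / "previous").rename(slot / "current")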
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Developer keys and digitally signed code

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:Now; imagine a piece of software that does everything right and passes all tests, and has been very reliable for 10 years, but is actually a trojan with some back-door trigger or time delay that has remained undetected for those 10 years.
Sounds like a pretty special case to me.

Old, working software should not stop working (on the same hardware and OS version) under any circumstances, and for that reason updates need to be possible to opt out of and to roll back.
Normally? Maybe.

When someone has been deliberately publishing malicious code? No, preventing people from publishing malicious code in the first place (by making sure the cost of being caught far outweighs any potential gains) is more important. Let these companies create malicious Windows software instead.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Developer keys and digitally signed code

Post by Rusky »

People and corporations don't just start pushing malicious code into otherwise-good code. Malicious code is either pushed all on its own through vulnerabilities or phishing, or in wrappers and installers by desperate sites like download.com or sourceforge.net.

Remove actual malicious code, don't remove useful code just by association. The downsides for users are far too great to be worth the marginal protection against malware.

And it is marginal, especially on an OS with a better security model and a better software distribution story: fewer vulnerabilities, fewer ways to go phishing, and fewer installers to bundle crap with.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Developer keys and digitally signed code

Post by Brendan »

Hi,
Rusky wrote:People and corporations don't just start pushing malicious code into otherwise-good code. Malicious code is either pushed all on its own through vulnerabilities or phishing, or in wrappers and installers by desperate sites like download.com or sourceforge.net.

Remove actual malicious code, don't remove useful code just by association. The downsides for users are far too great to be worth the marginal protection against malware.
I'm removing the association (e.g. no malicious code injection via. shared library or file system tampering, an installer and updater built into the OS, etc) to allow malicious code to be identified; then removing the desire to create malicious code; then (if all else fails, which should never actually happen and is therefore fairly irrelevant) crushing people that still publish malicious code into a fine powder to make sure anyone else considering doing the same has second thoughts.
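
Mechanically; the "crushing" needs nothing more exotic than an install database keyed by developer-key fingerprint - revoking one fingerprint disables every package signed with it and touches nothing else (a sketch, with all names invented):

Code:

# Install records keyed by developer-key fingerprint; revoking one
# fingerprint disables every package signed with it and nothing else.
installed = {
    "ab12cd": {"goodcorp-editor", "goodcorp-mailer"},
    "ef34ab": {"othervendor-game"},
}
revoked = set()

def revoke(fingerprint):
    revoked.add(fingerprint)
    return sorted(installed.get(fingerprint, ()))   # now-unrunnable

def may_run(package):
    return all(fp not in revoked or package not in pkgs
               for fp, pkgs in installed.items())

print(revoke("ab12cd"))             # ['goodcorp-editor', 'goodcorp-mailer']
print(may_run("goodcorp-editor"))   # False
print(may_run("othervendor-game"))  # True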

The downside for users is 10 minutes to find and download an alternative piece of software (followed by a lifetime of feeling safe from both malicious software and vendor lock-in).


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.