
Posted: Sat Oct 28, 2006 3:11 pm
by Brendan
Hi,
gaf wrote:
Brendan wrote:I need a good way to ensure my OS hasn't been tampered with since last boot that doesn't involve asking an administrator for an "authorisation password" each time the computer is booted.
How often has it happened to you that somebody hacked your operating system while you were out for lunch?
For me, this is a complex topic. Imagine you've got a network with computers scattered everywhere (like a University campus), and all of those computers are part of a distributed cluster and act like a single large computer. To prevent things like packet sniffers all networking is encrypted, and to prevent theft (including laptops) all hard disk partitions are encrypted. To make it work, all computers that are part of the cluster need to share a "master key". Without this encryption key the OS can't access its own file systems or talk to other computers on the network.

The problem is how the computer obtains this master key during boot. It's too important to transfer over the network or store on a hard disk, because that makes it easy to obtain. Without special hardware, a trusted administrator needs to type in a "master password" when the OS boots, so that the OS can get the master key from that.

There are a few alternatives here - each computer could have biometrics (e.g. a USB fingerprint scanner so that the administrator only needs to do a fingerprint scan), or you could install the OS so that this security is disabled (e.g. if there's physical security instead, or it's a few games machines on a home LAN).

With trusted computing the hardware can make sure my software is running, and then allow my OS to get the key from hardware. This is the only option that works without the administrator being required (and without the security being disabled).

I am still trying to find alternatives, and I'd very much like to get rid of the master key completely (it's a single point of failure), but so far I've been unable to find any suitable method of doing it.
gaf wrote:
Brendan wrote:If I give my accountant a copy of all my financial details, I'd like to be able to prevent the accountant from accessing this information if I change to a different accountant, even when he has several copies of the file.
I would guess that such cases should already be covered by regular law in most states: the accountant may not reveal confidential information in the first place (NDA), just like your physician may not chat about your health. This system has worked quite well until today, so I actually see little reason for change.
Laws only protect you from people who are unwilling to break those laws. For example, if you wanted to kill someone badly enough, there's nothing stopping you as long as you're prepared to accept the consequences.
gaf wrote:
Brendan wrote:Some large media companies might use it to limit access to their content, but I couldn't care less - I don't buy (or pirate) their content anyway so it won't affect me.
It's quite naive to think that you just won't be affected by it, as digital rights management will change the way we all access our media. It's not only a problem for 14 year old kids copying video games: everybody that listens to music, plays games, watches movies, reads books, manuals or even newspapers will have to deal with rights management.

Actually I'm not against paid content in general. After all, it might actually increase the amount of content available, as it allows companies to sell their services online. This must however not interfere with my rights as a customer: if I bought something I want to own it "physically" (private copy, right to use it as often and wherever I want to, etc). Art just shouldn't be charged by the meter...
Society in general will need to come to terms with how DRM is used, but if people refuse to buy something because of the restrictions placed on it then companies will have little choice but to reduce those restrictions, and if people are willing to pay the price despite the restrictions then who are we to say those restrictions are bad?

For example, how is going to the video shop and renting a movie for 2 nights different to going to a web site and paying the same money to rent the same movie for the same amount of time?
gaf wrote:
Brendan wrote:Other large companies (e.g. Microsoft) might use DRM to screw their users out of more cash, but that's good too - more reasons for people to shift to Linux or some other OS like mine...
And you really think that Linux users wouldn't be affected by this? You'll still have to buy all the "trustworthy" programs to access the taxed content (music, videos, ebooks, websites).
Not really - I can choose to go without this content if I think it's not worth the price and restrictions.

Your problem isn't with the technology involved with trusted computing or DRM, it's with the way this technology might be used in capitalist economies (where consumer choice is meant to influence a company's finances, rather than a company's finances influencing consumer choice).


Cheers,

Brendan

Posted: Sun Oct 29, 2006 9:11 am
by gaf
Brendan wrote:With trusted computing the hardware can make sure my software is running, and then allow my OS to get the key from hardware. This is the only option that works without the administrator being required.
In my opinion getting the key from the TCP hardware isn't inherently more secure than obtaining it over the network or from the hard disk. The password is still physically present on the machine, and I am sure that there'll eventually be ways to circumvent the DRM security scheme.

Personally I like the idea of using a USB stick to unlock the computer. Every user can be given a USB token that holds the master key as well as all the other passwords of that person. As long as the USB token is plugged in, all of the user's data and files are accessible; once it gets removed the user gets logged out automatically.

The big advantage is that the encryption key is physically separated from the machine. Only users that have a USB key may access data or the network, and two-factor authentication could be used to further improve security.
Brendan wrote:I am still trying to find alternatives, and I'd very much like to get rid of the master key completely (it's a single point of failure), but so far I've been unable to find any suitable method of doing it.
Couldn't public key encryption be used to exchange data between nodes? All data gets encrypted using the public key and only the intended receiver may decrypt it with its private key. Local data on hard disks could be encrypted using a different key on each node, making an exposed key less of a security risk.
Brendan wrote:Laws only protect you from people who are unwilling to break those laws. For example, if you wanted to kill someone badly enough, there's nothing stopping you as long as you're prepared to accept the consequences.
And if I were determined to steal your ERM-protected data, nobody could stop me from doing it. Imagine that you were asked by Ageia to write a Linux driver for their upcoming PhysX card. They send you some top-secret information about the circuits of the card and the programming interface. You write the driver, and the ERM-protected files they sent you become inaccessible once you've transmitted your work to the company.

Two weeks later your competitor Havok contacts you, showing some interest in the information that was given to you by Ageia. Could you provide them with the information if you really wanted to? You've spent some time writing the driver and thus probably remember the more important details by heart. Apart from that, you most likely still have the driver's source code on your hard disk. If you're a criminal mastermind that planned betraying Ageia from the start, you might even have taken some photos of the circuits using your 1970s analogue camera.

In my opinion ERM only creates a false feeling of security. Everything that you showed to others is potentially exposed, and the only way to stop people from misusing your information is mutual trust and contracts. If you want to reduce the risk, just don't provide any more information than absolutely necessary, and think twice before trusting strangers (use-your-brain doctrine ;)).
Brendan wrote:Society in general will need to come to terms with how DRM is used, but if people refuse to buy something because of the restrictions placed on it then companies will have little choice but to reduce those restrictions, and if people are willing to pay the price dispite the restrictions then who are we to say those restrictions are bad?
You're assuming that there's healthy competition between the companies, which allows customers to choose what they want to pay for. You also rely on the average user making reasonable decisions and refusing to take offers that aren't acceptable (have you watched TV lately?).

You could as well expect a 3rd world mine worker to go on strike if the wages aren't even sufficient for everyday expenses. We just aren't as independent as we like to think we are. Unless consumers get protected from the power of the big companies, we'll have to live by their conditions, as we're just not strong enough to make a difference.

In my opinion computing, and especially the internet, are historical opportunities due to their democratic nature that allows everybody to participate. Digital rights management is a step towards the professionalization of this unique platform. I just don't want it to become yet another medium controlled by big companies that provide their consumers with mediocre productions.
Brendan wrote:Your problem isn't with the technology involved with trusted computing or DRM, it's with the way this technology might be used in capitalist economies
So the problem is not nuclear power but the politicians that used it for the wrong reasons? You're probably right, and yet I wonder if we wouldn't be better off if it had never been invented. In my opinion it's too easy to say that you're just an engineer/scientist who isn't liable for what will be done with his work. The scientists working on the Manhattan Project are just as responsible for the destruction of Hiroshima and Nagasaki as the bomber pilots that dropped the bombs. Technological progress changes our society, and I do believe that you always have to consider the outcome of your work.

regards,
gaf

Posted: Sun Oct 29, 2006 12:31 pm
by Brendan
Hi,
gaf wrote:
Brendan wrote:With trusted computing the hardware can make sure my software is running, and then allow my OS to get the key from hardware. This is the only option that works without the administrator being required.
In my opinion getting the key from the TCP hardware isn't inherently more secure than obtaining it over the network or from the hard disk. The password is still physically present on the machine, and I am sure that there'll eventually be ways to circumvent the DRM security scheme.
The idea would be to use the TCP hardware to store the key securely after OS installation, so that only the OS can obtain the key after that. If this can be done, then the administrator can provide the "master key" during OS installation, but wouldn't need to provide the master key for subsequent boots.

To be honest, I still haven't figured out how this is meant to work, or how secure the TPM chip actually is. I just don't have enough prior knowledge of cryptography to make much sense of their specifications. When the OS gets to the stage where strong security is desirable, I'll need to do a huge amount of research. :cry:
gaf wrote:Personally I like the idea of using a USB stick to unlock the computer. Every user can be given a USB token that holds the master key as well as all the other passwords of that person. As long as the USB token is plugged in, all of the user's data and files are accessible; once it gets removed the user gets logged out automatically.
This requires a certain level of trust. If you assume all of my source code is publicly available, then anyone who has the USB token could create some software that uses the token to gain access to the resources of the cluster.
gaf wrote:The big advantage is that the encryption key is physically separated from the machine. Only users that have a USB key may access data or the network, and two-factor authentication could be used to further improve security.
For identifying a specific user, it's a very nice idea. My trouble is identifying the machine, not the user. For example, consider a headless computer in a back room. When this computer boots, how does the rest of the cluster know it's authorised and hasn't been tampered with since it was authorised?
gaf wrote:
Brendan wrote:I am still trying to find alternatives, and I'd very much like to get rid of the master key completely (it's a single point of failure), but so far I've been unable to find any suitable method of doing it.
Couldn't public key encryption be used to exchange data between nodes? All data gets encrypted using the public key and only the intended receiver may decrypt it with its private key.
For public key encryption the sender knows that only the receiver can receive the data, but the receiver doesn't know that the sender actually sent it. For example, if a computer receives an encrypted "chmod a+r /home/foo/" command, it doesn't know who encrypted the command. If it was encrypted by a user logged into my OS on another computer then the sender would've done the necessary security checking, so this is fine. If a cracker running a different OS used a packet sniffer to obtain the public key and used it to encrypt and send the command, then that's not so good.
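To sketch what I mean (toy code, nothing from my actual OS - the shared key and command are made up), encryption alone proves nothing about who sent a command, but a keyed MAC over the command does, assuming both ends already share a secret:

```python
import hashlib
import hmac

MASTER_KEY = b"shared-cluster-secret"  # hypothetical pre-shared secret


def make_command(key, command):
    # Sender attaches an HMAC tag, so the receiver can verify the origin.
    tag = hmac.new(key, command, hashlib.sha256).digest()
    return command, tag


def accept_command(key, command, tag):
    # Receiver recomputes the tag; constant-time compare avoids timing leaks.
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)


cmd, tag = make_command(MASTER_KEY, b"chmod a+r /home/foo/")
print(accept_command(MASTER_KEY, cmd, tag))           # True: sender knew the key
print(accept_command(MASTER_KEY, cmd, b"\x00" * 32))  # False: forged tag rejected
```

A cracker sniffing the network sees the command and the tag, but can't forge a tag for a different command without the shared key - which is exactly the prior knowledge problem again.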
gaf wrote:Local data on hard disks could be encrypted using a different key on each node, making an exposed key less of a security risk.
That is an option, but in some situations it's a little too restrictive. For example, if one computer blows up, being able to shift the IDE drives to another computer would be nice. For flash memory, the OS or user can copy files onto the encrypted flash memory (including the file permissions) and those files could be accessed from any computer in the cluster (if the user logs in and the file permissions allow them access).
Brendan wrote:Laws only protect you from people who are unwilling to break those laws. For example, if you wanted to kill someone badly enough, there's nothing stopping you as long as you're prepared to accept the consequences.
And if I were determined to steal your ERM-protected data, nobody could stop me from doing it. Imagine that you were asked by Ageia to write a Linux driver for their upcoming PhysX card. They send you some top-secret information about the circuits of the card and the programming interface. You write the driver, and the ERM-protected files they sent you become inaccessible once you've transmitted your work to the company.

In general "100% secure" is impossible. The idea is to make unauthorised access impractical, which involves determining how sensitive the information is and using that to determine what "impractical" means, then finding suitable methods of implementing security to suit.

For the PhysX card, the effort involved with stealing the design should be more than the effort it'd take to develop something similar from scratch (or licence it).
gaf wrote:You're assuming that there's healthy competition between the companies, which allows customers to choose what they want to pay for. You also rely on the average user making reasonable decisions and refusing to take offers that aren't acceptable (have you watched TV lately?).

You could as well expect a 3rd world mine worker to go on strike if the wages aren't even sufficient for everyday expenses. We just aren't as independent as we like to think we are. Unless consumers get protected from the power of the big companies, we'll have to live by their conditions, as we're just not strong enough to make a difference.
For capitalism, theory isn't always the same as reality. If your car broke down, would you blame your cat? If capitalism is broken, would you blame DRM?

I'd probably try to find ways of fixing the car instead of blaming the cat. I'd also probably try to find ways of fixing capitalism instead of blaming DRM.
gaf wrote:In my opinion computing, and especially the internet, are historical opportunities due to their democratic nature that allows everybody to participate. Digital rights management is a step towards the professionalization of this unique platform. I just don't want it to become yet another medium controlled by big companies that provide their consumers with mediocre productions.
The internet is a commercial thing. For everything on the internet, someone paid something to get it there, and you pay something to access it.
gaf wrote:
Brendan wrote:Your problem isn't with the technology involved with trusted computing or DRM, it's with the way this technology might be used in capitalist economies
So the problem is not nuclear power but the politicians that used it for the wrong reasons? You're probably right, and yet I wonder if we wouldn't be better off if it had never been invented. In my opinion it's too easy to say that you're just an engineer/scientist who isn't liable for what will be done with his work. The scientists working on the Manhattan Project are just as responsible for the destruction of Hiroshima and Nagasaki as the bomber pilots that dropped the bombs. Technological progress changes our society, and I do believe that you always have to consider the outcome of your work.
Is it bad to kill 1 person if it saves 2 other people's lives? Take a look at this page on Wikipedia - deciding between "good" and "bad" is easy in cartoons...


Cheers,

Brendan

Posted: Sun Oct 29, 2006 4:20 pm
by gaf
Brendan wrote:For public key encryption the sender knows that only the receiver can receive the data, but the receiver doesn't know that the sender actually sent it.
Maybe you could also use an algorithm similar to EAP-TLS to establish a secure connection. The basic idea seems to be to use a Diffie-Hellman key exchange (wiki) to calculate a common symmetric key that is then used to transfer the actual data.
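Just to illustrate the key exchange step (toy parameters only - a real deployment would use a standardised 2048-bit+ group, and these numbers offer no security at all):

```python
import secrets

# Toy Diffie-Hellman exchange. Both sides agree on public parameters p and g.
p = 0xFFFFFFFB  # a 32-bit prime, far too small for real use
g = 5

a = secrets.randbelow(p - 2) + 1  # Alice's private exponent (never sent)
b = secrets.randbelow(p - 2) + 1  # Bob's private exponent (never sent)

A = pow(g, a, p)  # Alice sends A over the untrusted network
B = pow(g, b, p)  # Bob sends B over the untrusted network

# Both sides derive the same symmetric key without it ever crossing the wire:
# (g^b)^a mod p == (g^a)^b mod p
key_alice = pow(B, a, p)
key_bob = pow(A, b, p)
print(key_alice == key_bob)  # True
```

An eavesdropper who sees p, g, A and B can't feasibly compute the key - but as noted below, nothing here authenticates who you exchanged keys with.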
Brendan wrote:For identifying a specific user, it's a very nice idea. My trouble is identifying the machine, not the user. For example, consider a headless computer in a back room. When this computer boots, how does the rest of the cluster know it's authorised and hasn't been tampered with since it was authorised?
What you're thinking of is a homogeneous network of trusted nodes. As TCP ensures that all nodes are running certified software, there's no need for any barriers or security checks between the computers. The network is secure as no unauthorised user could enter it.

There's an alternative approach that does not assume the nodes to be trustworthy. As hacked machines could enter the network, the nodes must simply require access checks to protect their resources. A user-specific capability on a USB stick could be used to decide whether access should be granted:

Let's assume that you've just entered the network to get a file from some remote node. Your computer would establish a secure connection to the other machine using something similar to TLS. Using this connection a message is sent asking for the file. The receiver compares the key included in the message with the entries in the ACL of the file. Access to the file is only granted if a match is found.

Using this scheme remote resources can only be accessed if the user has the required capabilities on his USB stick. Whether his machine can be trusted or not is irrelevant for the network.
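The check itself would be trivial - something like this sketch (the paths, key names and ACL layout are all invented for the example):

```python
# Hypothetical file server side: each file carries an ACL of user keys.
acls = {
    "/home/foo/report.txt": {"user-key-gaf", "user-key-brendan"},
    "/home/foo/secret.txt": {"user-key-brendan"},
}


def grant_access(path, user_key):
    # Grant access only if the key presented in the request message
    # appears in the file's access-control list.
    return user_key in acls.get(path, set())


print(grant_access("/home/foo/report.txt", "user-key-gaf"))  # True
print(grant_access("/home/foo/secret.txt", "user-key-gaf"))  # False
```

The interesting part is of course not the lookup but making sure the key in the message really came from the USB token, which is what the TLS-like connection would have to guarantee.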
Brendan wrote:I'd also probably try to find ways of fixing capitalism instead of blaming DRM.
As I've already said, I do not oppose DRM in general. Maybe I am just a bit more pessimistic when it comes to trusting the self-regulation of our capitalist systems. If digital rights management is however controlled by reasonable laws, it could indeed be a very useful invention.
Brendan wrote:Is it bad to kill 1 person if it saves 2 other people's lives? Take a look at this page on Wikipedia - deciding between "good" and "bad" is easy in cartoons...
I've just read the article, and I can't say that it really changed my opinion. At the time when the two cities were destroyed, Japan was already losing the war. The war in Europe was over, so the US could concentrate on the Pacific, and the Soviet Union was just about to give up its neutrality and attack Japan (according to the article the US knew that). There's no doubt that there was a strong faction in Japan that supported the idea of a total war, but the overall situation for the country was so hopeless that they would have been forced to surrender rather sooner than later.

In my humble opinion, intentionally killing 200'000 civilians is never acceptable, even if it might have stopped the war a bit earlier. Nobody has the right to make such trades with human lives.

regards,
gaf

Posted: Sun Oct 29, 2006 8:56 pm
by Brendan
Hi,

I think, in general, what I'm after is entirely impossible. The problem is authentication, rather than cryptography.

As an analogy, imagine you've got a room full of 20 complete strangers. All strangers can overhear everything said by all other strangers, and none of these strangers have any prior knowledge (a secret). I want an algorithm where any given stranger can know whether or not to trust any other stranger.

To make the analogy make sense, I'll rewrite it.

Imagine you've got a room full of 20 computers. All computers can intercept all communication between all other computers, and none of these computers have any prior knowledge (a secret). I want an algorithm where any given computer can know whether or not to trust any other computer.

Because this is impossible, the only alternative is to remove one of the limitations. Either give each stranger a secret or introduce them beforehand so they aren't strangers (I thought about making it so their conversations can't be overheard, but couldn't see how that helped or how it could be implemented over an untrusted network). Introducing the strangers is similar to giving them a secret - it's all just prior knowledge.

After accepting this compromise (strangers must have some form of prior knowledge), the problem becomes storing and retrieving this prior knowledge securely (e.g. storing it in a trusted computing chip), or alternatively providing them with the prior knowledge securely when they need it (e.g. a trusted administrator typing it in when the machine boots).

I've been reading through some of the trusted computing chip's specifications. The basic idea seems to be hashing and logging. The CPU starts running code in the TPM chip (not in the BIOS), and at each stage a hash is created and logged in the TPM chip before the next stage is started. Somewhere before the OS's boot loader is started these logs are checked to verify that nothing has changed. The OS's boot code continues this by hashing, logging and checking, so that by the time the OS is operational it can know that nothing it relied on (including itself) has been changed. From here the OS can use a digital signature in the TPM chip for authentication purposes, or to retrieve an encryption key it previously stored in the TPM chip. This solves my problems entirely.
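The hash-and-log chain can be sketched like this (modelled loosely on TPM register extension as I understand it - the stage names and the zeroed 20-byte register are my own simplification):

```python
import hashlib


def extend(register, measurement):
    # TPM-style "extend": the register absorbs the hash of each boot stage,
    # so the final value depends on every stage AND the order they ran in.
    return hashlib.sha1(register + hashlib.sha1(measurement).digest()).digest()


register = b"\x00" * 20  # register starts zeroed at power-on
for stage in [b"firmware", b"boot loader", b"kernel"]:
    register = extend(register, stage)

# A tampered boot loader yields a different final value, so a key sealed
# to the expected value would not be released.
tampered = b"\x00" * 20
for stage in [b"firmware", b"evil boot loader", b"kernel"]:
    tampered = extend(tampered, stage)

print(register != tampered)  # True
```

Note there's no way to "rewind" the register to hide a bad measurement - you can only extend it, which is the whole point.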
gaf wrote:
Brendan wrote:For public key encryption the sender knows that only the receiver can receive the data, but the receiver doesn't know that the sender actually sent it.
Maybe you could also use an algorithm similar to EAP-TLS to establish a secure connection. The basic idea seems to be to use a Diffie-Hellman key exchange (wiki) to calculate a common symmetric key that is then used to transfer the actual data.
From the wikipedia page:

"In the original description, the Diffie-Hellman exchange by itself does not provide authentication of the parties, and is thus vulnerable to man in the middle attack. The man-in-the-middle may establish two distinct Diffie-Hellman keys, one with Alice and the other with Bob, and then try to masquerade as Alice to Bob and/or vice-versa, perhaps by decrypting and re-encrypting messages passed between them. Some method to authenticate these parties to each other is generally needed."

For EAP-TLS it seems I'd need a trusted third party (a RADIUS authentication server (?)) and digitally signed certificates stored locally somewhere.
gaf wrote:There's an alternative approach that does not assume the nodes to be trustworthy. As hacked machines could enter the network, the nodes must simply require access checks to protect their resources. A user-specific capability on a USB stick could be used to decide whether access should be granted:

Let's assume that you've just entered the network to get a file from some remote node. Your computer would establish a secure connection to the other machine using something similar to TLS. Using this connection a message is sent asking for the file. The receiver compares the key included in the message with the entries in the ACL of the file. Access to the file is only granted if a match is found.

Using this scheme remote resources can only be accessed if the user has the required capabilities on his USB stick. Whether his machine can be trusted or not is irrelevant for the network.
I wasn't thinking of having any users. :lol:

Hopefully, as computers are randomly turned on and off, they automatically authenticate themselves with other computers and start sharing sensitive data, regardless of where users (if any) are.

This sounds insane, but consider a cluster of 3 computers where the cluster is being used as a CVS server. Computers A and B are turned on, and computer A crashes (hardware failure). Computer B notices that computer A isn't responding and sends a "wake on LAN" broadcast packet. Computer C wakes up, boots, authenticates, and then starts synchronising its local file system with the files on computer B (while handling requests from the internet), most likely beginning with an up-to-date list of usernames and passwords to determine who has access to CVS.
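The wake-on-LAN part at least is simple - the magic packet is just 6 bytes of 0xFF followed by the target's MAC address repeated 16 times, broadcast over UDP (the MAC below is made up):

```python
import socket


def magic_packet(mac):
    # Standard WoL payload: 6 x 0xFF, then the MAC address repeated 16 times.
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    return b"\xff" * 6 + mac_bytes * 16


def wake(mac, broadcast="255.255.255.255", port=9):
    # Broadcast the packet; the sleeping NIC watches for its own MAC pattern.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(magic_packet(mac), (broadcast, port))
    s.close()


pkt = magic_packet("01:23:45:67:89:ab")
print(len(pkt))  # 102 bytes: 6 + 16 * 6
```

Of course the packet is trivially forgeable, so waking a machine proves nothing - the authentication still has to happen after boot.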


Cheers,

Brendan

Posted: Mon Oct 30, 2006 8:03 am
by gaf
Brendan wrote:By the time the OS is operational it can know that nothing it relied on (including itself) has been changed. From here the OS can use a digital signature in the TPM chip for authentication purposes, or to retrieve an encryption key it previously stored in the TPM chip. This solves my problems entirely.
Up to here it actually only solves the first part of the problem. You now know that the operating system is reliable, as nobody has tampered with it since the last boot. The more difficult part is to convince the other nodes that you're running trustworthy software.

From what I know, the trusted platform module will eventually implement two mechanisms to support remote attestation:

a) Privacy CA is similar to certificate-based authorisation, as it uses a third party to verify the identity of the participants.
b) Direct Anonymous Attestation uses an algorithm based on zero-knowledge proofs (wiki). The idea is to show that the provided AIK is valid and derived from the EK by demonstrating that it can be used.
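Just to give a feeling for what a zero-knowledge proof looks like, here's a toy Fiat-Shamir identification round with tiny numbers - nothing like the actual DAA protocol, and the parameters are laughably insecure:

```python
import secrets

# Public modulus n = p * q; the prover's secret is s, the public value v = s^2 mod n.
n = 3233            # 61 * 53, for illustration only
s = 123             # the prover's secret (never revealed)
v = pow(s, 2, n)    # published by the prover


def prove_once():
    r = secrets.randbelow(n - 1) + 1
    x = pow(r, 2, n)             # prover commits to a random value
    b = secrets.randbelow(2)     # verifier replies with a random challenge bit
    y = (r * pow(s, b, n)) % n   # prover answers; s is blinded by r
    # Verifier checks y^2 == x * v^b (mod n) without ever learning s.
    return pow(y, 2, n) == (x * pow(v, b, n)) % n


# A cheater fails each round with probability 1/2, so many rounds
# drive the cheating probability toward zero.
print(all(prove_once() for _ in range(32)))  # True
```

The point is that the verifier becomes convinced the prover knows s, yet the transcript reveals nothing about s itself - which is roughly the trick DAA plays with the AIK/EK relationship.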
Brendan wrote:I wasn't thinking of having any users. :lol:
Well, I wasn't necessarily talking about physical users or actual persons. For me a user is just an abstract entity that holds access permissions to a set of resources (files, printers, etc). The only alternative is to allow everybody to access everything, which is probably not really a wise decision.
Brendan wrote:As an analogy, imagine you've got a room full of 20 complete strangers. All strangers can overhear everything said by all other strangers, and none of these strangers have any prior knowledge (a secret). I want an algorithm where any given stranger can know whether or not to trust any other stranger.
My point still is that the user should authenticate the computer. It doesn't matter which operating system I use or if the computer itself is reliable. If I trust a computer by using it, the network can assume that it's safe to send my files to it. My idea is that a regular user can't access all data, but only files and resources that he's authorised to use. The worst thing that could happen is that I expose my own stuff. If I don't want that, it's my business to look after my files.

The "secret" in my scenario is the user key stored on the USB stick. All data that gets exchanged between the nodes is encrypted; without the user key you can't really do much with it.

regards,
gaf

Posted: Mon Oct 30, 2006 12:28 pm
by Brendan
Hi,
gaf wrote:
Brendan wrote:By the time the OS is operational it can know that nothing it relied on (including itself) has been changed. From here the OS can use a digital signature in the TPM chip for authentication purposes, or to retrieve an encryption key it previously stored in the TPM chip. This solves my problems entirely.
Up to here it actually only solves the first part of the problem. You now know that the operating system is reliable, as nobody has tampered with it since the last boot. The more difficult part is to convince the other nodes that you're running trustworthy software.
If all computers know the same secret (a master key), then this can be used to encrypt/decrypt storage devices and network connections. If a computer can't encrypt or decrypt then it doesn't know the secret and isn't trusted, and if it can encrypt and decrypt then it does know the secret and can be trusted.
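As a sketch (not my actual protocol - the key and nonce handling are simplified), a node could prove it knows the master key with a challenge-response handshake, so the key itself never crosses the wire:

```python
import hashlib
import hmac
import secrets

MASTER_KEY = b"cluster-master-key"  # hypothetical shared secret


def challenge():
    # A fresh random nonce per handshake prevents replaying an old response.
    return secrets.token_bytes(16)


def respond(key, nonce):
    # The prover's answer depends on both the nonce and the secret key.
    return hmac.new(key, nonce, hashlib.sha256).digest()


def verify(key, nonce, response):
    return hmac.compare_digest(respond(key, nonce), response)


nonce = challenge()
print(verify(MASTER_KEY, nonce, respond(MASTER_KEY, nonce)))    # True: node knows the key
print(verify(MASTER_KEY, nonce, respond(b"wrong-key", nonce)))  # False: untrusted node
```

An eavesdropper who records one handshake can't answer the next one, because the nonce changes every time.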
gaf wrote:My point still is that the user should authenticate the computer. It doesn't matter which operating system I use or if the computer itself is reliable. If I trust a computer by using it, the network can assume that it's safe to send my files to it.

My idea is that a regular user can't access all data, but only files and resources that he's authorised to use. The worst thing that could happen is that I expose my own stuff. If I don't want that, it's my business to look after my files.
For me, a trusted user may be able to authenticate a computer, but normal users can't.

For normal "client/server" situations what you're describing is common practice, and quite suitable. For a "peer-to-peer" distributed system like mine, it's not that simple.

For example, if a hard disk on one computer is full, the OS will free up hard drive space by automatically moving less often used files to other computers. If a hard disk is empty, the OS will try to make "redundant copies" of files. In both cases sensitive data may be sent to any (trusted) computer, regardless of whether or not the current user (if any) has access to that data.

If you log in on computer A and open "mySecrets.doc" in a word processor, then computer B might run the word processor and load the file from computer C. While the word processor is running, computer D might be providing spellchecking services. The computer you are actually using might do nothing except display the video and send keyboard/mouse packets.

In this case you need to be able to trust all computers that have been authenticated, and all computers that could possibly be authenticated in the future.


Cheers,

Brendan

Posted: Mon Oct 30, 2006 2:16 pm
by Candy
Brendan wrote:If all computers know the same secret (a master key), then this can be used to encrypt/decrypt storage devices and network connections. If a computer can't encrypt or decrypt then it doesn't know the secret and isn't trusted, and if it can encrypt and decrypt then it does know the secret and can be trusted.
What if one of the computers is ever compromised?
For example, if a hard disk on one computer is full, the OS will free up hard drive space by automatically moving less often used files to other computers. If a hard disk is empty, the OS will try to make "redundant copies" of files. In both cases sensitive data may be sent to any (trusted) computer, regardless of whether or not the current user (if any) has access to that data.
How do you quota distributed systems? How do you prevent abuse by authorized users?
If you log in on computer A and open "mySecrets.doc" in a word processor, then computer B might run the word processor and load the file from computer C. While the word processor is running, computer D might be providing spell-checking services. The computer you are actually using might do nothing except display the video and send keyboard/mouse packets.
How fast a network connection do you need? What if you want to do this with internet-connected computers, that might have an asymmetric connection or a very slow connection? Think cable or 56k modems, possibly dialing in with a GSM in a very rural area.

Posted: Mon Oct 30, 2006 5:05 pm
by Brendan
Hi,
Candy wrote:
Brendan wrote:If all computers know the same secret (a master key), then this can be used to encrypt/decrypt storage devices and network connections. If a computer can't encrypt or decrypt then it doesn't know the secret and isn't trusted, and if it can encrypt and decrypt then it does know the secret and can be trusted.
What if one of the computers is ever compromised?
Then all computers are compromised. This isn't much different from a normal OS (in both cases, once the OS's security is compromised, all computers running that instance of the OS are compromised).

The main difference is how much easier it would be to compromise a distributed system (more points of weakness), which is why I need stronger security than a normal OS would need.
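The "if it can encrypt and decrypt, it knows the secret and can be trusted" idea can be sketched as a simple challenge-response. This is only an illustration using an HMAC as the keyed function rather than any particular cipher, and all names here are hypothetical:

```python
import hashlib
import hmac
import secrets

# Hypothetical shared cluster secret; in practice this would be derived from
# the administrator's password or released by trusted hardware at boot,
# never hard-coded in a source file.
MASTER_KEY = b"shared-cluster-secret"

def make_challenge():
    # A fresh random nonce, so recorded responses can't be replayed later.
    return secrets.token_bytes(16)

def respond(challenge, key):
    # A node proves knowledge of the key by keying a MAC over the challenge.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def is_trusted(challenge, response, key=MASTER_KEY):
    # Constant-time comparison, to avoid leaking information via timing.
    return hmac.compare_digest(response, respond(challenge, key))
```

A node that knows `MASTER_KEY` passes the check and a node with any other key fails, which is exactly the "knows the secret, therefore trusted" property described above, and also exactly why one compromised node compromises everything.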
Candy wrote:
For example, if a hard disk on one computer is full, the OS will free up space by automatically moving less frequently used files to other computers. If a hard disk has plenty of free space, the OS will try to make "redundant copies" of files there. In both cases sensitive data may be sent to any (trusted) computer, regardless of whether or not the current user (if any) has access to that data.
How do you quota distributed systems? How do you prevent abuse by authorized users?
I'm not sure it'd be possible for me to support quotas. For example, if the cluster consists of 2 sub-nets connected by a router and that router crashes, then the cluster becomes 2 separate halves with no way of knowing how many resources a user is using on the other half. The same problem happens for laptops (where the laptop might be connected to the cluster 2 hours a day, and used elsewhere for 6 hours a day).

How does any other system prevent abuse by authorized users? IMHO the only things you can do are logging/auditing (so you can find out who did what, after it's too late), limiting access to minimise damage, and giving the system owner the power to lock the administrator ("root user") out of the system (for some organisations, the most trusted user isn't an administrator - e.g. a bank manager).
Candy wrote:
If you log in on computer A and open "mySecrets.doc" in a word processor, then computer B might run the word processor and load the file from computer C. While the word processor is running, computer D might be providing spell-checking services. The computer you are actually using might do nothing except display the video and send keyboard/mouse packets.
How fast a network connection do you need? What if you want to do this with internet-connected computers, that might have an asymmetric connection or a very slow connection? Think cable or 56k modems, possibly dialing in with a GSM in a very rural area.
Hmm. First, let me split some hairs between "need" and "want". How fast a network connection needs to be would depend on how much data needs to be transferred over that connection in a given amount of time. How much network speed you want would depend on what sort of performance you find acceptable and how expensive networking is. Neither of these things can really be pre-determined (they both depend on too much that isn't known).

If someone needs twenty 10 gigabit Ethernet cards for a cluster of 5 machines to get acceptable performance for a specific purpose, then that's what they'll have to get. I'm hoping to get good performance for normal office use out of common 100 Mbit/s Ethernet for up to 20 normal computers/users (if I fail to do that, something is severely messed up).

From my point of view it's more about using available bandwidth (and other resources) efficiently - gathering information and implementing algorithms that make intelligent choices based on that information. For example, a process that interacts with a user would normally be run on the computer that the user is using, unless there's some reason not to (e.g. if the user is using a 25 MHz 80486 and wants to compile a large program, and there's a quad-CPU Opteron server doing nothing on the same 100 Mbit/s Ethernet).


Cheers,

Brendan

Posted: Mon Oct 30, 2006 6:48 pm
by B.E
Brendan wrote: For me, this is a complex topic. Imagine you've got a network with computers scattered everywhere (like a University campus), and all of those computers are part of a distributed cluster and act like a single large computer. To prevent things like packet sniffers all networking is encrypted, and to prevent theft (including laptops) all hard disk partitions are encrypted. To make it work, all computers that are part of the cluster need to share a "master key". Without this encryption key the OS can't access its own file systems or talk to other computers on the network.
Have you heard of AES encryption? It uses a private/public key method of encryption, and on top of that the messages that are sent could be signed by the sender, so that the receiver can identify who sent them. This way only the computer the information is going to is able to decrypt the message and identify who sent it.

The problem with DRM is that corporate companies can restrict the time that the product is usable. Which, in the case of an accountant, is what you want, but this mechanism can be used by software manufacturers to get more money out of you. For example, say you bought a copy of the full version of Vista from M$. Now say that M$ wants to make Vista expire every year. This means that you will have to pay M$ a fee every 12 months to use your computer. It also means that your data is not accessible until you re-register with M$.

Posted: Mon Oct 30, 2006 8:11 pm
by Brendan
Hi,
B.E wrote:Have you heard of AES encryption? It uses a private/public key method of encryption, and on top of that the messages that are sent could be signed by the sender, so that the receiver can identify who sent them. This way only the computer the information is going to is able to decrypt the message and identify who sent it.
How are the private keys and the digital signature stored ready for boot?
B.E wrote:The problem with DRM is that corporate companies can restrict the time that the product is usable. Which, in the case of an accountant, is what you want, but this mechanism can be used by software manufacturers to get more money out of you.
In this case the company would be open to class actions involving false advertising and/or fair use (unless the customer knows about it in advance, in which case it's the customer's choice).
For example, say you bought a copy of the full version of Vista from M$. Now say that M$ wants to make Vista expire every year. This means that you will have to pay M$ a fee every 12 months to use your computer. It also means that your data is not accessible until you re-register with M$.
In this case it's easier for Microsoft to produce an insecure product and then charge you a monthly fee for security software. This avoids false advertising (as long as they don't say it's 100% secure, the software does what it was advertised to do) and also avoids fair use problems (you aren't prevented from trying to use the software, even if it is full of spyware, trojans and viruses).

IMHO companies will find ways to get more money out of people with or without DRM...


Cheers,

Brendan

Posted: Tue Oct 31, 2006 12:05 am
by Candy
B.E wrote:Have you heard of AES encryption? It uses a private/public key method of encryption, and on top of that the messages that are sent could be signed by the sender, so that the receiver can identify who sent them. This way only the computer the information is going to is able to decrypt the message and identify who sent it.
AES == Rijndael == symmetric encryption. It is intended as a replacement for DES, also a symmetric encryption algorithm, for the purpose of encrypting blocks given knowledge of the key. Public key algorithms need a certain subset of the possible keys, and that subset is commonly so restricted that to get a suitably large set of valid keys you need a much, much larger set of tentatively possible keys. Symmetric encryption can do with 128 or 256 bit keys for, as far as we can tell, a century or more of security, whereas public-key cryptography more or less requires more than 1024 bits, and 2048 bits are recommended for high-security applications that need to last at least 30 years.

Also, public key cryptography isn't naturally safe against man-in-the-middle attacks, which is one of the most important things you want to think about with your new system. What happens when I loop a trusted system connection through one machine with theoretically infinite processing power? Can I use what it sends to make myself appear trusted? In the case of public key cryptography without a trusted third party or pre-exchanged keys, yes.
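Candy's man-in-the-middle point can be shown with a toy Diffie-Hellman exchange. The numbers are far too small for real use and purely illustrative: without authentication, an interceptor can simply negotiate a separate key with each side, and neither end can tell.

```python
# Toy Diffie-Hellman over a tiny group (hypothetical parameters, never use
# numbers this small for real cryptography).
P, G = 23, 5

def dh_public(secret):
    return pow(G, secret, P)

def dh_shared(their_public, my_secret):
    return pow(their_public, my_secret, P)

# Alice and Bob intend to agree on a key...
a, b = 6, 15
# ...but Mallory intercepts both public values and substitutes her own.
m = 13
key_alice_side = dh_shared(dh_public(m), a)  # what Alice computes
key_mallory_a  = dh_shared(dh_public(a), m)  # Mallory's matching copy
key_bob_side   = dh_shared(dh_public(m), b)
key_mallory_b  = dh_shared(dh_public(b), m)

# Mallory now shares one key with Alice and another with Bob, and can
# decrypt, read, and re-encrypt all traffic between them.
assert key_alice_side == key_mallory_a
assert key_bob_side == key_mallory_b
```

This is why SSL/SSH need either a trusted third party (certificates) or a pre-exchanged fingerprint, exactly as discussed below in the thread.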
The problem with DRM is that corporate companies can restrict the time that the product is usable. Which, in the case of an accountant, is what you want, but this mechanism can be used by software manufacturers to get more money out of you. For example, say you bought a copy of the full version of Vista from M$. Now say that M$ wants to make Vista expire every year. This means that you will have to pay M$ a fee every 12 months to use your computer. It also means that your data is not accessible until you re-register with M$.
There are more problems with DRM'ed software, not the least of which is the incredible amount of trouble normal users are loaded with. I asked the IT department at the company I work for to figure out how to properly install a certain program. I asked at the beginning of September (the 4th or so), and they've recently told me they're still working on it. It's not a stupid IT department either.

Posted: Tue Oct 31, 2006 5:31 am
by B.E
I'll get back to you on this tomorrow night, but here's something to think about before then: how do SSL and SSH make sure you're communicating with the right computer?

Posted: Tue Oct 31, 2006 5:59 am
by Pype.Clicker
B.E wrote:I'll get back to you on this tomorrow night, but here's something to think about before then: how do SSL and SSH make sure you're communicating with the right computer?
when you first contact a machine with SSL/SSH, you're shown the hash of the machine's public key, and you're supposed to have a look at it and tell the computer "yes, this is truly the machine I want to connect to". Then your comp. registers the other comp.'s key against its address and will check the machine's identity by itself in future sessions. If the machine's identity ever changes (e.g. because you reinstall it from scratch or because someone stole your IP address), you'll get a security warning when you try to connect (which you usually solve by manually removing the obsolete identity from "known_hosts", or by phoning your sysadmin :P)
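The "trust on first use" behaviour described above can be sketched in a few lines. The host names and key bytes here are made up for illustration; real SSH also records key types and may hash the host names in `known_hosts`:

```python
import hashlib

known_hosts = {}  # host -> fingerprint recorded on first contact

def fingerprint(public_key_bytes):
    # A hash of the host's public key, like the hash SSH shows you.
    return hashlib.sha256(public_key_bytes).hexdigest()

def check_host(host, presented_key):
    fp = fingerprint(presented_key)
    if host not in known_hosts:
        # First contact: real SSH asks the user to confirm this hash.
        known_hosts[host] = fp
        return "trust on first use"
    if known_hosts[host] == fp:
        return "ok"
    # Key changed: a reinstall, a stolen address, or a man in the middle.
    return "warning: host identity changed"
```

All the security hangs on the user actually verifying the fingerprint on that first contact; after that, the stored entry does the checking automatically.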

Posted: Tue Oct 31, 2006 10:35 am
by Candy
B.E wrote:I'll get back to you on this tomorrow night, but here's something to think about before then: how do SSL and SSH make sure you're communicating with the right computer?
They don't. They rely on you to either check that against a fingerprint, or against a prior-exchanged key in a file.