What kind of security and protection systems is everyone using? I ask because I'm extremely confused about them right now and could use some clarity on the subject.
One specific question: do your systems enforce mandatory access controls?
Security and Protection
Mandatory access control is a wonderful idea if you are working in a military, intelligence, or governmental security environment. The rest of us need something a bit more flexible.
The idea of MAC is to be mandatory from the point of view of a user... in the sense that if you write a document, it's going to be protected according to the system policy, whether or not you want that...
So I find it highly unlikely that anyone here is even considering anything like "mandatory access control".
That said, there are two basic approaches to security (other than special purpose MAC style policies):
- access control lists
- capabilities
Capabilities are like car keys. If you have the key to my car, you can drive it. If you don't have the key, you can't drive my car unless you manage to break into it (=hack the system).
ACLs are... well, lists of those allowed to access (and what type of access, if any). If your name is in the list, you can pass; if not, then you can't.
Obviously these do more or less the same thing. The difference is that with capabilities, as long as you have the key, nobody needs to care who you are. With ACLs you need to tell the system first who you are (or carry a name tag) so that the system can then go and check if you are in the list. Also, because ACLs tend to be pretty heavy weight (both in size and in management) and capabilities are essentially just system-protected pointers, capabilities are more suited for really fine-grained stuff.. like "this process of mine can access this and this and this, and this other process that and that, but neither can touch anything else, unless they request them from me directly."
Obviously if it was this simple, everyone would use capabilities, but most systems actually use ACLs! And that's probably mostly because capabilities are kinda hard to distribute.. especially without some sort of ACL. With an ACL system, as long as you know who somebody is (or what user some process belongs to) you can go and check if the name is in the list or not, and that's it.
As concrete examples of some typical systems: in Unix every process has a numeric attribute which says "this process belongs to this user", and if that number is zero, the process can do whatever it pleases; otherwise its requests to open files (mostly) are checked against said file's ACL.
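A minimal sketch of that Unix-style check, with made-up names (the real kernel logic is more involved, e.g. supplementary groups): uid 0 bypasses everything, otherwise the request is matched against the owner/group/other bits of the file's mode.

```c
#include <assert.h>

#define R 4  /* read bit  */
#define W 2  /* write bit */

struct file {
    unsigned owner_uid, owner_gid;
    unsigned mode;          /* 9 bits: rwx for owner, group, other */
};

struct process {
    unsigned uid, gid;
};

/* Returns 1 if the process may access the file with the wanted bits. */
int may_access(const struct process *p, const struct file *f, unsigned want)
{
    unsigned bits;

    if (p->uid == 0)                /* uid 0: "do whatever it pleases" */
        return 1;
    if (p->uid == f->owner_uid)
        bits = (f->mode >> 6) & 7;  /* owner class */
    else if (p->gid == f->owner_gid)
        bits = (f->mode >> 3) & 7;  /* group class */
    else
        bits = f->mode & 7;         /* other class */
    return (bits & want) == want;
}
```

So a file with mode 0644 is writable by uid 0 and its owner, but read-only for everyone else.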
In Windows the situation is basically the same, except processes have security tokens... which probably allow lots of stuff, but I'd bet 99% of processes have exactly one security token (equal to the user that started the process), because if you try to figure out the deal with security tokens by reading Microsoft documentation, you'll start thinking you'd like to try writing DCOM stuff in assembler... no wait, was it that security tokens would actually involve DCOM in some cases?
Some tips: for more info about capabilities, look at the E programming language, or the KeyKOS/EROS/Coyotos line of systems (and ignore stuff like POSIX capabilities, which miss the point).
As for actually implementing stuff... mm... I could post my last design here as an example when I bother to formalize it..
The real problem with goto is not with the control transfer, but with environments. Properly tail-recursive closures get both right.
OK, I do know what those are. I just wonder how people actually use them.
For example, where are capabilities or ACLs stored? With what principal (process, user, thread, etc.) are they associated? How do they encode object identifiers and available rights?
Does anyone know about what're called "split capabilities"?
Capabilities are typically owned by a process, and they name some object and the available interface. Unix file descriptors are in a sense capabilities. The straightforward implementation is to have one table of capabilities for each process, where each capability contains the name of an object (well, a pointer to the entity implementing the service and some "session cookie" type identifier for the service to identify the instance).
You could have capabilities also contain some bitmap of available rights, but you could also just have the service figure that stuff out from the session cookie. The thing that makes them secure is that processes can use them and pass them around and whatever, but can't modify them, and the OS takes care of filling the cookie into the requests.
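A sketch of that per-process table, with invented names (`service`, `cookie`, `cap_table`). The point is that the table lives in kernel memory: user code passes small indexes around, much like Unix file descriptors, but can never forge or modify the entries themselves.

```c
#include <assert.h>
#include <stddef.h>

struct service;                       /* entity implementing the object */

struct capability {
    struct service *svc;              /* pointer to the implementing service */
    unsigned long   cookie;           /* "session cookie" naming the instance */
    unsigned        rights;           /* optional bitmap of allowed operations */
};

#define CAP_SLOTS 16

struct cap_table {
    struct capability slot[CAP_SLOTS];
    int used[CAP_SLOTS];
};

/* Install a capability, returning the index the process will use,
 * or -1 if the table is full. */
int cap_install(struct cap_table *t, struct service *svc,
                unsigned long cookie, unsigned rights)
{
    for (int i = 0; i < CAP_SLOTS; i++) {
        if (!t->used[i]) {
            t->slot[i] = (struct capability){ svc, cookie, rights };
            t->used[i] = 1;
            return i;
        }
    }
    return -1;
}

/* On each request the kernel, not the process, looks up the slot and
 * checks the rights bitmap before filling the cookie into the message. */
const struct capability *cap_lookup(const struct cap_table *t,
                                    int idx, unsigned need)
{
    if (idx < 0 || idx >= CAP_SLOTS || !t->used[idx])
        return NULL;
    if ((t->slot[idx].rights & need) != need)
        return NULL;
    return &t->slot[idx];
}
```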
ACLs function exactly the other way round: you have one ACL for each object that somebody might want to use. For example, in a filesystem you have one ACL for each file and directory. In Unix said ACL is limited to one user, one group, and the rest, with 3 types of access each. In Windows (well, NT) you have full ACLs, which you can look at by selecting the properties of any file and looking at the "Security" tab of the resulting window. Implementing them shouldn't be much of a problem.
As for split capabilities, Split Capabilities for Access Control is a pretty easy read and describes the idea, which is to use capabilities for naming resources, and something like an ACL for controlling permissions. You then need both the name and the permission to access a resource.
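A rough sketch of that split, with all names invented for illustration: the capability only *names* a resource, a separate ACL-style table holds the *permissions*, and access requires both.

```c
#include <assert.h>

struct name_cap {
    unsigned res_id;   /* which resource this capability names */
    int      valid;    /* 0 = holder has no capability at all */
};

struct perm_entry { unsigned res_id, user, rights; };

struct perm_table {
    struct perm_entry e[8];
    int n;
};

/* Access needs both halves: the unforgeable name AND a matching
 * permission entry for this user. */
int allowed(const struct perm_table *t, struct name_cap cap,
            unsigned user, unsigned want)
{
    if (!cap.valid)                   /* no name capability: can't even ask */
        return 0;
    for (int i = 0; i < t->n; i++)
        if (t->e[i].res_id == cap.res_id && t->e[i].user == user)
            return (t->e[i].rights & want) == want;
    return 0;                         /* named it, but no permission granted */
}
```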
Re: Security and Protection
Hi,
Each thread has a "thread ID", and this thread ID is always sent with every message to the receiver. The receiver is responsible for implementing any protection that is suitable for its specific case.
Once the receiver has the thread ID it can ask the kernel which process the thread belongs to, which executable file the thread belongs to and/or which user it belongs to.
There are two system-level processes that are considered part of the OS itself - the Virtual File System and the Device Manager. The kernel keeps track of these processes from boot. All other processes are started by one of these system-level processes, and all processes/threads are responsible for keeping track of who they trust (and how much).
For example, during boot the Device Manager might find a USB controller. The Device Manager starts a USB controller device driver and remembers who it is. The USB controller device driver might see what's connected to it and start a flash memory device driver. Now the Device Manager trusts the USB driver and the USB driver trusts the flash memory driver, but the Device Manager doesn't trust the flash memory driver.
The flash memory driver might send a "connect me to the VFS" message back to the USB controller driver (who checks if it trusts the sender), and the USB controller driver sends a "connect thread X to the VFS" message back to the Device Manager (who checks if it trusts the sender). Then the Device Manager sends a "connect to thread X" message to the VFS and the VFS checks if the message came from the Device Manager. After this the VFS sends a "Hello" message to the flash memory driver (who checks if the message came from the VFS). After this the flash memory driver and the VFS driver know they can trust each other.
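The "receiver implements its own protection" idea above could look something like this sketch (all names invented): every message carries the sender's thread ID, the kernel guarantees that ID is genuine, and each server keeps its own list of which thread IDs it trusts.

```c
#include <assert.h>

#define MAX_TRUSTED 8

struct server {
    unsigned trusted[MAX_TRUSTED];
    int      ntrusted;
};

/* Record a thread ID this server is willing to accept requests from. */
void trust(struct server *s, unsigned tid)
{
    if (s->ntrusted < MAX_TRUSTED)
        s->trusted[s->ntrusted++] = tid;
}

/* Called on every incoming message: the server, not the kernel,
 * decides what the (authentic) sender is allowed to do. */
int is_trusted(const struct server *s, unsigned sender_tid)
{
    for (int i = 0; i < s->ntrusted; i++)
        if (s->trusted[i] == sender_tid)
            return 1;
    return 0;
}
```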
Of course the VFS would only trust the flash memory driver enough for it to work. For example the VFS would trust the flash memory driver as a storage device (e.g. allow it to be mounted by a file system) but wouldn't trust the flash memory driver to access "/home/me/my_personal_secrets.txt".
There is one special type of process that the Device Manager can start. It's called a "User Interface" process, and is used as an interface between GUIs/applications and certain device drivers (video, sound, keyboard, mouse, etc). When the Device Manager starts a User Interface it tells the kernel to remember that the new process is a User Interface. Once a User Interface process has been connected to video card/s, etc (by the Device Manager) it displays a login prompt.
Every process has an "Executable ID" from the file permissions (in the file system), and a "User ID" (inherited from who-ever started the process).
When a user logs in, the User Interface sends a "set my User ID" message to the VFS, and the VFS checks the login details and asks the kernel to set the User Interface process' User ID if the details are valid. If the kernel wasn't told the process is a User Interface (by the Device Manager), then the kernel refuses to set the User Interface's User ID.
After a user logs in to a User Interface successfully the User Interface will read an "init" file in the user's home directory, and start processes indicated by this init file (e.g. GUIs, CLIs or any other full screen applications) - one for each virtual screen. These new processes (and any more processes they start) will inherit the same User ID as the User Interface.
For file permissions, any thread can ask the VFS to do file I/O operations and the VFS can check which process the thread belongs to from the IPC system, find out the Executable ID and User ID from this and then check if the thread has permission from this.
All files belong to an executable or a user (but never both). This means that (for example) the Device Manager can't access a user's files even though it's one of the most trusted executables in the OS, and any application can create files that no user can access (although administrators can delete them).
Of course there will also be group permissions, so that (for example) a file can be read by any user or executable in the "HTML" group and written to by any user or executable in the "web development" group. This is mostly the same as standard *nix file permissions (except that executables have their own Executable ID and administrators aren't given full access).
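The exclusive-ownership rule above might be sketched like this (field names invented, group permissions omitted): a file belongs to either an executable or a user, never both, and only the matching ID from the request is consulted.

```c
#include <assert.h>

enum owner_kind { OWNED_BY_USER, OWNED_BY_EXECUTABLE };

struct file_meta {
    enum owner_kind kind;
    unsigned        owner_id;   /* a User ID or an Executable ID */
};

/* A request carries both IDs; only the one matching the file's
 * owner kind matters, so even a highly trusted executable can't
 * touch user-owned files. */
int file_access_ok(const struct file_meta *f,
                   unsigned exec_id, unsigned user_id)
{
    if (f->kind == OWNED_BY_USER)
        return f->owner_id == user_id;
    else
        return f->owner_id == exec_id;
}
```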
BTW for file system security there's a major compromise between flexibility/complexity and usability. As an example, for an exercise in a course I'm doing I had to setup a Windows 2000 server with file sharing (with different users and groups of users given different access to different shared resources). I found the file permission system in Windows 2000 to be much more flexible than the standard Unix system, but I also found it to be a severe pain in the neck - the extra flexibility/complexity caused more hassles than it was worth.
From my perspective you can have an extremely flexible file permission system, but most system administrators won't fully understand it and will leave security holes. A simpler (less flexible) file permission system is much easier to understand, much less likely to be misused, and therefore more likely to be secure in practice.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
I'm going to use capabilities. My kernel is a microkernel first of all.
Basically, when a server wants to register a capability, it tells the auth server, which generates a capability for the calling server. A capability is an 8-byte string that a process uses to prove it has authority for the action being taken. For example, the console server will request the "/System/Capabilities/Map_own_pages" capability from the auth server, which will send the console server the capability (which was created by the virtual kernel server). Then the console server calls the kernel and tells it to map certain pages AND sends the capability string to the kernel, which checks it against the capability it created and makes sure it is the same. If it is, the kernel allows the action; otherwise it refuses.
This is how all servers perform their authentication.
The auth server determines which context can access what by checking the context's user ID and group ID. Since capabilities are implemented sort of like a file system (the auth server provides a file system), the capabilities inherently have a system of user and group ID rights. However, capabilities can be added to a user at any time. In addition, until everything is started up (multi-user mode started), the auth server allows all processes to receive capabilities. These boot-time processes are then given root privileges.
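The 8-byte-string check described above could be sketched as follows (names invented; a real auth server would generate the bytes randomly). One detail worth getting right: compare in constant time so the check doesn't leak, via timing, how many leading bytes of a guess matched.

```c
#include <assert.h>

#define CAP_LEN 8

/* Compare a presented capability string against the stored one,
 * touching every byte regardless of where the first mismatch is. */
int cap_matches(const unsigned char stored[CAP_LEN],
                const unsigned char presented[CAP_LEN])
{
    unsigned char diff = 0;
    for (int i = 0; i < CAP_LEN; i++)
        diff |= stored[i] ^ presented[i];
    return diff == 0;
}
```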