StudlyCaps wrote:This system is not without precedent: for many years now, graphics drivers have contained compilers which generate machine code from shader and parallel-computing languages for uploading to and execution on the GPU. These use a restricted language, and similarly, code which cannot be expressed as GPU operations is not recognized as a valid program.
This was not enough to prevent Intel graphics hardware from being exploited to run code (presumably Bitcoin mining) which continued to cripple graphics performance long after the exploiting process (a web script) had been terminated. As with signing, it's not a complete security solution.
nullplan wrote:Driver signing will exclude malicious drivers, yes, but not insecure ones.
Remember Sony's rootkit! If you accept driver code from third-party vendors, you are at some risk of receiving malicious code. Note that it's not necessarily possible to tell the difference: if the code you received is vulnerable, there's no way to tell whether the vulnerability was an honest mistake or a deliberate backdoor.
I have a couple of positive ideas, but they're rather outside the code space. Code auditing would catch some vulnerabilities, and signing may guarantee that only audited code will run. Contracting suppliers to produce secure drivers, with penalties for vulnerabilities found, may also help, if you can get any suppliers to sign such a contract.
I've actually been thinking of requiring that all code be distributed (and perhaps even stored) in a high-level language. Compilation can be extremely fast if you don't need heavy optimization, and many tasks can be done with remarkably little code. I mean to try the experiment and see how it goes. It's also the cheap and easy route with a native Forth.
h0bby1 wrote:Additionally you have things like vx32 or 64-bit equivalents to do machine code parsing/disassembling, to check memory accesses and code paths, and to filter instructions.
Uh... vx32 itself is specific to x86_32, since it requires a segment-register trick, but I don't think your idea requires it. Machine-code parsing/disassembling can be done on any architecture. However, what happens if a memory access is computed and your checks can't figure out what it might compute to? It's basically the class of problem that gets lumped under the halting problem with the claim that it won't work, but here's something which could work for this specific case: memory address computations may be altered to clamp the results to safe values. If available, MIN and MAX instructions seem the obvious choice, but they may ruin the computation if you haven't evaluated it correctly. (What happens if separate computations require consecutive addresses, all of which are clamped to the same maximum?) Another route would be a bit-wise AND, the fastest possible instruction for limiting the range, followed by adding a base address. Out-of-bounds accesses then wrap, which may be easier to debug. This is what I was thinking of doing with all untrusted code in my Forth: modifying Forth's relatively few memory load and store instructions to do this before compiling.
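The AND-then-add-base idea can be sketched in C. This is a minimal illustration, assuming a power-of-two arena; the names `arena`, `guest_load`, and `guest_store` are made up for the example, not from any real system:

```c
#include <stdint.h>

/* Hypothetical sandbox arena: 64 KiB, a power of two, so a single
   bit-wise AND folds any guest address into range. Out-of-bounds
   accesses wrap instead of escaping the arena. */
#define ARENA_SIZE 0x10000u            /* must be a power of two */
#define ARENA_MASK (ARENA_SIZE - 1u)

static uint8_t arena[ARENA_SIZE];

/* What a rewritten guest load would compile to: mask the address,
   then index off the arena base. */
static uint8_t guest_load(uint32_t guest_addr) {
    return arena[guest_addr & ARENA_MASK];
}

/* Likewise for a guest store. */
static void guest_store(uint32_t guest_addr, uint8_t v) {
    arena[guest_addr & ARENA_MASK] = v;
}
```

Note the wrapping behaviour: a store to 0x1FFFF in a 64 KiB arena lands at offset 0xFFFF, which is exactly the "easier to debug" wrap-around described above.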
nullplan wrote:rdos wrote:Actually, you don't even need to be able to install a software driver into the system. Any PCIe device could start writing anywhere in memory, including taking over or taking down the operating system. Driver signing, JIT compilation, and user account models won't help a bit to fix this.
OK, evil hardware is a whole different can of worms. The OS is basically powerless against it, so only install trustworthy hardware! That's why Thunderbolt now has an authorization procedure. Thunderbolt is basically exporting PCIe to an external interface.
So an IOMMU will do nothing to stop this? Ouch! :/
nullplan wrote:But I was talking about well behaved hardware and evil drivers. That's what a restriction on the drivers (like running them at lower privilege) is supposed to consider.
Yeah, but again, remember Sony's rootkit! Sony are a hardware manufacturer first and foremost, and they're absolutely one of the companies most people would think of as reputable. "Only install trustworthy hardware" is another non-code solution.
rdos wrote:In the area of viruses, trojans, and indeed evil drivers & hardware too, the biggest risk factor is how widespread an OS is. The creators obviously won't bother to attack a hobby OS that almost nobody uses, while Windows, Linux, and Macs will be primary targets. I also don't think building something superior in this area would make your OS popular, and so this is a poor argument. Generally speaking, all these protective measures create obstacles for users, and so are not popular. If your OS is not popular, it won't become widespread.
Mostly true, but if you have a specific enemy and/or enough money is involved, popularity won't matter.
rdos wrote:Another thing is that filesystem-based protection measures are not very hard to break if you have a mature OS. For instance, I plan to do ext and ntfs drivers that just ignore the ACL lists and allows the OS to read & write anything it likes. Would be a fun way to spy on closed systems I cannot log in to.
So you as an OS supplier are planning to ship malware now? Resist the temptation! (says a guy who eats far too much chocolate and bacon.)
rdos wrote:Besides, you can potentially stop a PCIe device from doing anything it likes on PCI (disable bus mastering), but I fear this is just a software setting that an evil device could ignore too. Then you could selectively enable bus mastering on trusted devices only.
I'd love an authoritative statement on whether the bus-mastering setting is merely a software request or is actually enforced in hardware.
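For reference, the software-visible control being discussed is the Bus Master Enable bit (bit 2) of the 16-bit Command register at offset 0x04 in PCI configuration space. Here's a sketch of the values involved, assuming legacy config mechanism #1 (ports 0xCF8/0xCFC); the actual port I/O (`outl`/`inl`) is platform-specific and omitted, and whether a hostile device honors the cleared bit is exactly the open question:

```c
#include <stdint.h>

#define PCI_COMMAND        0x04u       /* Command register offset */
#define PCI_CMD_BUS_MASTER (1u << 2)   /* Bus Master Enable bit */

/* CONFIG_ADDRESS value for a given bus/device/function/offset,
   as written to port 0xCF8 under config mechanism #1. */
static uint32_t pci_config_address(uint32_t bus, uint32_t dev,
                                   uint32_t func, uint32_t offset) {
    return (1u << 31)        /* enable bit */
         | (bus  << 16)
         | (dev  << 11)
         | (func << 8)
         | (offset & 0xFCu); /* dword-aligned register offset */
}

/* New Command register value with bus mastering disabled; the
   caller would read the register, pass it through this, and
   write it back. */
static uint16_t pci_disable_bus_master(uint16_t command) {
    return command & (uint16_t)~PCI_CMD_BUS_MASTER;
}
```

The device/function numbers in a real driver would come from bus enumeration; the ones in any example call are arbitrary.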