Hi,
embryo2 wrote:Brendan wrote:embryo2 wrote:If you want trusted computing then you must buy trusted hardware. Where's the AML's place in the picture of missed efforts?
I don't understand your question (I get a parse error for "the picture of missed efforts").
It means you need to do something to ensure that millions of PCs aren't sold with the purpose of distributing a secret rootkit and personally tracking you with it.
It means I need to do whatever I can to defend against malicious hardware (because sooner or later you can guarantee there will be malicious hardware); even though it's impossible to be 100% protected against malicious hardware.
It also means I need to do whatever I can to defend against malicious software. For example, if someone has "100% guaranteed and verified" trustworthy hardware, then the AML is still going to be in RAM and any software that runs before my OS boots can inject malicious code into the AML; and if the OS uses AML it has no real way of knowing whether it's been tampered with. Of course UEFI secure boot could have been used to protect against software that runs before the OS boots, but that's far from trivial too.
embryo2 wrote:To ensure it you can convert the AML into the ASL form and read it carefully. That's an example of the effort required. But if you aren't bothered with such efforts, then how can the AML help you or hurt you? It can't, because it has no connection to your willingness to spend time on the required effort.
In my opinion there are really only four choices:
- Have an OS that sucks (no power management, poor support for IO APICs, no way to turn the computer off, no laptop backlight control, etc.) because it doesn't use AML and doesn't provide any alternative
- Have an OS that doesn't use AML and does provide alternative/s (e.g. motherboard drivers)
- Have an OS that is a security and stability disaster because it assumes the AML is neither malicious nor buggy, even though the AML may be malicious and/or buggy
- Have an OS that uses AML, but also tries to mitigate the risk (e.g. check whether a hash of the AML is in a whitelist of "known good" hashes, with some way to guarantee the whitelist itself is trustworthy and hasn't been tampered with - see the sketch after this list)
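To make that last option concrete, here's a minimal sketch of the whitelist check. Note that get_acpi_table(), sha256() and the whitelist contents are hypothetical placeholders (not real APIs), and a real OS would still need some way to guarantee the whitelist itself hasn't been tampered with:

#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdbool.h>

#define SHA256_SIZE 32

/* Hypothetical helpers provided elsewhere in the OS */
extern size_t get_acpi_table(const char *sig, uint8_t *buf, size_t max);
extern void sha256(const uint8_t *data, size_t len, uint8_t digest[SHA256_SIZE]);

/* Whitelist of "known good" DSDT hashes (placeholder entry; real hashes
   would come from testing known motherboards) */
static const uint8_t known_good[][SHA256_SIZE] = { { 0 } };

bool aml_is_trusted(void) {
    static uint8_t dsdt[256 * 1024];
    uint8_t digest[SHA256_SIZE];

    size_t len = get_acpi_table("DSDT", dsdt, sizeof(dsdt));
    if (len == 0)
        return false;                    /* no DSDT at all */

    sha256(dsdt, len, digest);
    for (size_t i = 0; i < sizeof(known_good) / SHA256_SIZE; i++) {
        if (memcmp(digest, known_good[i], SHA256_SIZE) == 0)
            return true;                 /* matches a known good AML image */
    }
    return false;                        /* unknown AML - refuse or mitigate */
}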
Of these, two of the choices aren't viable for my project (partly because it's a distributed system, and needs much stronger defences than a normal desktop/server OS because of that).
The time needed just to support AML is massive all by itself, and the additional time needed to mitigate the risks of using AML isn't small either. In a comparison between "OS doesn't use AML and provides alternatives/motherboard drivers" and "OS uses AML and tries to mitigate the risk", the difference in time needed isn't very much; and the "motherboard drivers" approach has multiple other benefits that justify any additional time.
embryo2 wrote:Brendan wrote:Just grab an AML disassembler (there's one in virtually every Linux distro) and take a look at your own (real, not virtual) computer's AML. Look for anything that depends on the "\_OS" object.
_OS and _OSI are used by vendors to accommodate the hacks the big boys sometimes use. The intention of _OS and _OSI is not to lie to you. A vendor simply has to implement something in a non-standard way if Microsoft wants it; the standard is aware of such behavior and gives vendors a back door so their products stay compatible. There's no intention to limit other OSes by means of _OS or _OSI - vendors just implement different ways to achieve the same result for Microsoft and for other OSes. The only problem with this approach is the testing effort required. In Microsoft's case a vendor has almost unlimited testing support (hundreds of millions of users will test it), while in the "generic OS" case the tests are performed only by the vendor's testing lab. That means there can be bugs and even some missing functionality (due to project timelines and the ease of postponing low-priority fixes). But it doesn't mean the difference between "the Microsoft way" and "the generic OS way" of implementing hardware features has to be very serious. And the outcry from some bloggers is mostly "it's unfair to be big!" emotional thinking.
Regardless of what the intention was; the fact remains that in practice firmware developers do disable most functionality/features unless the OS says it's (a version of) Windows and says it supports the interfaces defined by Microsoft for Windows. In practice there is only "Microsoft's way" or "crippled and unstable way".
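For example (a hedged sketch, not real firmware code or any real interpreter's API): the firmware's ASL branches on _OSI strings, so in practice an AML interpreter ends up answering those queries exactly the way Windows would, just to reach the code paths the vendor actually tested:

#include <string.h>
#include <stdbool.h>

/* Sketch of an AML interpreter's _OSI handler. To avoid the firmware's
   crippled "unknown OS" code paths, the interpreter claims support for
   the same interface strings Windows reports. Illustrative, not exhaustive. */
static const char *claimed_interfaces[] = {
    "Windows 2001",      /* Windows XP */
    "Windows 2006",      /* Windows Vista */
    "Windows 2009",      /* Windows 7 */
    /* ...plus every past and future string Microsoft defines... */
};

bool handle_osi_query(const char *requested) {
    for (size_t i = 0; i < sizeof(claimed_interfaces) / sizeof(claimed_interfaces[0]); i++) {
        if (strcmp(requested, claimed_interfaces[i]) == 0)
            return true;   /* firmware takes its "tested with Windows" path */
    }
    return false;          /* firmware may take a crippled, untested path */
}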
embryo2 wrote:Brendan wrote:You can't avoid the "Windows specific behaviour" trap. With a lot of work you could figure out the Windows specific behaviour; but you'd have to do that for each different "OS name" Microsoft uses (in the past, now, and in the future) and each "_OSI" OS defined string that Microsoft define.
I can use a generic approach. If I hit some serious problem, then I can think about hacks like tracing the Windows-specific behavior. But while there's no information about how big the difference is, it's just panic to decide that it's better to implement a thousand motherboard drivers than to simply take the generic approach. The standard way will most probably require less effort; and it's very probable the difference will be in the range of 2 to 3 orders of magnitude.
The difference will be "there are no laptops where everything works properly on your OS" vs. "for 99% of laptops everything works on Windows".
embryo2 wrote:Brendan wrote:If I could compile ACPICA (with a compiler that will never exist) and link it to something else (with a linker that will never exist); then calling it would be mostly trivial.
Do you mean it's too hard for you to build ACPICA on a Linux PC?
I mean "self hosting" (being able to build everything for the OS on the OS itself). I mean that as soon as I'm able I'm going to wipe Linux off of every computer I own (and install my OS instead) and never touch Linux for the rest of my life.
embryo2 wrote:Brendan wrote:Again, the AML interpreter alone is just the tip of the iceberg (almost completely useless without a huge amount of other code, plus a whitelist/blacklist, plus "replacement AML" for all the known buggy computers).
What is the purpose of the "huge amount of other code"? What is the purpose of the whitelist/blacklist? How big is the problem with buggy computers?
Essentially, you are claiming that the cost of dealing with buggy computers is higher than the cost of gathering low-level information for all the mentioned buggy computers (plus all the unmentioned bug-free computers), followed by the effort of developing a thousand drivers. And that "thousand driver" effort includes the effort spent developing the AML for all mentioned and unmentioned computers.
I'm saying the effort involved in writing an AML interpreter, working around the "Windows specific behaviour" trap, and mitigating the security/stability problems is a massive amount of work that ends with a "no better than what every other OS already has" failure.
I'm saying the total effort involved in writing motherboard drivers is more in theory (to support all motherboards); but I'll only bother with a few of the most common motherboards (and let volunteers do the others later), so it's actually far less work for me in practice; and the "more total work in the long run" is justified by the "better than what every other OS has" advantage.
embryo2 wrote:Brendan wrote:For all "modern" (since about 1995) 80x86 motherboards the legacy ROM area/s used by devices (e.g. video card, etc) can be converted to normal RAM.
Do you mean the normally usable areas of RAM are hidden and replaced with memory-mapped registers and ROM? And is this implemented in a standardized manner?
I mean that (for BIOS systems) PCI device ROMs are copied to RAM (in the legacy area starting at 0x000C0000) and then that RAM is set to "read only" in the memory controller (and this is a required part of the relevant specifications for PCI device ROMs); but the memory controller is chipset specific and you need chipset specific code to be able to change that RAM back to "read write" in the memory controller.
Essentially; for all "modern" (since about 1995) 80x86 motherboards the legacy ROM area/s used by devices (e.g. video card, etc) can be converted to normal RAM by chipset specific code.
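As a concrete (and heavily hedged) example: on many older Intel chipsets the behaviour of the legacy area is controlled by "PAM" registers in the host bridge's PCI configuration space. The offsets and values below match some chipsets of that era (e.g. the 440BX) and should be treated as assumptions; every chipset family needs its own variant, which is exactly why this belongs in a chipset specific motherboard driver:

#include <stdint.h>

/* Hypothetical PCI config space accessor provided by the OS */
extern void pci_config_write8(int bus, int dev, int fn, int offset, uint8_t value);

/* On some older Intel chipsets (e.g. 440BX) the host bridge at bus 0,
   device 0, function 0 has PAM registers at offsets 0x59-0x5F that
   control whether each chunk of 0xC0000-0xFFFFF behaves as ROM,
   read-only RAM or read-write RAM. Offsets and bit layouts vary per
   chipset family - treat these values as assumptions. */
#define PAM_BASE   0x59
#define PAM_COUNT  7
#define PAM_RW     0x33   /* both half-ranges: reads and writes enabled */

void reclaim_legacy_rom_area(void) {
    for (int i = 0; i < PAM_COUNT; i++) {
        pci_config_write8(0, 0, 0, PAM_BASE + i, PAM_RW);
    }
    /* 0xC0000-0xFFFFF is now ordinary RAM that the physical memory
       manager can hand out like any other pages. */
}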
embryo2 wrote:If that's the case then you can gain a megabyte or so of additional memory (in contrast with the tens of gigabytes available). But at what cost? At the cost of collecting the information for thousands of motherboards? Followed by the time required to implement thousands of drivers.
For this one specific case, I'd expect to gain between 128 KiB and 256 KiB of RAM for almost zero cost (because I'm going to need motherboard drivers for multiple other reasons anyway).
To be more correct; I'm saying "the sum of all the benefits justifies the effort" and you're saying "each individual benefit in isolation doesn't justify the effort all by itself". We're both right; it's just that "the sum of all benefits" is what matters and "each individual benefit in isolation" is mostly irrelevant.
embryo2 wrote:Brendan wrote:For Intel's chipsets it's typically well documented by their datasheets. For other manufacturers it can be hard to get documentation (and in that case I'd just skip it).
Well, now you have limited the scope to Intel motherboards only.
No, I'm not limiting the scope. Think of it like this:
- if full information is available the motherboard driver can support 100% features
- if only chipset information is available the motherboard driver can support 90% of features
- if no information is available (e.g. just a disassembly of the AML and nothing else) the motherboard driver can only support 60% of features
- if the OS used AML alone (no motherboard driver) it'd only support 50% of features (which is worse than every single "with motherboard driver" case)
Who cares? Motherboards are built around chipsets; Intel mostly makes chips and chipsets (and provides the datasheets for them), and chipset datasheets are the information needed.
embryo2 wrote:It means that with the old hardware in question you are required to develop even more motherboard drivers. And to find even more information. And to do it in such a manner that modern standards won't be affected (special code for old hardware in addition to the already large amount of code for new hardware). And to somehow maintain your leading edge over Windows you need to implement more features than were available to Windows on the old hardware. You would have to be creative and find ways to squeeze some resemblance of modern hardware behavior out of the old motherboards (or do you want two OSes, one for new hardware and one for old?).
The interface that motherboard drivers provide will be exactly the same regardless of whether the computer supports ACPI or not. It makes no difference to the rest of OS.
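To illustrate (my guess at what such an interface might look like, not anything the OS actually defines yet): the rest of the OS would only ever see a fixed set of entry points, and whether a particular driver implements them with pre-ACPI interfaces, with AML internally, or with chipset specific register pokes is invisible to callers:

#include <stdint.h>

/* Hypothetical motherboard driver interface. Every motherboard driver
   (ACPI-era or pre-ACPI, with or without AML internally) exposes the
   same entry points, so the rest of the OS never knows the difference. */
struct motherboard_driver {
    int (*power_off)(void);
    int (*reboot)(void);
    int (*set_backlight)(uint8_t percent);
    int (*get_ioapic_input)(int bus, int device, int pin);
    int (*set_power_state)(int state);
    /* ...etc... */
};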
embryo2 wrote:Brendan wrote:For a 20 year old motherboard (e.g. from before ACPI existed), it's no different to writing a motherboard driver for a new motherboard - you can still get information from the datasheet, still support whatever features the hardware supports (without bothering with older interfaces like APM), still create a 3D model, etc.
Would the information be always available and your time were unlimited, then yes - it would be possible to redefine the world. But I'm afraid that not all conditions are present.
That's faulty logic.
I don't need to write all the drivers. I have to provide an OS, interfaces and tools that are impressive enough to encourage other people to write drivers. Note that this applies to all drivers (e.g. video card drivers, sound card drivers, network card drivers, and motherboard drivers).
embryo2 wrote:Brendan wrote:Normally I provide a "firmware development kit" with working example code for Bochs; so that (eventually) if someone wants to write a "boot from ROM boot loader" they only need to worry about writing/modifying any motherboard/chipset specific things that are needed before the OS starts.
The kit is an interesting idea. But do you mean the hardest part (information and implementation) should be performed by others? That's not an attractive proposition for the majority of developers.
It's not something the majority of developers would do. It's something that companies would pay developers to do for specific "hardware+software" products. For example, if you were a company planning to manufacture 1 million smart watches that run my OS, then (instead of spending time writing generic firmware for the hardware) you might implement a "boot from ROM boot loader" to reduce development time and reduce costs and improve boot times.
Note that I have toyed with the idea of providing custom designed boxes one day. For example, I could write the "boot from ROM boot loader" for one cheap Atom board, then sell a small range of "cluster extenders" (where people can buy a little box from me and plug it into their LAN to add processing power or storage space or more users to their existing computers).
embryo2 wrote:Brendan wrote:embryo2 wrote:Do you mean your "double check and tweak" solves the security issue with the untrusted hardware? A hundred pages of cryptic text is really helpful here.
I wouldn't expect anyone to use source code auto-generated by a "clever" utility without bothering to check it. I wasn't planning to provide a hundred pages of cryptic text (and have no idea where you got a silly idea like that from). It'd generate normal source code, with normal/descriptive function names, and with comments (like "// Code to determine IO APIC input from PCI slot" and "// Add code to support FOO here!"), etc.
I suppose you know that reading ASL is not a very hard exercise. And even if there are no comments like "it is for PCI, stupid!" it's still perfectly readable, and the amount of effort required to understand its logic is not essentially greater than the effort required to understand your auto-generated code.
And?
The utility would generate a "bare bones" motherboard driver, where maybe 25% of the code is generic "customisable" functionality (stubs, etc), 50% is code to deal with the interface between the motherboard driver and the OS, and 25% is auto-converted from AML.
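For instance, the generated skeleton might come out looking something like this (entirely illustrative - the names, comments and the placeholder routing formula are things the hypothetical utility would generate, not real output):

#include <stdint.h>

/* ===== Auto-generated motherboard driver skeleton (illustrative) ===== */

/* --- Generic customisable stubs (~25%) --- */
int mb_set_backlight(uint8_t percent) {
    /* Add code to support backlight control here! */
    return -1;   /* not implemented yet */
}

/* --- Glue for the OS <-> driver interface (~50%) --- */
int mb_dispatch(int request, void *args) {
    /* ...boilerplate that registers the driver and routes requests... */
    return 0;
}

/* --- Auto-converted from AML (~25%) --- */
int mb_get_ioapic_input(int bus, int device, int pin) {
    /* Code to determine IO APIC input from PCI slot
       (converted from the _PRT object in the firmware's AML) */
    return 16 + ((device + pin) & 3);   /* placeholder routing formula */
}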
embryo2 wrote:And you can note an interesting thing - while the ASL is available to virtually every Linux user, they are still concerned about rootkits and the like. Maybe there is something wrong with your reasoning?
The package is available; but most people have no idea what AML is and never install the package, and the small number of users who understand AML have better things to do than reading their AML every time they boot to see if it's been tampered with.
Something is wrong with the assumption that users should have to care.
embryo2 wrote:Brendan wrote:Brendan wrote:Also don't forget that (if I find out later that I have no choice) nothing would really prevent me from having a "generic motherboard driver" that does use AML internally, and having that as a fall-back for cases where there's no (more suitable) motherboard driver.
embryo2 wrote:It would be wise to create the "fall-back" driver and, after some time, simply forget about the world-wide contest "let's try to find the hardware information".
No. As soon as I provide a "generic motherboard driver" I destroy any hope that people will ever bother to write motherboard drivers that don't suck, and the OS will suck forever.
You are too quick to change your mind towards the hardest stance possible.
I haven't changed my mind at all - if I find out later that I have no choice, then I can provide a "generic motherboard driver" even though I want to avoid that as much as possible.
Cheers,
Brendan