Re: Os and implementing good gpu drivers(nvidia)
Posted: Sun Nov 08, 2015 12:36 am
Hi,
Ready4Dis wrote: The argument was that you guys said you won't lose speed and will gain tons of security at the same time.
embryo2 wrote: Yes, in the end a good compiler will give us best speed.

Are you going to paint it red? I heard red things go faster...
Brendan wrote: In general; unmanaged has better performance because it doesn't need to check for anything dangerous (because hardware already checks that for "free").
embryo2 wrote: The checks are not free (development complexity, vendor lock, less space for caches, more power consumed etc). And "anything dangerous" is still dangerous - it kills the application.

Except it doesn't necessarily kill the application at all (signals, hardware exceptions reflected back to the process, etc). I've already explained that this is a language design thing and not a managed vs. unmanaged thing. The only reason you think the application must be killed is that (despite the fact that most unmanaged languages do provide facilities for the process to attempt recovery after a crash) very few people use these facilities, because very few people care if a buggy application is killed (as long as it doesn't affect other processes or the kernel as a whole).
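For example, here's a minimal sketch (POSIX C, illustrative only) of the sort of recovery facility I mean - an unmanaged process catching the hardware page fault (reflected back to it as SIGSEGV) and continuing instead of being killed:

Code:
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

static sigjmp_buf recover_point;

static void on_segv(int sig)
{
    (void)sig;
    siglongjmp(recover_point, 1);   /* jump back out instead of dying */
}

int main(void)
{
    struct sigaction sa;
    sa.sa_handler = on_segv;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_NODEFER;       /* allow the handler to fire again later */
    sigaction(SIGSEGV, &sa, NULL);

    if (sigsetjmp(recover_point, 1) == 0) {
        volatile int *p = NULL;
        *p = 42;                    /* faults: hardware -> kernel -> SIGSEGV */
    } else {
        puts("caught SIGSEGV; attempting to continue");
    }
    return 0;
}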
Yes; if you add a whole layer of bloated pus (a managed environment), in theory you can check for more errors at run-time than hardware can. However, in practice (at least for 80x86) hardware has the ability to check for mathematical errors (even including things like precision loss - see the FPU exception conditions), array index overflows (the "bound" instruction) and integer overflows (the "into" instruction), and also has the ability to split virtual address spaces up into thousands of smaller segments. For every single one of these features the CPU can do the check faster than software can; and still nobody uses them, because everyone would rather have higher performance.
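To make that concrete, here's roughly the software a managed environment has to emit for two of those checks (a sketch with hypothetical helper names; on 32-bit 80x86 each check is a single instruction instead - "bound" raising #BR and "into" raising #OF):

Code:
#include <limits.h>
#include <stdlib.h>

/* The lower/upper pair that the "bound" instruction reads from memory. */
struct bounds { int lower; int upper; };

/* What "bound" does in one instruction: */
static int checked_index(int index, struct bounds b)
{
    if (index < b.lower || index > b.upper)
        abort();                    /* hardware raises #BR instead */
    return index;
}

/* What an addition followed by "into" does in two instructions: */
static int checked_add(int a, int b)
{
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
        abort();                    /* hardware raises #OF instead */
    return a + b;
}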
Brendan wrote: For some specific cases (and not "in general"), where things like hand optimised assembly is beneficial managed environments are a worse performance problem.
embryo2 wrote: Assembly is acceptable for managed environments just like it is acceptable to allow user to run any dangerous code if he wishes and understand the consequences.

As soon as you allow assembly, any code can do anything to anything (e.g. maliciously tamper with the JVM) and it's no longer "managed". Also note that most "managed environments" (e.g. Java, .NET, etc) exist for portability (so that the same byte-code can run on multiple different platforms) and not to protect babies from their own code; and as soon as you allow assembly you've killed the main reason for them to exist (portability).
I think I know you well enough to know that what you're alluding to is Java's native libraries; where Java doesn't allow assembly in normal programs at all, but does allow native libraries, because managed environments are too restrictive/crippled to support important things (e.g. OpenGL) without them and too broken to get decent performance without them; and where Java's developers (Sun/Oracle) were smart enough to know that discarding "managed" and allowing all the problems of "unmanaged" is necessary for the language to be usable for more than just toys.
Brendan wrote: For other specific cases managed environments are far worse for performance. An example of this is run-time generated deterministic finite automatons (used by utilities like "grep"); where to really achieve max. performance you want to generate a jump table and code for each state, and where the code generated for each state does something a bit like "state00: nextByte = goto state00jumpTable[buffer[pos++]];".
embryo2 wrote: If jump table is accessible to a smart compiler it can optimize it. In the worst case it can issue a warning about possible performance decrease and developer can fix the too complex code.

Translation: You weren't able to understand what I wrote.
Think of a utility like "grep" as a compiler that compiles command-line arguments (e.g. a regular expression or something) into native machine code (where that native machine code is the deterministic finite automaton) and then executes that native machine code. For an "ahead of time" compiler that jump table doesn't exist when "grep" itself is being compiled (it only exists at run-time, after it's been generated by grep); and a smart compiler can't optimise something that doesn't exist.
Yes, you can have a warning about the massive performance decrease (e.g. display some sort of "orange coffee cup" logo so people know it's running in a managed environment).
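To make the "state00: ..." pseudocode above concrete, here's a minimal sketch (using GCC's computed-goto extension) of a two-state automaton that matches "ab"; a grep-like tool would generate native code with this shape at run-time instead of compiling it ahead of time:

Code:
#include <stddef.h>

/* Returns 1 if "ab" occurs in buffer[0..len-1]; one 256-entry jump
 * table per state, indexed by the next byte. */
static int matches_ab(const unsigned char *buffer, size_t len)
{
    static void *state0_table[256];
    static void *state1_table[256];
    size_t pos = 0;

    for (int c = 0; c < 256; c++) {
        state0_table[c] = &&state0;     /* default: stay in state 0 */
        state1_table[c] = &&state0;     /* default: fall back to state 0 */
    }
    state0_table['a'] = &&state1;       /* saw 'a', hope for 'b' */
    state1_table['a'] = &&state1;       /* another 'a' restarts the match */
    state1_table['b'] = &&accept;       /* saw "ab" */

state0:
    if (pos >= len) return 0;
    goto *state0_table[buffer[pos++]];
state1:
    if (pos >= len) return 0;
    goto *state1_table[buffer[pos++]];
accept:
    return 1;
}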
Brendan wrote: For other specific cases the performance problems caused by managed environments become severe. An example of this is things like Qemu; where (assuming there's no hardware virtualisation - e.g. emulating ARM on an 80x86 host) you can kiss goodbye to acceptable performance as soon as you hear the word "managed".
embryo2 wrote: Didn't get your point. What's wrong with Qemu? And how it related to managed?

Translation: You were able to understand perfectly, but chose not to so that you can continue denying reality.
When there's no hardware virtualisation, most whole-system emulators (VMware, VirtualPC, VirtualBox, Qemu, etc - all of them except Bochs) fall back to converting guest code into run-time generated native code. In a managed environment, software can't generate and execute its own native code.
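At the lowest level, "run-time generated native code" looks like this minimal sketch (x86-64 Linux, illustrative only; a strict "W^X" policy would need an mprotect() step between writing and executing) - and this is exactly what code hosted in a managed environment is forbidden from doing:

Code:
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* x86-64 machine code for: mov eax, 42 ; ret */
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* Get a page we can write to and execute from. */
    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED)
        return 1;

    memcpy(page, code, sizeof(code));
    int (*generated)(void) = (int (*)(void))page;
    printf("%d\n", generated());    /* prints 42 */
    return 0;
}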
For "managed" its the same problem - a bug can corrupt anything that its allowed to modify. If correct code can do "myObject.setter(correctValue);" then buggy code can do "myObject.setter(incorrectValue);".embryo2 wrote:The problem is the code can corrupt memory within the guarded boundaries. So, it's better not to use proposed "recovery".Brendan wrote:For unmanaged, code that failed to do simple things correctly can still attempt to do much more complicated recovery (which in my opinion is completely insane to begin with) by using things like exceptions and/or signals that were originally generated from hardware detection.
I agree - it's better not to use the proposed "recovery" (for both managed and unmanaged); it's far better to kill the process and generate a nice bug report before important details are obscured by foolish attempts to ignore/work around a problem that should've been detected and fixed before the software was released.
Brendan wrote: In the same way; a managed environment can be sane and terminate the process without giving incompetent programmers the chance to turn their failure into a massive disaster.
embryo2 wrote: The way here is very different. The example you are trying to ignore is still the same - web server refuses to serve just one page and every other page is served successfully.

I'm ignoring things like Apache (which is still the most dominant web server and is written in an unmanaged language), which creates a new process for each connection so that if any process crashes/terminates all the other connections aren't affected?
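That design is easy to sketch (POSIX C; the port and the canned response are illustrative, and error handling is trimmed):

Code:
#include <netinet/in.h>
#include <signal.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    signal(SIGCHLD, SIG_IGN);               /* let the kernel reap dead children */

    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(s, (struct sockaddr *)&addr, sizeof(addr));
    listen(s, 16);

    for (;;) {
        int c = accept(s, NULL, NULL);
        if (c < 0)
            continue;
        if (fork() == 0) {                  /* child: owns exactly one connection */
            const char *reply = "HTTP/1.0 200 OK\r\n\r\nhello\r\n";
            write(c, reply, strlen(reply));
            close(c);
            _exit(0);                       /* a crash here kills only this child */
        }
        close(c);                           /* parent: keep accepting */
    }
}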
Brendan wrote: I think you missed the point here. Where "100%" of bugs are detected at compile time, a managed environment is a pointless pile of bloat and you still need hardware protection to protect from bugs in the managed environment.
embryo2 wrote: If 100% bugs are detected at compile time then I recompile the environment and need no hardware protection.

That's why I used quotation marks - "100%" of bugs and not 100% of bugs.
Hardware protection is necessary because "100%" is never 100%; regardless of whether we're talking about an ahead-of-time compiler that detects "100%" of bugs before it's too late (before software is released), or a managed layer of bloat that detects "100%" of bugs after it's too late (after software is released).
Brendan wrote: Now we have the situation where a fool is ignoring facts. Bugs in the environment are extremely important (far more important than any bug in any one application that will only ever effect that one application); and "most bugs in JRE" is not the same as "all bugs in JRE".
embryo2 wrote: Bugs in critical code also extremely important. The extra code base introduced with the environment isn't so much an issue if we remember the size of Linux kernel and drivers, for example.

I agree - bugs in critical code are extremely important (regardless of whether it's a layer of pointless "managed environment" bloat, or a kernel, or anything else). Note that this is why a lot of people (including me) think micro-kernels are better (and why Microsoft requires digital signatures on third-party drivers, and why Linux people wet their pants when they see a "binary blob").
Brendan wrote: What you're saying is as stupid as something like: It can be proven that a Turing complete CPU can be implemented with a single instruction and nothing else; so anything more than one instruction and nothing else just consumes more power.
embryo2 wrote: Wrong analogy. Instruction set should be optimal, not minimal.

Correct analogy. A CPU's features should be optimal, not minimal just because some fool decides that software can do the same protection checks slower.
Brendan wrote: The fact is that for a lot of things software is a lot less efficient and consumes far more power than implementing it in hardware would. Note that this is also why Intel is planning to add FPGA to Xeon chips and why Intel already added FPGA to some embedded CPU - so that (for specialised applications) people can program the hardware/FPGA to get efficiency improvements that software can't achieve.
embryo2 wrote: I see it as a way to allow users to add extra parallel computations when they dealt with massive number crunching, for example. But may be it's better to simplify the Intel's instruction set and to trade the silicon for more arithmetic units?

In theory, it would be better to simplify Intel's instruction set and remove old baggage nobody uses any more (e.g. most of segmentation, hardware task switching, virtual 8086 mode, real mode, the FPU and MMX, etc) and also rearrange opcodes so instructions can be smaller (and not "multiple escape codes/bytes"). In practice, the majority of the silicon is consumed by things like caches, and backward compatibility is far more important than increasing cache sizes by 0.001%.
I still think that eventually, after UEFI is ubiquitous and nobody cares about BIOS (e.g. maybe in the next 10 years if we're lucky), Intel might start removing some of the old stuff (e.g. develop "64-bit only" CPUs and sell them alongside the traditional "everything supported" CPUs, and then spend another 10+ years waiting for everything to shift to "64-bit only").
Brendan wrote: but silicon is cheap and no sane person cares.
embryo2 wrote: Then why Intel adds new instructions in so slow manner? Why we still have no 1024 SSE registers?

I don't know if you mean 1024 registers or 1024-bit wide registers. For both cases it comes down to "diminishing returns" - exponentially increasing performance costs to support it, with negligible benefits to justify it. For 1024 registers, think of a huge/expensive "switch(register_number) {" thing killing performance for every instruction; and for 1024-bit registers, think of multiplication and division (where it's roughly "O(n*n)" and not just "O(n)" like addition/subtraction).
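A minimal sketch of that scaling difference, with wide registers modelled as arrays of 32-bit limbs (illustrative only):

Code:
#include <stddef.h>
#include <stdint.h>

/* O(n): addition is one pass over the limbs with a carry. */
void add_n(uint32_t *r, const uint32_t *a, const uint32_t *b, size_t n)
{
    uint64_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t s = (uint64_t)a[i] + b[i] + carry;
        r[i] = (uint32_t)s;
        carry = s >> 32;
    }
}

/* O(n*n): schoolbook multiplication needs every limb of a multiplied
 * by every limb of b, so doubling the width quadruples the work.
 * r must have room for 2*n limbs. */
void mul_n(uint32_t *r, const uint32_t *a, const uint32_t *b, size_t n)
{
    for (size_t i = 0; i < 2 * n; i++)
        r[i] = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t carry = 0;
        for (size_t j = 0; j < n; j++) {
            uint64_t t = (uint64_t)a[i] * b[j] + r[i + j] + carry;
            r[i + j] = (uint32_t)t;
            carry = t >> 32;
        }
        r[i + n] = (uint32_t)carry;         /* top limb of this row */
    }
}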
This is also why we can't just have fast and large L1 caches (and why L1 cache sizes have remained at 64 KiB for about a decade, while Intel has been adding larger/slower L2, L3, ...). Increasing the size increases latency, which decreases performance.
Brendan wrote: Managed environments (e.g. valgrind, Bochs) do make debugging easier; but that only matters when you're debugging, and doesn't help when you're writing code, or when the end user is using it; and it's still inferior to detecting bugs at compile time (sooner is always better than later).
embryo2 wrote: While the ideal is far away we can leverage the managed at runtime. It can make code safe, optimize and provide more debug information. So, I'm going to have these options now while you are going to expect the moment when the ideal arrives.

In my case the ideal is temporarily far away, but over time will get closer. In your case the ideal is permanently far away because you're not getting closer over time.
Note that my idea of "ideal" is an unmanaged language; where the IDE checks for bugs while you type and has a built-in managed environment (e.g. a source level interpreter) for debugging purposes; and where the compiler detects as many bugs as possible when the code is compiled.
HoTT wrote: Related.
embryo2 wrote: Well, finally managed to explore this piece of trash. The guy there pretends that he shows to the world a very critical vulnerability but no one pays attention. The vulnerability in fact is like this - if you allowed to drop an exe file in windows/system32 directory then you can claim you've found new Windows vulnerability. But it seems to me the free access to the mentioned directory is the first thing any sane admin should stop. And well, if an insane admin allows to everybody to do everything with the server's file system, then yes, the author of the claim can tell us - Java is vulnerable!

You failed to read it properly. The author only used direct access to create the attacks and explain them, and direct access is not needed to deploy the attacks. The vulnerability is that a lot of software de-serialises "untrusted" data (e.g. from a random attacker's network connection) and the Java code responsible for serialisation/de-serialisation allows arbitrary code execution (and the "managed environment" does nothing to prevent this).
Cheers,
Brendan