Os and implementing good gpu drivers(nvidia)

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Os and implementing good gpu drivers(nvidia)

Post by Brendan »

Hi,
embryo2 wrote:
Ready4Dis wrote:The argument was that you guys said you won't lose speed and will gain tons of security at the same time.
Yes, in the end a good compiler will give us the best speed.
Are you going to paint it red? I heard red things go faster...
embryo2 wrote:
Brendan wrote:In general; unmanaged has better performance because it doesn't need to check for anything dangerous (because hardware already checks that for "free").
The checks are not free (development complexity, vendor lock, less space for caches, more power consumed etc). And "anything dangerous" is still dangerous - it kills the application.
Except it doesn't necessarily kill the application at all (signals, hardware exceptions reflected back to the process, etc). I've already explained that this is a language design thing and not a managed vs. unmanaged thing. The only reason you think the application must be killed is that (despite the fact that most unmanaged languages do provide facilities for the process to attempt recovery after a crash) very few people use these facilities because very few people care if a buggy application is killed (as long as it doesn't affect other processes or the kernel as a whole).

Yes; if you add a whole layer of bloated puss (a managed environment) in theory you can check for more errors at run-time than hardware can. However; in practice (at least for 80x86) hardware has the ability to check for mathematical errors (even including things like precision loss - see FPU exception conditions), array index overflows ("bound" instruction), integer overflows ("into" instruction), and also has the ability to split virtual address spaces up into thousands of smaller segments; and for every single one of these features the CPU can do it faster than software can, and still nobody uses them because everyone would rather have higher performance.
embryo2 wrote:
Brendan wrote:For some specific cases (and not "in general"), where things like hand optimised assembly is beneficial managed environments are a worse performance problem.
Assembly is acceptable for managed environments, just like it is acceptable to allow the user to run any dangerous code if he wishes and understands the consequences.
As soon as you allow assembly, any code can do anything to anything (e.g. maliciously tamper with the JVM) and it's no longer "managed". Also note that most "managed environments" (e.g. Java, .NET, etc) exist for portability (so that the same byte-code can run on multiple different platforms) and not to protect babies from their own code; and as soon as you allow assembly you've killed the main reason for them to exist (portability).

I think I know you well enough to know that what you're alluding to is Java's native libraries; where Java doesn't allow assembly in normal programs at all, but does allow native libraries because managed environments are too restrictive/crippled to support important things without them (e.g. OpenGL) and too retarded/broken to get decent performance without them; and where Java's developers (Sun/Oracle) were smart enough to know that discarding "managed" and allowing all of the problems of "unmanaged" is necessary for the language to be usable for more than just toys.
embryo2 wrote:
Brendan wrote:For other specific cases managed environments are far worse for performance. An example of this is run-time generated deterministic finite automatons (used by utilities like "grep"); where to really achieve max. performance you want to generate a jump table and code for each state, and where the code generated for each state does something a bit like "state00: nextByte = goto state00jumpTable[buffer[pos++]];".
If the jump table is accessible to a smart compiler it can optimize it. In the worst case it can issue a warning about a possible performance decrease and the developer can fix the too-complex code.
Translation: You weren't able to understand what I wrote.

Think of a utility like "grep" as a compiler that compiles command line arguments (e.g. a regular expression or something) into native machine code (where that native machine code is the deterministic finite automaton) and then executes that native machine code. For an "ahead of time" compiler that jump table doesn't exist when "grep" is being compiled (it only exists at run-time after it's been generated by grep); and a smart compiler can't optimise something that doesn't exist.

Yes, you can have a warning about the massive performance decrease (e.g. display some sort of "orange coffee cup" logo so people know it's running in a managed environment).
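For illustration, a rough Java sketch of the table-driven form of such a state machine (the toy pattern "ab", the table contents and the class name are all invented here). The table only exists after the running program has built it from the user's pattern, which is exactly why an ahead-of-time compiler never sees it; real grep goes a step further and emits native code per state instead of indexing a table.

Code: Select all

public class DfaSketch {
    public static void main(String[] args) {
        // Built at run time from the user's pattern; an AOT compiler never sees this.
        int states = 3;                       // 0 = start, 1 = saw 'a', 2 = matched "ab"
        int[][] next = new int[states][256];
        next[0]['a'] = 1;
        next[1]['a'] = 1;
        next[1]['b'] = 2;

        byte[] buffer = "xxabyy".getBytes();
        int state = 0;
        for (int pos = 0; pos < buffer.length && state != 2; pos++) {
            state = next[state][buffer[pos] & 0xFF];   // the per-byte "jump"
        }
        System.out.println(state == 2 ? "match" : "no match");
    }
}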
embryo2 wrote:
Brendan wrote:For other specific cases the performance problems caused by managed environments become severe. An example of this is things like Qemu; where (assuming there's no hardware virtualisation - e.g. emulating ARM on an 80x86 host) you can kiss goodbye to acceptable performance as soon as you hear the word "managed".
I didn't get your point. What's wrong with Qemu? And how is it related to managed?
Translation: You were able to understand perfectly, but chose not to so that you can continue denying reality.

When there's no hardware virtualisation; most whole system emulators (VMWare, VirtualPC, VirtualBox, Qemu, etc - all of them except Bochs) fall back to converting guest code into run-time generated native code. In a managed environment software can't generate and execute its own native code.
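For illustration, a minimal Java sketch (made-up opcodes and values) of the interpreter loop that remains when run-time code generation is off the table: every guest instruction pays a fetch, decode and dispatch cost, which is roughly the Bochs approach and roughly why it's so much slower than dynamic translation.

Code: Select all

public class InterpreterSketch {
    static final int HALT = 0, INC = 1, DEC = 2;

    public static void main(String[] args) {
        int[] guestCode = {INC, INC, DEC, HALT};  // pretend guest instruction stream
        int acc = 0;
        int pc = 0;
        boolean running = true;
        while (running) {
            int op = guestCode[pc++];             // fetch
            switch (op) {                         // decode + dispatch, paid per instruction
                case INC:  acc++; break;
                case DEC:  acc--; break;
                case HALT: running = false; break;
                default:   throw new IllegalStateException("bad opcode " + op);
            }
        }
        System.out.println("acc = " + acc);       // prints 1
    }
}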
embryo2 wrote:
Brendan wrote:For unmanaged, code that failed to do simple things correctly can still attempt to do much more complicated recovery (which in my opinion is completely insane to begin with) by using things like exceptions and/or signals that were originally generated from hardware detection.
The problem is the code can corrupt memory within the guarded boundaries. So, it's better not to use proposed "recovery".
For "managed" its the same problem - a bug can corrupt anything that its allowed to modify. If correct code can do "myObject.setter(correctValue);" then buggy code can do "myObject.setter(incorrectValue);".

I agree - it's better not to use proposed "recovery" (for both managed and unmanaged), and far better to kill the process and generate a nice bug report before important details are obscured by foolish attempts to ignore/work around a problem that should've been detected and fixed before the software was released.
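For illustration, a trivial Java sketch of that point (the class and values are invented): the second call is perfectly legal as far as any managed runtime is concerned, so nothing is detected and there is nothing to "recover" from.

Code: Select all

public class SetterSketch {
    static class Account {
        private long balanceCents;
        void setBalanceCents(long v) { balanceCents = v; }
        long getBalanceCents() { return balanceCents; }
    }

    public static void main(String[] args) {
        Account a = new Account();
        a.setBalanceCents(10_000);      // correct code: myObject.setter(correctValue)
        a.setBalanceCents(-10_000);     // buggy code: myObject.setter(incorrectValue)
        System.out.println(a.getBalanceCents());   // silently wrong, no exception fires
    }
}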
embryo2 wrote:
Brendan wrote:In the same way; a managed environment can be sane and terminate the process without giving incompetent programmers the chance to turn their failure into a massive disaster.
The way here is very different. The example you are trying to ignore is still the same - a web server refuses to serve just one page while every other page is served successfully.
I'm ignoring things like Apache (which is still the most dominant web server and is written in an unmanaged language), that creates a new process for each connection so that if any process crashes/terminates all the other connections aren't affected?
embryo2 wrote:
Brendan wrote:I think you missed the point here. Where "100%" of bugs are detected at compile time, a managed environment is a pointless pile of bloat and you still need hardware protection to protect from bugs in the managed environment.
If 100% of bugs are detected at compile time then I recompile the environment and need no hardware protection.
That's why I used quotation marks - "100%" of bugs and not 100% of bugs.

Hardware protection is necessary because "100%" is never 100%; regardless of whether we're talking about an ahead of time compiler that detects "100%" of bugs before it's too late (before software is released) or a managed layer of bloat that detects "100%" of bugs after it's too late (after software is released).
embryo2 wrote:
Brendan wrote:Now we have the situation where a fool is ignoring facts. Bugs in the environment are extremely important (far more important than any bug in any one application that will only ever affect that one application); and "most bugs in JRE" is not the same as "all bugs in JRE".
Bugs in critical code are also extremely important. The extra code base introduced with the environment isn't so much of an issue if we remember the size of the Linux kernel and drivers, for example.
I agree - bugs in critical code are extremely important (regardless of whether it's a layer of pointless "managed environment" bloat, or a kernel or anything else). Note that this is why a lot of people (including me) think micro-kernels are better (and why Microsoft requires digital signatures on third-party drivers, and why Linux people wet their pants when they see a "binary blob").
embryo2 wrote:
Brendan wrote:What you're saying is as stupid as something like: It can be proven that a Turing complete CPU can be implemented with a single instruction and nothing else; so anything more than one instruction and nothing else just consumes more power.
Wrong analogy. Instruction set should be optimal, not minimal.
Correct analogy. CPU's features should be optimal, not minimal just because some fool decides that software can do the same protection checks slower.
embryo2 wrote:
Brendan wrote:The fact is that for a lot of things software is a lot less efficient and consumes far more power than implementing it in hardware would. Note that this is also why Intel is planning to add FPGA to Xeon chips and why Intel already added FPGA to some embedded CPU - so that (for specialised applications) people can program the hardware/FPGA to get efficiency improvements that software can't achieve.
I see it as a way to allow users to add extra parallel computation when they deal with massive number crunching, for example. But maybe it's better to simplify Intel's instruction set and trade the silicon for more arithmetic units?
In theory, it would be better to simplify Intel's instruction set and remove old baggage nobody uses any more (e.g. most of segmentation, hardware task switching, virtual8086 mode, real mode, the FPU and MMX, etc) and also rearrange opcodes so instructions can be smaller (and not "multiple escape codes/bytes"). In practice, the majority of the silicon is consumed by things like caches, and backward compatibility is far more important than increasing cache sizes by 0.001%.

I still think that eventually, after UEFI is ubiquitous and nobody cares about BIOS (e.g. maybe in the next 10 years if we're lucky) Intel might start removing some of the old stuff (e.g. develop "64-bit only" CPUs and sell them alongside the traditional "everything supported" CPUs, and then spend another 10+ years waiting for everything to shift to "64-bit only").
embryo2 wrote:
Brendan wrote:but silicon is cheap and no sane person cares.
Then why does Intel add new instructions so slowly? Why do we still have no 1024 SSE registers?
I don't know if you mean 1024 registers or 1024-bit wide registers. For both cases it comes down to "diminishing returns" - exponentially increasing performance costs to support it, with negligible benefits to justify it. For 1024 registers think of a huge/expensive "switch(register_number) { " thing killing performance for every instruction, and for 1024-bit registers think of multiplication and division (where it's roughly "O(n*n)" and not just "O(n)" like addition/subtraction).

This is also why we can't just have fast and large L1 caches (and why L1 cache sizes have remained at 64 KiB for about a decade while Intel have been adding larger/slower L2, L3, ..). Increasing the size decreases performance/latency.
embryo2 wrote:
Brendan wrote:Managed environments (e.g. valgrind, Bochs) do make debugging easier; but that only matters when you're debugging, and doesn't help when you're writing code, or when the end user is using it; and it's still inferior to detecting bugs at compile time (sooner is always better than later).

Note that my idea of "ideal" is an unmanaged language; where the IDE checks for bugs while you type and has a built-in managed environment (e.g. a source level interpreter) for debugging purposes; and where the compiler detects as many bugs as possible when the code is compiled.
While the ideal is far away we can leverage the managed environment at runtime. It can make code safe, optimize it and provide more debug information. So, I'm going to have these options now while you are going to wait for the moment when the ideal arrives.
In my case the ideal is temporarily far away, but over time will get closer. In your case the ideal is permanently far away because you're not getting closer over time.
embryo2 wrote:
HoTT wrote:Related.
Well, I finally managed to explore this piece of trash. The guy there pretends that he is showing the world a very critical vulnerability that no one pays attention to. The vulnerability in fact is like this - if you are allowed to drop an exe file into the windows/system32 directory then you can claim you've found a new Windows vulnerability. But it seems to me that free access to the mentioned directory is the first thing any sane admin should stop. And well, if an insane admin allows everybody to do everything with the server's file system, then yes, the author of the claim can tell us - Java is vulnerable!
You failed to read it properly. The author only used direct access to create the attacks and explain them, and direct access is not needed to deploy the attacks. The vulnerability is that a lot of software de-serialises "untrusted" data (e.g. from a random attacker's network connection) and the Java code responsible for serialisation/de-serialisation allows arbitrary code execution (and the "managed environment" does nothing to prevent this).
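For illustration, a minimal Java sketch of the mechanism (the class and its harmless side effect are invented here; the real gadget chains in the article reach things like Runtime.exec() through library classes that already ship with the affected servers). The point is that ObjectInputStream.readObject() runs code chosen by whatever class names appear in the incoming byte stream, before the receiving code ever looks at the result.

Code: Select all

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class DeserialisationSketch {
    // Stand-in for any Serializable class already on the server's class path.
    static class Hook implements Serializable {
        String payload;

        // Run automatically by ObjectInputStream.readObject() during deserialisation.
        private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            System.out.println("readObject hook ran with attacker-chosen data: " + payload);
        }
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for bytes arriving over an untrusted network connection.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        Hook h = new Hook();
        h.payload = "pretend this drives something dangerous";
        new ObjectOutputStream(buf).writeObject(h);

        // The "server" side: it only wanted an object, but the hook already ran.
        new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray())).readObject();
    }
}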


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Os and implementing good gpu drivers(nvidia)

Post by embryo2 »

Brendan wrote:
embryo2 wrote:
Brendan wrote:In general; unmanaged has better performance because it doesn't need to check for anything dangerous (because hardware already checks that for "free").
The checks are not free (development complexity, vendor lock, less space for caches, more power consumed etc). And "anything dangerous" is still dangerous - it kills the application.
Except it doesn't necessarily kill the application at all (signals, hardware exceptions reflected back to the process, etc).
You already agreed that such "recovery" should be avoided, but here you use the impotent recovery as an argument against managed. It's a bit unnatural, at least.
Brendan wrote:I've already explained that this is a language design thing and not a managed vs. unmanaged thing.
I've already explained it's a managed vs. unmanaged thing and not language design. Managed can recover safely while unmanaged can't. A bit more on it follows later.
Brendan wrote:The only reason you think the application must be killed is that (despite the fact that most unmanaged languages do provide facilities for the process to attempt recovery after a crash) very few people use these facilities because very few people care if a buggy application is killed (as long as it doesn't affect other processes or the kernel as a whole).
If some people don't care about an application crashing, it doesn't mean every developer shouldn't care about it.
Brendan wrote:Yes; if you add a whole layer of bloated puss (a managed environment) in theory you can check for more errors at run-time than hardware can. However; in practice (at least for 80x86) hardware has the ability to check for mathematical errors (even including things like precision loss - see FPU exception conditions), array index overflows ("bound" instruction), integer overflows ("into" instruction)
Well, now you are proposing a "whole layer of bloated puss" in hardware instead of a much simpler layer in software, for the sake of efficiency. Let's look at the actual "efficiency". Instead of using just one "jo" instruction you suggest using interrupt handling bloat for a lot of very simple things. And yes, now you should show how we can avoid all the inter-privilege-level overhead and how it is faster than just one "jo" instruction.
Brendan wrote:and also has the ability to split virtual address spaces up into thousands of smaller segments; and for every single one of these features the CPU can do it faster than software can
Well, what do we need these thousands of smaller segments for? Just to trade a jo instruction for interrupt-related overhead? A very well done "optimization".
Brendan wrote:and still nobody uses them because everyone would rather have higher performance.
Yes, one jo is better than an interrupt.
Brendan wrote:As soon as you allow assembly, any code can do anything to anything (e.g. maliciously tamper with the JVM) and it's no longer "managed".
If I need the best performance and I know my compiler is unable to optimize enough, then yes, I allow my OS to use unsafe code. But it's my informed choice, while unmanaged gives no choice at all. So, managed allows us to select what is the top priority right now - security and safety, or speed - while unmanaged just denies us such a choice.
Brendan wrote:Also note that most "managed environments" (e.g. Java, .NET, etc) exist for portability (so that the same byte-code can run on multiple different platforms) and not to protect babies from their own code; and as soon as you allow assembly you've killed the main reason for them to exist (portability).
Baby code affects not only babies. It's users who should be protected from baby code, so you just misunderstand the purpose of widely used code. You also misunderstand the importance of choice. If the user has the choice of getting better speed for a particular platform and understands its security and safety consequences, then that's much better than the situation where the user has no such choice.
Brendan wrote:
embryo2 wrote:If the jump table is accessible to a smart compiler it can optimize it. In the worst case it can issue a warning about a possible performance decrease and the developer can fix the too-complex code.
Translation: You weren't able to understand what I wrote.

Think of a utility like "grep" as a compiler that compiles command line arguments (e.g. a regular expression or something) into native machine code (where that native machine code is the deterministic finite automaton) and then executes that native machine code. For an "ahead of time" compiler that jump table doesn't exist when "grep" is being compiled (it only exists at run-time after it's been generated by grep); and a smart compiler can't optimise something that doesn't exist.
Your "grep compiler" seems to be a superintelligent beast that knows more than the programmer who writes the compiler. But it's not true. If a developer knows compiler design principles then he will pay great attention to the effectiveness of the compiler. It means the developer can first check the details at higher level (e.g. there's no way of exceeding the switch range just because the number of options is limited to 100). But if we forget about higher levels (as you have said it's whole layer of bloated puss) then yes, we are unable to predict behavior of our programs.
Brendan wrote:Yes, you can have a warning about the massive performance decrease (e.g. display some sort of "orange coffee cup" logo so people know it's running in a managed environment).
Sometimes your comments are really funny :)
Brendan wrote:When there's no hardware virtualisation; most whole system emulators (VMWare, VirtualPC, VirtualBox, Qemu, etc - all of them except Bochs) fall back to converting guest code into run-time generated native code. In a managed environment software can't generate and execute its own native code.
What's the problem with moving code generation into the environment? Any sane algorithm can be compiled independently of the application it is used in. So, in a managed environment it is possible to generate application-related code.
Brendan wrote:
embryo2 wrote:The problem is the code can corrupt memory within the guarded boundaries. So, it's better not to use proposed "recovery".
For "managed" its the same problem - a bug can corrupt anything that its allowed to modify. If correct code can do "myObject.setter(correctValue);" then buggy code can do "myObject.setter(incorrectValue);".
No. An array access bug will corrupt the whole stack or heap after the array. But a managed environment simply prevents all such bugs from existing in the first place. So, in the case of unmanaged, only "recovery" is possible, while in the case of managed, normal recovery is a widely used practice.
Brendan wrote:
embryo2 wrote:The example you are trying to ignore is still the same - a web server refuses to serve just one page while every other page is served successfully.
I'm ignoring things like Apache (which is still the most dominant web server and is written in an unmanaged language), that creates a new process for each connection so that if any process crashes/terminates all the other connections aren't affected?
You can compare the performance. What are the costs of creating and tearing down a process vs. the costs of taking a thread from a thread pool and then releasing it? If you perform a sane comparison then the problem with unmanaged will be obvious to you.
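For illustration, a minimal Java sketch of the thread-pool model being described (the request numbers and the deliberate failure are invented): one failing request is caught and logged while the pool, and every other request, carries on, with no process being created or torn down.

Code: Select all

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSketch {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 8; i++) {
            final int request = i;
            pool.submit(() -> {
                try {
                    if (request == 3) throw new IllegalStateException("bug in page 3");
                    System.out.println("served request " + request);
                } catch (Exception e) {
                    // "recovery" here means: log it, drop this one request, carry on
                    System.err.println("request " + request + " failed: " + e);
                }
            });
        }
        pool.shutdown();
    }
}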
Brendan wrote:
embryo2 wrote:Bugs in critical code are also extremely important. The extra code base introduced with the environment isn't so much of an issue if we remember the size of the Linux kernel and drivers, for example.
I agree - bugs in critical code are extremely important (regardless of whether it's a layer of pointless "managed environment" bloat, or a kernel or anything else). Note that this is why a lot of people (including me) think micro-kernels are better (and why Microsoft requires digital signatures on third-party drivers, and why Linux people wet their pants when they see a "binary blob").
So, you agree that the environment's complexity is not an issue, because it increases the number of bugs just a bit above the level of other existing critical code.
Brendan wrote:In theory, it would be better to simplify Intel's instruction set and remove old baggage nobody uses any more (e.g. most of segmentation, hardware task switching, virtual8086 mode, real mode, the FPU and MMX, etc) and also rearrange opcodes so instructions can be smaller (and not "multiple escape codes/bytes"). In practice, the majority of the silicon is consumed by things like caches, and backward compatibility is far more important than increasing cache sizes by 0.001%.
Well, the 0.001% here is too bold to be true. A completely rearchitected Intel processor would look something like ARM's 64-bit model, which uses much less silicon and power. So instruction simplification is just as important, as all the mobile devices with ARM processors now show. And only the vendor lock-in (the monopoly trick) is allowing Intel to persist.
Brendan wrote:I still think that eventually, after UEFI is ubiquitous and nobody cares about BIOS (e.g. maybe in the next 10 years if we're lucky) Intel might start removing some of the old stuff (e.g. develop "64-bit only" CPUs and sell them alongside the traditional "everything supported" CPUs, and then spend another 10+ years waiting for everything to shift to "64-bit only").
Yes, it's the time required for the monopoly effect to fade away.
Brendan wrote:
embryo2 wrote:
Brendan wrote:but silicon is cheap and no sane person cares.
Then why does Intel add new instructions so slowly? Why do we still have no 1024 SSE registers?
I don't know if you mean 1024 registers or 1024-bit wide registers. For both cases it comes down to "diminishing returns" - exponentially increasing performance costs to support it, with negligible benefits to justify it.
Well, as you have said - "silicon is cheap and no sane person cares". Then what diminishing returns are you talking about? We have plenty of silicon; just use it. It's no problem if it's useless for many applications, but for some applications it's good to have more registers. So, it's better to recognize the value of silicon instead of insisting that "Intel is of great value for humanity".
Brendan wrote:For 1024 registers think of a huge/expensive "switch(register_number) { " thing killing performance for every instruction, and for 1024-bit registers think of multiplication and division (where it's roughly "O(n*n)" and not just "O(n)" like addition/subtraction).
I'd rather think of addressing overhead vs. your switch. And I see the addressing as much cheaper.
Brendan wrote:This is also why we can't just have fast and large L1 caches (and why L1 cache sizes have remained at 64 KiB for about a decade while Intel have been adding larger/slower L2, L3, ..). Increasing the size decreases performance/latency.
No. The size is not the problem. The problem is the way Intel's processors work. A cache is of limited usefulness if we can't predict the right content for it. So Intel introduces all those additional useless instructions and improves its vendor lock-in instead of simplifying the processor instruction set.
Brendan wrote:In my case the ideal is temporarily far away, but over time will get closer. In your case the ideal is permanently far away because you're not getting closer over time.
Your "temporarily" vs. "permanently" is greatly exaggerated.
Brendan wrote:
HoTT wrote:Related.
You failed to read it properly. The author only used direct access to create the attacks and explain them, and direct access is not needed to deploy the attacks. The vulnerability is that a lot of software de-serialises "untrusted" data (e.g. from a random attacker's network connection) and the Java code responsible for serialisation/de-serialisation allows arbitrary code execution (and the "managed environment" does nothing to prevent this).
Well, if you think direct access is not needed then maybe you can prove it? I can open HTTP access on my server for server-side deserialization and you can show us how easy it is to run your code on my server without direct access.

The "arbitrary" code in the link is the code somebody deploys using direct access to the critical part of the file system (where server's libraries are).
My previous account (embryo) was accidentally deleted, so I have no chance but to use something new. But may be it was a good lesson about software reliability :)
HoTT
Member
Posts: 56
Joined: Tue Jan 21, 2014 10:16 am

Re: Os and implementing good gpu drivers(nvidia)

Post by HoTT »

embryo2 wrote:
Brendan wrote:
HoTT wrote:Related.
You failed to read it properly. The author only used direct access to create the attacks and explain them, and direct access is not needed to deploy the attacks. The vulnerability is that a lot of software de-serialises "untrusted" data (e.g. from a random attacker's network connection) and the Java code responsible for serialisation/de-serialisation allows arbitrary code execution (and the "managed environment" does nothing to prevent this).
Well, if you think direct access is not needed then maybe you can prove it? I can open HTTP access on my server for server-side deserialization and you can show us how easy it is to run your code on my server without direct access.

The "arbitrary" code in the link is the code somebody deploys using direct access to the critical part of the file system (where server's libraries are).
Just to be clear, and not to dive into a discussion about what arbitrary means or not: do you think that the linked exploit is a severe security issue?
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Os and implementing good gpu drivers(nvidia)

Post by Brendan »

Hi,
embryo2 wrote:
Brendan wrote:
embryo2 wrote:The checks are not free (development complexity, vendor lock, less space for caches, more power consumed etc). And "anything dangerous" is still dangerous - it kills the application.
Except it doesn't necessarily kill the application at all (signals, hardware exceptions reflected back to the process, etc).
You already agreed that such "recovery" should be avoided, but here you use the impotent recovery as an argument against managed. It's a bit unnatural, at least.
Brendan wrote:I've already explained that this is a language design thing and not a managed vs. unmanaged thing.
I've already explained it's a managed vs. unmanaged thing and not language design. Managed can recover safely while unmanaged can't. A bit more on it follows later.
Managed can allow safe recovery in some situations but not others (but doesn't need to and shouldn't in my opinion); and unmanaged can allow safe recovery in some situations but not others (but doesn't need to and shouldn't in my opinion).
embryo2 wrote:
Brendan wrote:The only reason you think the application must be killed is that (despite the fact that most unmanaged languages do provide facilities for the process to attempt recovery after a crash) very few people use these facilities because very few people care if a buggy application is killed (as long as it doesn't affect other processes or the kernel as a whole).
If some people don't care about application crash it doesn't mean every developer shouldn't care about it.
It's not the developers - it's users who don't want "slow because the programmer thought they were too incompetent to write software that isn't buggy".

It's like.. Would you buy a sports car that has to be surrounded by 2 meter thick bubble-wrap at all times because the brakes probably have design flaws? Of course not - you'd want a car where the manufacturer made sure the brakes work that doesn't need bubble wrap.
embryo2 wrote:
Brendan wrote:Yes; if you add a whole layer of bloated puss (a managed environment) in theory you can check for more errors at run-time than hardware can. However; in practice (at least for 80x86) hardware has the ability to check for mathematical errors (even including things like precision loss - see FPU exception conditions), array index overflows ("bound" instruction), integer overflows ("into" instruction)
Well, now you are proposing a "whole layer of bloated puss" in hardware instead of a much simpler layer in software, for the sake of efficiency. Let's look at the actual "efficiency". Instead of using just one "jo" instruction you suggest using interrupt handling bloat for a lot of very simple things. And yes, now you should show how we can avoid all the inter-privilege-level overhead and how it is faster than just one "jo" instruction.
No. I suggest only having necessary protection (e.g. isolation between processes and the kernel) and not having any unnecessary protection (protecting a process from itself) that nobody cares about and shouldn't be needed at all for released software. I also suggest finding all the programmers that have so little confidence in their own abilities that they think they need this unnecessary protection and sending them on a nice free 2 week vacation to the centre of the Sun.
embryo2 wrote:
Brendan wrote:and also has the ability to split virtual address spaces up into thousands of smaller segments; and for every single one of these features the CPU can do it faster than software can
Well, what do we need these thousands of smaller segments for? Just to trade a jo instruction for interrupt-related overhead? A very well done "optimization".
We don't need it (and nobody uses it because it's not needed); but that doesn't change the fact that hardware is able to do it better/faster than software can. Note: "hardware is able to do it better/faster than software" does not imply that any CPU manufacturer cares enough about it to bother making it fast.
embryo2 wrote:
Brendan wrote:and still nobody uses them because everyone would rather have higher performance.
Yes, one jo is better than an interrupt.
One jo just gives you undefined behaviour. You need a comparison or something before it; plus something to jump to; plus some way to tell the CPU that it's an extremely unlikely branch (so it doesn't waste the CPU's branch target buffer). Then you realise it only does one limit and that you typically need two limits ("0 < x < 1234") and you actually need a pair of them. Finally; you create a product and get to add the words "slower than everything else because we suck" to all your advertising and wonder why everyone buys the faster alternative from your competitors while your company goes bankrupt.
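For illustration, a rough Java sketch of what that check ends up looking like in software (the names and values are invented); it's essentially the pair of comparisons plus an unlikely branch that a JVM has to insert in front of any array access it can't prove safe.

Code: Select all

public class BoundsSketch {
    // Two limits (lower and upper) plus a branch to the "unlikely" handler.
    static int load(int[] buffer, int index) {
        if (index < 0 || index >= buffer.length) {
            throw new ArrayIndexOutOfBoundsException(index);
        }
        return buffer[index];
    }

    public static void main(String[] args) {
        int[] data = {10, 20, 30};
        System.out.println(load(data, 2));        // in range: the check costs two compares
        try {
            load(data, 3);                        // out of range: the unlikely branch fires
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("caught: " + e);
        }
    }
}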
embryo2 wrote:
Brendan wrote:As soon as you allow assembly, any code can do anything to anything (e.g. maliciously tamper with the JVM) and it's no longer "managed".
If I need the best performance and I know my compiler is unable to optimize enough, then yes, I allow my OS to use unsafe code. But it's my informed choice, while unmanaged gives no choice at all. So, managed allows us to select what is the top priority right now - security and safety, or speed - while unmanaged just denies us such a choice.
It's also the "informed choice" of the malicious attacker who's writing a trojan utility. Yay!
embryo2 wrote:
Brendan wrote:Also note that most "managed environments" (e.g. Java, .NET, etc) exist for portability (so that the same byte-code can run on multiple different platforms) and not to protect babies from their own code; and as soon as you allow assembly you've killed the main reason for them to exist (portability).
Baby code affects not only babies. It's users who should be protected from baby code, so you just misunderstand the purpose of widely used code. You also misunderstand the importance of choice. If the user has the choice of getting better speed for a particular platform and understands its security and safety consequences, then that's much better than the situation where the user has no such choice.
As a user; have you ever purchased any software written in Java?

I have - in my entire life I bought 2 different "beta" games that were written in Java; and both of them combined cost me less than I spend on coffee in one day. For "unmanaged" code I've probably spent about $300 this year; not because I like spending money, but because software that's actually worth paying for is never written in Java.

If a user has a choice of getting better speed for a particular platform and understands its security and safety consequences then they never choose "managed".
embryo2 wrote:
Brendan wrote:
embryo2 wrote:If the jump table is accessible to a smart compiler it can optimize it. In the worst case it can issue a warning about a possible performance decrease and the developer can fix the too-complex code.
Translation: You weren't able to understand what I wrote.

Think of a utility like "grep" as a compiler that compiles command line arguments (e.g. a regular expression or something) into native machine code (where that native machine code is the deterministic finite automaton) and then executes that native machine code. For an "ahead of time" compiler that jump table doesn't exist when "grep" is being compiled (it only exists at run-time after it's been generated by grep); and a smart compiler can't optimise something that doesn't exist.
Your "grep compiler" seems to be a superintelligent beast that knows more than the programmer who writes the compiler. But it's not true. If a developer knows compiler design principles then he will pay great attention to the effectiveness of the compiler. It means the developer can first check the details at higher level (e.g. there's no way of exceeding the switch range just because the number of options is limited to 100). But if we forget about higher levels (as you have said it's whole layer of bloated puss) then yes, we are unable to predict behavior of our programs.
Brendan wrote:Yes, you can have a warning about the massive performance decrease (e.g. display some sort of "orange coffee cup" logo so people know it's running in a managed environment).
Sometimes your comments are really funny :)
Brendan wrote:When there's no hardware virtualisation; most whole system emulators (VMWare, VirtualPC, VirtualBox, Qemu, etc - all of them except Bochs) fall back to converting guest code into run-time generated native code. In a managed environment software can't generate and execute its own native code.
What's the problem with moving code generation into the environment? Any sane algorithm can be compiled independently of the application it is used in. So, in a managed environment it is possible to generate application-related code.
Either it's possible for software to generate native code and execute it at run-time (and therefore managed environments are incapable of protecting anyone from anything because "managed" code can just bypass any/all protection that the environment provides); or it's impossible for (some types of) software to get good performance.

You can not have it both ways. You can't pretend that a managed environment provides protection and no protection at the same time.
embryo2 wrote:
Brendan wrote:
embryo2 wrote:The problem is the code can corrupt memory within the guarded boundaries. So, it's better not to use proposed "recovery".
For "managed" its the same problem - a bug can corrupt anything that its allowed to modify. If correct code can do "myObject.setter(correctValue);" then buggy code can do "myObject.setter(incorrectValue);".
No. An array access bug will corrupt the whole stack or heap after the array. But a managed environment simply prevents all such bugs from existing in the first place. So, in the case of unmanaged, only "recovery" is possible, while in the case of managed, normal recovery is a widely used practice.
Sigh. It's like a brainless zombie that's incapable of seeing anything beyond "C vs. Java".

Yes; Java is more able to detect problems like "out of bounds array index" than C and C++ because they don't even try (in the same way that it's easy for an obese 90 year old man to run faster than an Olympic athlete when that Olympic athlete is sleeping soundly).

Nothing says a managed environment must protect against these bugs properly; nothing says an unmanaged language can't detect/prevent these problems at compile time, and nothing says an unmanaged language even has to support arrays to begin with (in the same way that an Olympic athlete might wake up instead of sleeping forever).

Note that in case of managed, normal recovery typically fails spectacularly in practice (unless your idea of "recovery" just means appending exception details to a log and terminating the process anyway, which is exactly what most Java software does).
embryo2 wrote:
Brendan wrote:I'm ignoring things like Apache (which is still the most dominant web server and is written in an unmanaged language), that creates a new process for each connection so that if any process crashes/terminates all the other connections aren't affected?
You can compare the performance. What are the costs of creating and tearing down a process vs. the costs of taking a thread from a thread pool and then releasing it? If you perform a sane comparison then the problem with unmanaged will be obvious to you.
There are no web servers written for any managed environment to compare, so it's hard to do a fair comparison (but fairly obvious that nobody wanted to write a web server for a managed environment, probably because everyone capable of writing a web server knows that "managed" is a worthless joke when you want anything close to acceptable performance).

Note that for Linux the kernel itself doesn't really know the difference between processes and threads - they're just "tasks" that may or may not share resources and there's very little difference between forking a process and spawning a thread.
embryo2 wrote:
Brendan wrote:
embryo2 wrote:Bugs in critical code are also extremely important. The extra code base introduced with the environment isn't so much of an issue if we remember the size of the Linux kernel and drivers, for example.
I agree - bugs in critical code are extremely important (regardless of whether it's a layer of pointless "managed environment" bloat, or a kernel or anything else). Note that this is why a lot of people (including me) think micro-kernels are better (and why Microsoft requires digital signatures on third-party drivers, and why Linux people wet their pants when they see a "binary blob").
So, you agree that the environment's complexity is not an issue, because it increases the number of bugs just a bit above the level of other existing critical code.
What? It's obvious that I think that reducing both the complexity and amount of critical code is important. A 64 KiB micro-kernel alone is far better than a 10 MiB monolithic kernel alone, or a 64 KiB micro-kernel plus a 10 MiB "managed environment"; and all 3 of these options are better than a 10 MiB monolithic kernel plus a 10 MiB "managed environment".
embryo2 wrote:
Brendan wrote:In theory, it would be better to simplify Intel's instruction set and remove old baggage nobody uses any more (e.g. most of segmentation, hardware task switching, virtual8086 mode, real mode, the FPU and MMX, etc) and also rearrange opcodes so instructions can be smaller (and not "multiple escape codes/bytes"). In practice, the majority of the silicon is consumed by things like caches, and backward compatibility is far more important than increasing cache sizes by 0.001%.
Well, the 0.001% here is too bold to be true. A completely rearchitected Intel processor would look something like ARM's 64-bit model, which uses much less silicon and power. So instruction simplification is just as important, as all the mobile devices with ARM processors now show. And only the vendor lock-in (the monopoly trick) is allowing Intel to persist.
ARM's CPUs are smaller because they don't clock as fast, have smaller caches, have crappy "uncore" (no built-in PCI-e, etc) and have near zero RAS features. It has nothing to do with baggage.

The things allowing Intel to maintain its monopoly are backward compatibility, and the fact that no other company can come close to its single-threaded performance.
embryo2 wrote:
Brendan wrote:I still think that eventually, after UEFI is ubiquitous and nobody cares about BIOS (e.g. maybe in the next 10 years if we're lucky) Intel might start removing some of the old stuff (e.g. develop "64-bit only" CPUs and sell them alongside the traditional "everything supported" CPUs, and then spend another 10+ years waiting for everything to shift to "64-bit only").
Yes, it's the time required for the monopoly effect to fade away.
It's time required to maintain the monopoly - Intel can't break too much backward compatibility too quickly.
embryo2 wrote:
Brendan wrote:I don't know if you mean 1024 registers or 1024-bit wide registers. For both cases it comes down to "diminishing returns" - exponentially increasing performance costs to support it, with negligible benefits to justify it.
Well, as you have said - "silicon is cheap and no sane person cares". Then what diminishing returns are you talking about? We have plenty of silicon; just use it. It's no problem if it's useless for many applications, but for some applications it's good to have more registers. So, it's better to recognize the value of silicon instead of insisting that "Intel is of great value for humanity".
Using more silicon is not the problem, it's how you use it. For an analogy; you can add 10000 lines of code to an existing application (e.g. add a new feature or something) without affecting the performance of the old code; or you can add 5 lines in a critical spot and completely destroy performance.
embryo2 wrote:
Brendan wrote:This is also why we can't just have fast and large L1 caches (and why L1 cache sizes have remained at 64 KiB for about a decade while Intel have been adding larger/slower L2, L3, ..). Increasing the size decreases performance/latency.
No. The size is not the problem. The problem is the way Intel's processors work. A cache is of limited usefulness if we can't predict the right content for it. So Intel introduces all those additional useless instructions and improves its vendor lock-in instead of simplifying the processor instruction set.
Was this even supposed to make sense?

If a pink elephant farts on a sunny day then accountants should eat more bagels because the problem is that water boils at 100 degrees Celsius.
embryo2 wrote:
Brendan wrote:In my case the ideal is temporarily far away, but over time will get closer. In your case the ideal is permanently far away because you're not getting closer over time.
Your "temporarily" vs. "permanently" is greatly exaggerated.
As exaggerated as "always trailing the competition" vs. "actually trying to overtake the competition"?
embryo2 wrote:
Brendan wrote:You failed to read it properly. The author only used direct access to create the attacks and explain them, and direct access is not needed to deploy the attacks. The vulnerability is that a lot of software de-serialises "untrusted" data (e.g. from a random attacker's network connection) and the Java code responsible for serialisation/de-serialisation allows arbitrary code execution (and the "managed environment" does nothing to prevent this).
Well, if you think direct access is not needed then maybe you can prove it? I can open HTTP access on my server for server-side deserialization and you can show us how easy it is to run your code on my server without direct access.
It's not my speciality - I've never tried to compromise anything, and Java isn't worth learning enough about.
embryo2 wrote:The "arbitrary" code in the link is the code somebody deploys using direct access to the critical part of the file system (where server's libraries are).
Yes; it's almost like "white hat" security researchers have some sort of code of conduct to prevent them from being mistaken for "black hat" attackers, and don't just give everyone on the Internet fully working exploits for unpatched vulnerabilities. I'm sure nobody will ever feel like using the exploit to run any other code. :roll:


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance

Re: Os and implementing good gpu drivers(nvidia)

Post by Combuster »

embryo2 wrote:I can open HTTP access on my server for server-side de-serialization and you can show us how easy it is to run your code on my server without direct access.
I'm pretty sure you only glanced over the article and didn't actually understand what it said.

The issue fundamentally requires:
1: The presence of a Java class that has inappropriate de-serialisation side-effects.
2: Software with a protocol that communicates Java serialised objects.
3: Communications access to this software.
The concrete example of this: if you run Jenkins (#1, #2) with the admin port open to the internet (#3), your computer will get owned by a crawler at some point in time.

Thus, having used Jenkins for some time, I can tell you that admin access is protected and that without the credentials I can't convince Jenkins to do anything. The manual will tell you the same, and so will a security analysis of Jenkins' own code (unless that turns out to have something unrelated to exploit). Let me ask you: would you still feel safe running a public-facing Jenkins?


The problem is that this is the fault of code that's not actually used, but is present nonetheless in a library. That's a pretty interesting source for exploits, and raises the interesting question of whether or not you want fat systems anywhere.
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
Antti
Member
Posts: 923
Joined: Thu Jul 05, 2012 5:12 am
Location: Finland

Re: Os and implementing good gpu drivers(nvidia)

Post by Antti »

May I post something because it is a coffee break and it seems we are talking about coffee here (at least if we grep for it)? I like to think of it like this: The outside world ("run-time") shall be cold and harsh. Have a calm and comfortable place ("compile-time") to get prepared for it.

What I do not like is that managed environments and languages are mostly known and popular because of their rich frameworks. There is an unfair comparison between managed and unmanaged because the latter usually does not have such a rich framework (which is just an implementation issue). Here on this forum people see it differently, but I am almost sure that normal programmers (especially beginners) are mostly impressed by how much easier programming is when using, for example, a .NET-like environment. However, the reason for their impression has almost nothing to do with the managed environment/language. It is just the rich framework with all the bells and whistles. That said, I think the actual advantages of a managed environment/language are not so important for normal programmers, and it may even be hard to get an answer to a question like "which features of this managed environment/language do you like the most" that is not somehow related to the framework itself.
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Os and implementing good gpu drivers(nvidia)

Post by embryo2 »

HoTT wrote:Just to be clear, and not to dive into a discussion about what arbitrary means or not: do you think that the linked exploit is a severe security issue?
It is severe if you have access to the server's file system. But usually nobody except some privileged users has such access.

It works like this:

Java deserialization allows a class to define its own deserialization behavior. In the case of specific internal state it is really required. So, Java invokes an interface-defined method to give the class a chance to create its proper internal state. For this to work the JVM looks for the class with the name found in the deserialized binary. There are rules that define the scope where the JVM can look for the class; in short, the scope is limited to a set of directories with jar (Java archive) files. Here you see that we need to have a class which implements the interface and is accessible to the JVM. So, how is it possible to run code without having the class in a directory where the JVM would look for it? It's just impossible. It means that if an attacker has no way to put his class implementation in the directory where the JVM looks for it, the attacker can only claim "Java is vulnerable", but there's no actual harm possible.
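For illustration, a minimal Java sketch (the whitelist contents and the class name are invented) of how that lookup scope can be narrowed even further in code: an ObjectInputStream that refuses to resolve any class it doesn't expect, so the incoming stream can't name arbitrary classes even if they happen to be somewhere on the class path.

Code: Select all

import java.io.IOException;
import java.io.InputStream;
import java.io.InvalidClassException;
import java.io.ObjectInputStream;
import java.io.ObjectStreamClass;
import java.util.Set;

public class WhitelistObjectInputStream extends ObjectInputStream {
    // Only classes named here may appear in the stream; everything else is
    // rejected before its deserialisation hooks can run.
    private static final Set<String> ALLOWED =
            Set.of("java.lang.Integer", "java.lang.String", "java.util.HashMap");

    public WhitelistObjectInputStream(InputStream in) throws IOException {
        super(in);
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc)
            throws IOException, ClassNotFoundException {
        if (!ALLOWED.contains(desc.getName())) {
            throw new InvalidClassException(desc.getName(), "class not allowed in this stream");
        }
        return super.resolveClass(desc);
    }
}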
Combuster wrote:I'm pretty sure you only glanced over the article and didn't actually understand what it said.

The issue fundamentally requires:
1: The presence of a Java class that has inappropriate de-serialisation side-effects.
2: Software with a protocol that communicates Java serialised objects.
3: Communications access to this software.
Wrong. It's not the "inappropriate de-serialisation side-effects". It has to be an attacker-created class. The class should have some malicious code and be located in one of the server's library directories.

For #2 and #3 it's correct. If we have the attacker's class in our class path and #2 and #3 are present then an attacker can run his code.
Combuster wrote:Thus, having used Jenkins for some time, I can tell you that admin access is protected and that without the credentials I can't convince Jenkins to do anything. The manual will tell you the same, and so will a security analysis of Jenkins' own code (unless that turns out to have something unrelated to exploit). Let me ask you: would you still feel safe running a public-facing Jenkins?
As far as I know, Jenkins is just an application that runs in Tomcat's container. So we have the following options:

1. Jenkins default settings do not hide Tomcat's management ports.
2. Jenkins opens its own ports to listen for external commands.

Both options are immune to the proposed kind of attack if the attacker has no access to the file system. But the use of insecure communication, mentioned in the "vulnerability claim", is dangerous even without any deserialization-style vulnerabilities. Of course, it's better to secure the communication with the command line interpreter.
Combuster wrote:The problem is that this is the fault of code that's not actually used, but is present nonetheless in a library. That's a pretty interesting source for exploits, and raises the interesting question of whether or not you want fat systems anywhere.
There's no fault. Windows (and Linux and Mac and others) can also run an exe file if it exists. But how can you ensure its existence? Even more - all web servers will run your code if they are configured to run it. It's just an HTTP request string that triggers the choice of the actual code fragment. But here is the problem - can you change the configuration files on the server side? If you have full (write) access, then yes, you can. Exactly the same is true for the deserialization issue. If you can inject your class into a class path directory, then yes, you can invoke it in case there is serialization-based communication open to the external world.

To repeat a bit - there has to be a situation where you have write access to the attacked server's file system, the server is configured to expose its serialization-based communication, and such communication is actually based on serialization. Well, now you can think about the admin who managed to open both the file system and all the required ports to the world.
My previous account (embryo) was accidentally deleted, so I have no chance but to use something new. But may be it was a good lesson about software reliability :)
Octocontrabass
Member
Posts: 5521
Joined: Mon Mar 25, 2013 7:01 pm

Re: Os and implementing good gpu drivers(nvidia)

Post by Octocontrabass »

embryo2 wrote:It means that if an attacker has no way to put his class implementation in the directory where the JVM looks for it, the attacker can only claim "Java is vulnerable", but there's no actual harm possible.
The vulnerable class is already present in a directory where the JVM will look for it. All of the named applications bundle it as an included component.
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Os and implementing good gpu drivers(nvidia)

Post by embryo2 »

Brendan wrote:It's not the developers - it's users who don't want "slow because the programmer thought they were too incompetent to write software that isn't buggy".
Are you so competent that you have no bugs?
Brendan wrote:It's like.. Would you buy a sports car that has to be surrounded by 2 meter thick bubble-wrap at all times because the brakes probably have design flaws? Of course not - you'd want a car where the manufacturer made sure the brakes work that doesn't need bubble wrap.
There is an internal "bubble-wrap at all times". It's the airbag. So, people prefer safety, even if it costs some money.
Brendan wrote:I suggest only having necessary protection (e.g. isolation between processes and the kernel) and not having any unnecessary protection (protecting a process from itself) that nobody cares about and shouldn't be needed at all for released software.
Then you suggest using cars without airbags.
Brendan wrote:I also suggest finding all the programmers that have so little confidence in their own abilities that they think they need this unnecessary protection and sending them on a nice free 2 week vacation to the centre of the Sun.
Your dreams are funny, but the reality is a bit different.
Brendan wrote:but that doesn't change the fact that hardware is able to do it better/faster than software can
Ok, let's look at the hardware. How does it work? It translates instructions into some internal representation called microcode. So, the FMUL instruction is transformed into something like this:

Code: Select all

bt r1,NaNBitPosition; check if r1 is NaN
jc NaN; jump to NaN handler
bt r2,NaNBitPosition; check if r2 is NaN
jc NaN; jump to NaN handler
mul r1,r2; multiplication without any check
jo overflow; in case there was overflow jump to the handler
jst stackOverflow; in case there was stack overflow
... ; other tests for all possible floating-point exceptions
Now look at the actual processor's work. Does it magically perform something without time or silicon spent on it? No. It just executes a somewhat more detailed stream of instructions. So, if we were able to execute that detailed stream in software, what would be different? I think nothing important would change. So, if the compiler knows the actual range of the input and output values it can optimize out all the unnecessary checks and produce even faster code than we get from Intel's processors. And in case the compiler doesn't know the actual range, it just produces the exact code Intel's processor uses now, and then there will be no difference in speed. And now you can see that software can increase execution speed even more than existing processors (hardware) can.
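For illustration, a small Java sketch of that argument (the method name and values are invented): Math.multiplyExact is the "one branch" software check expressed in Java, and on a path where the operand ranges are provable no check is needed at all.

Code: Select all

public class OverflowSketch {
    static long area(int widthPx, int heightPx) {
        // Both operands are ints, so the long multiplication below can never
        // overflow; a compiler that tracks ranges needs no check on this path.
        return (long) widthPx * heightPx;
    }

    public static void main(String[] args) {
        System.out.println(area(1920, 1080));
        try {
            // Here the ranges are unknown, so the checked form is used and the
            // failure shows up as an exception instead of a silent wrap-around.
            System.out.println(Math.multiplyExact(Integer.MAX_VALUE, 2));
        } catch (ArithmeticException e) {
            System.out.println("overflow detected: " + e.getMessage());
        }
    }
}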
Brendan wrote:One jo just gives you undefined behaviour. You need a comparison or something before it;
In the case of your proposed "into" instruction, there should be nothing else except the "jo".
Brendan wrote:plus something to jump to;
In the case of all your proposed instructions, there should also be the code that is invoked when the interrupt fires.
Brendan wrote:plus some way to tell CPU that its an extremely unlikely branch (so it doesn't waste the CPU's branch target buffer).
Yes, here it would be better to have a processor which executes exactly what we want, rather than the crap it thinks is better for the situation.
Brendan wrote:Then you realise it only does one limit and that you typically need two limits ("0 < x < 1234") and you actually need a pair of them.
Do you think the "bound" instruction doesn't need to check against two limits?
Brendan wrote:Finally; you create a product and get to add the words "slower than everything else because we suck" to all your advertising and wonder why everyone buys the faster alternative from your competitors while your company goes bankrupt.
I can only tell you IBM's prices. For one instance of the WebSphere Application Server (WAS), IBM wants something around $50,000. For a product based on WAS and targeted at the enterprise integration market, the price starts somewhere near $300,000. You can compare that with systems with tags like "faster than everything else because we are so smart and brave".
Brendan wrote:It's also the "informed choice" of the malicious attacker who's writing a trojan utility. Yay!
In the case of managed, the user has the choice not to trust the source of the code, so many attackers really suck. But in the case of unmanaged the user has no choice, and attackers are very glad to see your posts protecting them from a really dangerous change (dangerous for the attackers, of course).
Brendan wrote:As a user; have you ever purchased any software written in Java?
My employers (and I suppose yours too) use a lot of IBM WAS instances.
Brendan wrote:because software that's actually worth paying for is never written in Java.
So, you still have no idea what Android is. I advise you to try it. It's cool.
Brendan wrote:If a user has a choice of getting better speed for a particular platform and understands its security and safeness consequences then they never choose "managed".
Billions of users use Windows with its .Net.
Brendan wrote:You can not have it both ways. You can't pretend that a managed environment provides protection and no protection at the same time.
I can have it both ways. I compile it without safety checks if I trust the code source, and I compile it with safety checks if I do not trust the source. In fact it looks even simpler: the environment just asks me if I want to try some speed and lose the safety, and I answer depending on my information about the source.
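Java's assertions already work roughly in this spirit (a small sketch; the checks are compiled into the class file but only executed when the JVM is started with -ea, so a trusted deployment can simply leave them disabled):

Code:

public final class TrustDemo {
    static int divide(int a, int b) {
        assert b != 0 : "divisor must be non-zero";   // only evaluated when run with -ea
        return a / b;
    }

    public static void main(String[] args) {
        System.out.println(divide(10, 2));
        // With `java -ea TrustDemo` the next call fails fast with an AssertionError;
        // with plain `java TrustDemo` the check is skipped and an ArithmeticException is thrown instead.
        System.out.println(divide(10, 0));
    }
}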
Brendan wrote:It's like a brainless zombie that's incapable of seeing anything beyond "C vs. Java".
Ok, if you add to your compiler the option of injecting safety checks, then I agree that you are much closer to the ideal, which is, of course, the full-fledged managed environment.
Brendan wrote:Note that in case of managed, normal recovery typically fails spectacularly in practice.
As you have said, "Java isn't worth learning enough about", so your "typically" here is just a product of your imagination.
Brendan wrote:There are no web servers written for any managed environment to compare
Ok, if you don't know the enterprise development industry, I can point out a few things here: WAS, WebLogic, jBoss, Tomcat, and GlassFish, to name a few.
Brendan wrote:Note that for Linux the kernel itself doesn't really know the difference between processes and threads - they're just "tasks" that may or may not share resources and there's very little difference between forking a process and spawning a thread.
Maybe Linux has no idea about the difference, but you, as a low-level developer, should understand the difference between creating a new process and picking a thread from a thread pool.
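A small Java sketch of that contrast (illustrative only; it assumes a Unix-like system with echo on the PATH):

Code:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class ProcessVsThreadPool {
    public static void main(String[] args) throws Exception {
        // New process: the kernel sets up a fresh address space and loads a separate program.
        Process p = new ProcessBuilder("echo", "hello from a new process")
                .inheritIO()
                .start();
        p.waitFor();

        // Pooled thread: the threads already exist, so submitting work creates nothing new in the kernel.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> System.out.println("hello from a pooled thread"));
        pool.shutdown();
    }
}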
Brendan wrote:It's obvious that I think that reducing both the complexity and amount of critical code is important. A 64 KiB micro-kernel alone is far better than a 10 MiB monolithic kernel alone, or a 64 KiB micro-kernel plus a 10 MiB "managed environment"; and all 3 of these options are better than a 10 MiB monolithic kernel plus a 10 MiB "managed environment".
Ok, let's use your assessment. We can see that managed vs unmanaged amounts to roughly a doubling of complexity. Now remember the time when there were no compilers, no OSes, no high-level languages. There was almost no complexity. Now we have the compilers, the languages, the OSes, and all the complexity that comes with them. And what? Did the world crash? No. But the complexity increase was orders of magnitude! And what? Nobody sees a problem. It's progress. It manages to deal with the complexity increase.
Brendan wrote:ARM's CPUs are smaller because they don't clock as fast, have smaller caches, have crappy "uncore" (no built-in PCI-e, etc) and have near zero RAS features. It has nothing to do with baggage.
ARM CPUs are smaller because they don't need all of this Intel bloat. Yes, simpler is better.
Brendan wrote:and the fact that no other company can come close to its single-threaded performance.
No company can efficiently target a market locked to the Intel instruction set. But in specific areas there are some interesting processors. The problem is in the comparison: Intel benchmarks its processors using software that is specifically built for Intel processors, so any competitor has to emulate Intel's instruction set and trade complexity for compatibility.

But mobile devices will push Intel out of the way of progress.
Brendan wrote:Using more silicon is not the problem, it's how you use it. For an analogy; you can add 10000 lines of code to an existing application (e.g. add a new feature or something) without affecting the performance of the old code; or you can add 5 lines in a critical spot and completely destroy performance.
Yes, principles are important. But compatibility and hidden microcode are obvious drawbacks of Intel's solution.
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)
embryo2
Member
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Os and implementing good gpu drivers(nvidia)

Post by embryo2 »

Antti wrote:I like to think it like this: The outside world ("run-time") shall be cold and harsh. Have a calm and comfortable place ("compile-time") to get prepared for it.
Nice picture. But why should the "run-time" be cold and harsh? It's not a requirement.
Antti wrote:There is an unfair comparison between managed and unmanaged because the latter usually does not have such a rich framework (which is just an implementation issue).
But why doesn't the latter have rich frameworks? Just because it's easier to write rich frameworks using managed.
Antti wrote:I think the actual advantages of a managed environment/language are not so important for normal programmers, and it may even be hard to get an answer to a question like "what features in this managed environment/language do you like the most" that is not somehow related to the framework itself.
The related features are:

1. Memory management (see the sketch after this list).
2. Easy debugging.
3. Fewer bugs.
4. Compatibility.
5. Easy learning.
6. Rich frameworks because of #1,2,3,4,5.
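A small sketch of items 1 and 3 (illustrative only): allocation needs no matching free, and an out-of-bounds access surfaces as an exception with a stack trace instead of silently corrupting memory.

Code:

public final class ManagedFeaturesDemo {
    public static void main(String[] args) {
        int[] buffer = new int[16];   // no matching free/delete anywhere; the GC reclaims it
        try {
            buffer[16] = 42;          // one element past the end
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("caught: " + e);   // easy to debug: the bad index and a stack trace
        }
    }
}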
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)
embryo2
Member
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Os and implementing good gpu drivers(nvidia)

Post by embryo2 »

Octocontrabass wrote:The vulnerable class is already present in a directory where the JVM will look for it. All of the named applications bundle it as an included component.
If you mean the InvokerTransformer mentioned in the "vulnerability" article, then it's just a general-purpose invoker without any malicious code. Maybe some libraries use this invoker for deserialization of objects with some specific state, but the invoker itself never does anything malicious. It just runs the code the deserialized class provides. There has to be a deserialized class with malicious code, else there's no vulnerability.
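To make the mechanism under discussion concrete, here is a minimal sketch in plain Java. It is NOT the actual Commons Collections gadget chain (which chains together classes that are individually harmless); the hypothetical SideEffect class below simply stands in for whatever exploitable class is already on the server's classpath. The point it shows is that ObjectInputStream.readObject() runs the readObject() logic of whatever class the byte stream names, before the application ever looks at the result:

Code:

import java.io.*;

public final class DeserializationDemo {
    // Hypothetical stand-in for a class that is already on the classpath and has
    // a side-effecting readObject() path.
    static class SideEffect implements Serializable {
        private static final long serialVersionUID = 1L;
        private String command;
        SideEffect(String command) { this.command = command; }

        private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            // Runs during deserialization, before the caller sees the object.
            // (Here it only prints; a real gadget would do something far worse.)
            System.out.println("side effect during readObject: " + command);
        }
    }

    public static void main(String[] args) throws Exception {
        // The "attacker" builds the byte stream offline...
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new SideEffect("touch /tmp/pwned"));
        }
        // ...and the "server" merely deserializes it.
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            in.readObject();   // the side effect fires here
        }
    }
}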
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)
Antti
Member
Member
Posts: 923
Joined: Thu Jul 05, 2012 5:12 am
Location: Finland

Re: Os and implementing good gpu drivers(nvidia)

Post by Antti »

embryo2 wrote:Nice picture. But why should the "run-time" be cold and harsh? It's not a requirement.
To make it efficient, elegant, and simple enough that there is some hope of getting bugs under control, i.e. of having a secure system. Please note that it is "cold and harsh" anyway once it really enters the open world where it is attacked from all sides. Being fully prepared for that is better than relying on some (unreliable?) help in the field.
Octocontrabass
Member
Member
Posts: 5521
Joined: Mon Mar 25, 2013 7:01 pm

Re: Os and implementing good gpu drivers(nvidia)

Post by Octocontrabass »

embryo2 wrote:There has to be a deserialized class with malicious code, else there's no vulnerability.
The authors demonstrated remotely executing "touch /tmp/pwned" without being an authenticated user. Are you saying it's not a vulnerability because "touch /tmp/pwned" is not malicious?
User avatar
Brendan
Member
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Os and implementing good gpu drivers(nvidia)

Post by Brendan »

Hi,
embryo2 wrote:
HoTT wrote:Just to be clear and not to dive into discussion about what arbitrary means or not: Do you think that the linked exploit is a severe security issue?
It is severe if you have access to the server's file system. But usually nobody except some privileged users has such access.
The fact that Jenkins developers panicked and rushed out a patch to mitigate the problem should give you a good idea of how wrong you are. Please note that the words "unauthenticated remote code execution" (the Jenkins developers' own words, not the words of the person who discovered the vulnerability) mean exactly what they sound like - someone who is not authorised is able to run remote code (not local code).
embryo2 wrote:It works like this:
No, it works like this:
  • You write 100% correct code
  • You release your product
  • Your customers get their butt raped, while the "managed environment" does nothing to prevent it
embryo2 wrote:
Brendan wrote:It's not the developers - it's users who don't want "slow because the programmer thought they were too incompetent to write software that isn't buggy".
Are you so competent that you have no bugs?
I'm so competent that not only do I have an extremely low rate of bugs in software that I release, I also do everything possible to work around both hardware and firmware bugs, and provide multiple layers to guard against problems in my software, hardware and firmware. I don't even assume RAM works properly.
embryo2 wrote:
Brendan wrote:It's like.. Would you buy a sports car that has to be surrounded by 2 meter thick bubble-wrap at all times because the brakes probably have design flaws? Of course not - you'd want a car where the manufacturer made sure the brakes work that doesn't need bubble wrap.
There is an internal "bubble-wrap at all times": the airbag. So people prefer security, even if it costs some money.
Airbags exists to protect against user error, not to protect against design flaws.
embryo2 wrote:
Brendan wrote:I suggest only having necessary protection (e.g. isolation between processes and the kernel) and not having any unnecessary protection (protecting a process from itself) that nobody cares about and shouldn't be needed at all for released software.
Then you suggest using cars without airbags.
I suggest using cars without design flaws (that do guard against user error).
embryo2 wrote:
Brendan wrote:but that doesn't change the fact that hardware is able to do it better/faster than software can
Ok, let's look at the hardware. How does it work?
I very much doubt (especially after reading what you wrote) that you have the faintest clue how hardware works. You don't even seem to know the difference between micro-ops and micro-code; and the fact that you chose something that has nothing at all to do with security (FMUL) and not something that has security implications (e.g. a read or write) doesn't help much.

Mostly; CPUs do not do "one sequential step at a time". For example; the CPU can do the multiplication while doing the pre-checks in parallel, and then (afterwards) discard the results of the multiplication if the pre-checks failed. They're also "pipelined", which means they can be doing the first part of one instruction while they're doing the second part of another instruction while they're doing the third part of another instruction. A modern "Core i7" probably has over 10 instructions "in flight" (on average). It's this "split everything into tiny pieces and do many pieces in parallel" nature that makes hardware many times faster than software can be.
embryo2 wrote:Now look at the actual processor's work. Does it magically perform something without spending time or silicon on it? No, it just executes a more detailed stream of instructions. So if we were able to execute that detailed stream in software, what would be different?
What's different is that software is unable to split everything into tiny pieces and do many pieces in parallel. Software is restricted to sequential steps (instructions).
embryo2 wrote:
Brendan wrote:One jo just gives you undefined behaviour. You need a comparison or something before it;
In the case of your proposed "into" instruction, there should be nothing else except the "jo".
Brendan wrote:plus something to jump to;
In the case of all your proposed instructions, there should also be the code that is invoked when the interrupt fires.
In my case, the kernel provides exception handlers to handle it (e.g. by adding info to a log and terminating the process) and there's no additional bloat in the process itself to explicitly handle problems that shouldn't happen.
embryo2 wrote:
Brendan wrote:plus some way to tell CPU that its an extremely unlikely branch (so it doesn't waste the CPU's branch target buffer).
Yes, here it would be better to have a processor which executes exactly what we want, rather than the crap it thinks is better for the situation.
Brendan wrote:Then you realise it only does one limit and that you typically need two limits ("0 < x < 1234") and you actually need a pair of them.
Do you think the "bound" instruction doesn't need to check against two limits?
The "bound" instruction does check against 2 limits. Your "jo" does not.
embryo2 wrote:
Brendan wrote:Finally; you create a product and get to add the words "slower than everything else because we suck" to all your advertising and wonder why everyone buys the faster alternative from your competitors while your company goes bankrupt.
I can only tell you IBM's prices. For one instance of the WebSphere Application Server (WAS), IBM wants something around $50,000. For a product based on WAS and targeted at the enterprise integration market, the price starts somewhere near $300,000. You can compare that with systems with tags like "faster than everything else because we are so smart and brave".
Heh. Web technologies are "special".
embryo2 wrote:
Brendan wrote:As a user; have you ever purchased any software written in Java?
My employers (and I suppose yours too) use a lot of IBM WAS instances.
I meant you personally; not some stuffed suit in the purchasing department that doesn't know the difference between a server and a waitress.

I can guarantee that I have never and will never be willing to work in web development (like I said above, they're "special").
embryo2 wrote:
Brendan wrote:because software that's actually worth paying for is never written in Java.
So, you still have no idea what Android is. I advise you to try it. It's cool.
You mean, the OS written primarily in C, where competent developers use the NDK and write apps in C/C++, and where Java is used for portability and not because it's "managed"?
embryo2 wrote:
Brendan wrote:If a user has a choice of getting better speed for a particular platform and understands its security and safeness consequences then they never choose "managed".
Billions of users use Windows with its .Net.
Only because they don't have a choice (e.g. the software they want to run has no equivalent unmanaged alternative).
embryo2 wrote:
Brendan wrote:You can not have it both ways. You can't pretend that a managed environment provides protection and no protection at the same time.
I can have it both ways. I compile it without safety checks if I trust the code source, and I compile it with safety checks if I do not trust the source. In fact it looks even simpler: the environment just asks me if I want to try some speed and lose the safety, and I answer depending on my information about the source.
I was wrong. Apparently you've got so little intelligence that you actually are able to pretend that a managed environment provides protection and no protection at the same time (by suggesting that you can run the code in an unmanaged environment and ignoring the fact that an unmanaged environment is not a managed environment).
embryo2 wrote:
Brendan wrote:Note that in case of managed, normal recovery typically fails spectacularly in practice.
As you have said, "Java isn't worth learning enough about", so your "typically" here is just a product of your imagination.
No, my "typically" comes from using software written in Java that fails spectacularly as soon as something goes wrong.
embryo2 wrote:
Brendan wrote:There are no web servers written for any managed environment to compare
Ok, if you don't know the enterprise development industry, I can point out a few things here: WAS, WebLogic, jBoss, Tomcat, and GlassFish, to name a few.
None of these are web servers - they're all "bloatware" (frameworks, etc) that sit between web developers and web servers. Websphere seems to rely on Apache; WebLogic seems to rely on Apache, jBoss seems to be the combination of Apache and Tomcat, and GlassFish seems to rely on Apache.

Note: Some of these do have weeny little HTTP servers built in (for development, etc), but they still all seem to rely on Apache for production use because Java is too slow to handle more than 20 HTTP connections at the same time.
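For reference, a minimal sketch of what such a built-in server looks like, using the JDK's own com.sun.net.httpserver package (shipped with the JDK since Java 6). This only shows the mechanism; whether it holds up under production load is exactly what is being argued here:

Code:

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public final class TinyHttpServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            byte[] body = "hello\n".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();   // serves http://localhost:8080/ until the process is killed
    }
}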
embryo2 wrote:
Brendan wrote:It's obvious that I think that reducing both the complexity and amount of critical code is important. A 64 KiB micro-kernel alone is far better than a 10 MiB monolithic kernel alone, or a 64 KiB micro-kernel plus a 10 MiB "managed environment"; and all 3 of these options are better than a 10 MiB monolithic kernel plus a 10 MiB "managed environment".
Ok, let's use your assessment. We can see that managed vs unmanaged amounts to roughly a doubling of complexity. Now remember the time when there were no compilers, no OSes, no high-level languages. There was almost no complexity. Now we have the compilers, the languages, the OSes, and all the complexity that comes with them. And what? Did the world crash? No. But the complexity increase was orders of magnitude! And what? Nobody sees a problem. It's progress. It manages to deal with the complexity increase.
For both managed and unmanaged; what happened is that the quality of software (in terms of both performance and bugs) was reduced significantly in an attempt to reduce developer time. It has nothing at all to do with managed vs. unmanaged and everything to do with incompetent programmers that don't understand how computers actually work because they spend all day gluing together shrink-wrapped libraries in very high-level languages.
embryo2 wrote:
Brendan wrote:ARM's CPUs are smaller because they don't clock as fast, have smaller caches, have crappy "uncore" (no built-in PCI-e, etc) and have near zero RAS features. It has nothing to do with baggage.
ARM CPUs are smaller because they don't need all of this Intel bloat. Yes, simpler is better.
An abacus is simpler. You should sell your computers and buy one.
embryo2 wrote:
Brendan wrote:and the fact that no other company can come close to its single-threaded performance.
No company can efficiently target a market locked to the Intel instruction set. But in specific areas there are some interesting processors. The problem is in the comparison: Intel benchmarks its processors using software that is specifically built for Intel processors, so any competitor has to emulate Intel's instruction set and trade complexity for compatibility.
So, you have no idea how benchmarks like SPECint and SPEC2006 work? Intel benchmark their CPUs using standardised benchmarks compiled for 80x86; ARM benchmark their CPUs using the same standardised benchmarks compiled for ARM, IBM benchmark their CPUs using the same standardised benchmarks compiled for POWER, etc. None of them execute 80x86 instruction set except those using 80x86 CPUs (Intel and AMD).
embryo2 wrote:
Brendan wrote:Using more silicon is not the problem, it's how you use it. For an analogy; you can add 10000 lines of code to an existing application (e.g. add a new feature or something) without affecting the performance of the old code; or you can add 5 lines in a critical spot and completely destroy performance.
Yes, principles are important. But compatibility and hidden microcode are obvious drawbacks of Intel's solution.
Compatibility is the reason Intel has killed every other architecture that dared attempt to compete in the laptop/desktop/workstation space. I'm not sure it's fair to call killing every competitor a drawback. I have no idea why anyone would think micro-code is a drawback.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
User avatar
Schol-R-LEA
Member
Member
Posts: 1925
Joined: Fri Oct 27, 2006 9:42 am
Location: Athens, GA, USA

Re: Os and implementing good gpu drivers(nvidia)

Post by Schol-R-LEA »

I am going to say outright what no one on either side seems to be willing to:

MANAGED CODE IS A CHIMERA.

There is no such thing as 'managed code', any more than there is such a thing as 'computer security', and for the same reasons: because it is not possible, even in principle, to completely secure any Turing-equivalent computation engine, for reasons that were (in part) explained in the paper "On Computable Numbers" itself. Anyone who says otherwise is trying to square the circle.

Runtime checks are just as vulnerable as static checks, but they catch different and often exclusive sets of vulnerabilities, and neither of them, either alone or together, can ensure secure operation. This is a mathematical property of mechanical computation systems, not a design flaw: the existence of the G function in any Turing-equivalent system (and many weaker systems) ensures that it will always be there. Adding further checks is no solution, either, as that leads to the Von Neumann Catastrophe of Infinite Regress, since for any system with a finite number of checks, Goedel's incompleteness theorems would indicate that a different G would still exist for the system as a whole.

(OK, I'm playing fast and loose with the concepts here, since technically G is a statement that cannot be decided in a given system of predicate logic, not a function that exploits a weakness in a computation engine; however, the proofs of vulnerability are based on the theoretical incompleteness and undecidability proofs of the class that Goedel's theorems are in, so as shorthand I'm just saying that it's an outgrowth of them.)

Similar problems exist with any attempt to ensure that only code produced by a certified or managed compiler is run. If a language is Turing-equivalent, then it can be used to implement an unmanaged compiler, generate spoofed certification (regardless of whether the hashing algorithm is publicly known - it is impossible, once again even in principle, to prevent reverse engineering the algorithm without rendering the code unusable), and insert the unmanaged code into an executable in an undetectable manner. The best you can do is make it unreasonably difficult, and the definition of "unreasonably difficult" can and will vary with the circumstances.

This means that practical computer security must be a process, not a static defense or even static defense-in-depth. It requires trade-offs and conscious decision-making not only from compiler developers and OS developers, but also from library developers, client-programmers, system administrators, and users; and since any of them can and eventually will fail along the way, the impact of any one decision must be limited in scope wherever feasible.

In short, the term 'managed code' is a marketing term, not a technology term. Dispose of it, if you intend to actually accomplish anything meaningful.
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.
Post Reply