Os and implementing good gpu drivers(nvidia)

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Os and implementing good gpu drivers(nvidia)

Post by Brendan »

Hi,
embryo2 wrote:
Brendan wrote:That brings me to unsolvable problems. What does the compiler do if it can't possibly determine if a piece of code is safe or not?

The only thing it can do is "false negatives".
Wrong. It can inject safety checks (as you already know, but refuse to consider this option).
As soon as you do that you end up with run-time overhead and run-time failures. Yes, it's possible, but it's also crappy, and the "extra run-time overhead" part is something Rusky was trying to say wasn't necessary.
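
To make "inject safety checks" concrete, here is a minimal C sketch (all names invented for illustration) of the kind of lowered code such a compiler might emit where it cannot prove an access safe; the injected branch is exactly the run-time overhead in question:

Code: Select all

#include <stdio.h>
#include <stdlib.h>

#define LEN 16
static int buffer[LEN];

/* The injected check: a controlled run-time failure instead of
 * silent memory corruption. */
static void checked_store(int *base, size_t len, size_t idx, int value)
{
    if (idx >= len) {                    /* the injected bounds check */
        fprintf(stderr, "bounds violation: index %zu, length %zu\n", idx, len);
        exit(EXIT_FAILURE);              /* run-time failure, not corruption */
    }
    base[idx] = value;
}

int main(void)
{
    checked_store(buffer, LEN, 3, 42);   /* fine, but still pays for the check */
    checked_store(buffer, LEN, 100, 7);  /* caught at run time */
    return 0;
}
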
embryo2 wrote:
Brendan wrote:Your entire OS's security will depend on this compiler. Your entire OS's security will depend on something that's expected to have between 80000 and 400000 errors.
It is a common problem for all software, whether managed or not. So let's set this common issue aside while talking about managed vs unmanaged.
You're the one that keeps bringing up "managed". I'm the one who forgot about "managed" days ago (after showing that AlexHully's plans are not "managed" at all).

All I'm suggesting is that an "ahead of time compiler" alone is grossly inadequate for security purposes (regardless of whether that "ahead of time compiler" is an assembler, a C compiler, a Rust compiler, or a compiler for a far more restrictive language); because regardless of how restrictive the language is, nothing protects the OS from bugs in that "ahead of time compiler".

Of course you're right in that bugs are a common problem for all software; which is why (especially for monolithic kernels) you also want hardware protection to protect from bugs in the kernel; and why some people prefer micro-kernels, and why some people run OSs in virtual machines (to add an extra layer of protection), and why IOMMUs can be used for more than just virtualisation, and why Intel added MPX (which is used by Linux as additional hardware protection against kernel bugs), and why Intel added SGX to protect some applications (those that need a high level of security) from everything else in the OS.

Relying on a compiler and nothing else for security (and ignoring all the security/protection features the CPU provides) is as idiotic as relying on the hardware's security alone (and having a compiler that never checks for or detects any errors). You want multiple layers; not "all eggs in same basket".

Of course even with multiple layers of security (e.g. an "ahead of time" compiler checking for errors when converting source into byte code, a virtual machine checking for errors at run-time, and the OS using the CPU's protection features) you end up with frequent critical vulnerabilities in Java; despite the fact that it's a mature product that's been constantly improved over 2 entire decades.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Os and implementing good gpu drivers(nvidia)

Post by Rusky »

The point is that it may be worth it, depending on your situation, to trade off some of the CPU's protections for software-based protection. Maybe you want a security architecture that doesn't match the CPU's very well. Maybe you're designing an OS for use primarily by single applications in virtual machines. Et cetera.
Ready4Dis
Member
Posts: 571
Joined: Sat Nov 18, 2006 9:11 am

Re: Os and implementing good gpu drivers(nvidia)

Post by Ready4Dis »

Rusky wrote:The point is that it may be worth it, depending on your situation, to trade off some of the CPU's protections for software-based protection. Maybe you want a security architecture that doesn't match the CPU's very well. Maybe you're designing an OS for use primarily by single applications in virtual machines. Et cetera.
Nobody is trying to say it's not worth it; that depends on what your goals are. We are just saying that you don't magically gain something for nothing.
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Os and implementing good gpu drivers(nvidia)

Post by embryo2 »

Ready4Dis wrote:In a managed OS, you need to be sure that the .exe/.com/elf or whatever was compiled with your 'safe' compiler; if it wasn't, then there are absolutely no guarantees that it won't do something, and there is nothing stopping it from taking over the entire PC.
The point is: any program must be compiled by the OS before it runs under that OS. That is management. The code is managed in several ways: it is compiled, it is extended with safety checks, it uses the managed environment's memory management services, the resources it uses are also managed by the OS, and it can be profiled at runtime, with that information used to increase its speed. Many things become possible when the environment manages them, without requiring the developer to spend time on tedious work or the user to be too concerned about security.
Ready4Dis wrote:Again, I am not saying it's impossible, just improbable to make something secure and fast at the same time, as those two are normally mutually exclusive.
It is progress that makes things probable and possible. The direction of progress is towards automated systems, and handwritten C/assembly code will be obsolete very soon (on a historical scale).
My previous account (embryo) was accidentally deleted, so I had no choice but to use something new. But maybe it was a good lesson about software reliability :)
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Os and implementing good gpu drivers(nvidia)

Post by embryo2 »

Brendan wrote:As soon as you do that you end up with run-time overhead and run-time failures.
We've talked about this. Overhead will decrease as compiler development progresses, and failures are safely handled by properly written code. But in an unmanaged environment any failure can lead to memory corruption and an application crash (at least).
Brendan wrote:All I'm suggesting is that an "ahead of time compiler" alone is grossly inadequate for security purposes
OK, I'm not going to reject hardware protection. But it can be extended a lot with the help of managed environments. And in the end hardware protection will be obsolete.
Brendan wrote:and why Intel added MPX (which is used by Linux as additional hardware protection against kernel bugs), and why Intel added SGX to protect some applications (those that need a high level of security) from everything else in the OS.
Is the problem of software bugs being solved by hardware? No. It helps a bit, but the general direction is towards better compilers. Compilers can solve the problem; hardware can't.
Brendan wrote:Of course even with multiple layers of security (e.g. an "ahead of time" compiler checking for errors when converting source into byte code, a virtual machine checking for errors at run-time, and the OS using the CPU's protection features) you end up with frequent critical vulnerabilities in Java; despite the fact that it's a mature product that's been constantly improved over 2 entire decades.
Here again I answer with the same: any complex system has bugs and the JVM is no exception, but any unmanaged OS on earth is far more exposed to bugs. Because if something frees a developer from tedious work, the developer makes fewer bugs. Only automation (complete automation, in the extreme) is the way we should go. Automation means management of everything by a system instead of by a programmer.
My previous account (embryo) was accidentally deleted, so I had no choice but to use something new. But maybe it was a good lesson about software reliability :)
Ready4Dis
Member
Posts: 571
Joined: Sat Nov 18, 2006 9:11 am

Re: Os and implementing good gpu drivers(nvidia)

Post by Ready4Dis »

If those are your goals, then go for it. I still don't think you're going to magically gain performance and security at the same time. I am not saying the goals aren't noble, nor am I saying they're unachievable; I'm just saying that it's a lot of work, and by itself it doesn't solve the problem. You need (as you alluded to) an entire ecosystem designed around it (which hogs resources and slows things down), as well as compiling just in time and assuming nothing changes the compiled binary, which will make launching large applications very slow indeed.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Os and implementing good gpu drivers(nvidia)

Post by Brendan »

Hi,
embryo2 wrote:
Brendan wrote:As soon as you do that you end up with run-time overhead and run-time failures.
We've talked about this. Overhead will decrease as compiler development progresses, and failures are safely handled by properly written code. But in an unmanaged environment any failure can lead to memory corruption and an application crash (at least).
For managed environments, initially overhead and number of bugs caused by the environment will both be bad, and (in theory) as the JIT and/or compiler improves the overhead and number of bugs caused will reduce; but the overhead can never be zero and performance can't be as good as unmanaged.

Note that managed environments don't avoid application crashes - it's "application crashed due to exception detected by software/managed environment" instead of "application crashed due to exception detected by hardware". The number of application crashes can only be decreased by detecting bugs at compile time instead of detecting them at run-time.
embryo2 wrote:
Brendan wrote:All I'm suggesting is that an "ahead of time compiler" alone is grossly inadequate for security purposes
OK, I'm not going to reject hardware protection. But it can be extended a lot with the help of managed environments. And in the end hardware protection will be obsolete.
For what AlexHully was proposing (unmanaged, where "100%" of bugs are detected at compile time) a managed environment can only detect bugs caused by the "ahead of time" compiler while introducing additional bugs in the managed environment itself. The bugs in the compiler are likely to be approximately equal to the bugs in the managed environment itself; so the end result is more overhead with no difference in number of bugs.

Hardware protection will never be obsolete because hardware can do the checks in parallel with normal execution, making it effectively free. For performance, the overhead of managed environments can never be "effectively free" and will always be worse.

For quality control and debugging (e.g. before software is released to normal users, where performance is irrelevant), managed environments (e.g. valgrind) are useful if (and only if) the compiler isn't able to detect the bugs at compile time anyway.
embryo2 wrote:
Brendan wrote:and why Intel added MPX (which is used by Linux as additional hardware protection against kernel bugs), and why Intel added SGX to protect some applications (those that need a high level of security) from everything else in the OS.
Is the problem of software bugs being solved by hardware? No. It helps a bit, but the general direction is towards better compilers. Compilers can solve the problem; hardware can't.
You're right - things like hardware protection and managed environments don't solve software bugs; they only minimise the damage bugs can cause. However, I was mostly talking about security, and minimising the damage bugs can cause does improve security.
embryo2 wrote:
Brendan wrote:Of course even with multiple layers of security (e.g. an "ahead of time" compiler checking for errors when converting source into byte code, a virtual machine checking for errors at run-time, and the OS using the CPU's protection features) you end up with frequent critical vulnerabilities in Java; despite the fact that it's a mature product that's been constantly improved over 2 entire decades.
Here again I answer with the same: any complex system has bugs and the JVM is no exception, but any unmanaged OS on earth is far more exposed to bugs. Because if something frees a developer from tedious work, the developer makes fewer bugs. Only automation (complete automation, in the extreme) is the way we should go. Automation means management of everything by a system instead of by a programmer.
You're conflating unrelated issues. You're saying that higher level languages and/or huge fat libraries (where programmers do less) reduce bugs; and that a higher level language with a huge fat library that is unmanaged is better than a lower level language with no libraries that is managed.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Os and implementing good gpu drivers(nvidia)

Post by embryo2 »

Brendan wrote:For managed environments, initially overhead and number of bugs caused by the environment will both be bad, and (in theory) as the JIT and/or compiler improves the overhead and number of bugs caused will reduce; but the overhead can never be zero and performance can't be as good as unmanaged.
Why does unmanaged have better performance? Because it doesn't check dangerous code output. So, you trade danger for speed. I prefer the reverse trade.
Brendan wrote:Note that managed environments don't avoid application crashes - it's "application crashed due to exception detected by software/managed environment" instead of "application crashed due to exception detected by hardware". The number of application crashes can only be decreased by detecting bugs at compile time instead of detecting them at run-time.
It was discussed and shown that a managed environment allows 99% of an application's functionality to remain useful after an exception is thrown, while unmanaged crashes with no other option. So 100% of functionality is unavailable in the unmanaged case and only 1% in the managed case.
Brendan wrote:For what AlexHully was proposing (unmanaged, where "100%" of bugs are detected at compile time)
Right now that's impossible, so it's only a theoretical possibility somewhere in the future.
Brendan wrote:a managed environment can only detect bugs caused by the "ahead of time" compiler while introducing additional bugs in the managed environment itself. The bugs in the compiler are likely to be approximately equal to the bugs in the managed environment itself; so the end result is more overhead with no difference in number of bugs.
Bugs in the environment are less important. They are found more quickly, and more effort is aimed at exactly that kind of bug. Most bugs in the JRE, for example, are of the same kind found in any unmanaged library - bugs related to SSL, SSO, web server functionality, or other library functions. They are just logical errors in library functions, not a consequence of being managed or unmanaged; every developer produces such bugs in any language and environment. But bugs attributable to the environment itself are very rare.

So we have a situation where the complexity of the environment doesn't produce more viable bugs. And because you stress the bugs related to the environment, I have to decline that objection.
Brendan wrote:Hardware protection will never be obsolete because hardware can do the checks in parallel with normal execution, making it effectively free.
If hardware does something, it means some algorithms are implemented there and the hardware just invokes them. Why should we move algorithms down to the hardware level? Is a hardware implementation of an algorithm easier? No. Does a hardware implementation consume more silicon and power? Yes. So why not implement these algorithms in software and make the hardware simpler and cheaper (in terms of total cost of ownership, including unit price and power consumption)?
Brendan wrote:For performance, the overhead of managed environments can never be "effectively free" and will always be worse.
Yes, if we are willing to trade danger for speed. It is possible to write a program for personal use that is used in only one way and doesn't expose itself to the network or other security-vulnerable environments. Then we can disable all checks and get some speed increase (not a very big one, in fact). But what prevents us from ordering the managed environment to produce such an "unchecked" version of the code and run it in an isolated space? So if we want to live dangerously, we can do it even with a managed environment. But if we do not want to be in danger, then we just let the environment guard us against many threats.
Brendan wrote:You're conflating unrelated issues. You're saying that higher level languages and/or huge fat libraries (where programmers do less) reduce bugs; and that a higher level language with a huge fat library that is unmanaged is better than a lower level language with no libraries that is managed.
You misunderstood me. Fat libraries are the same for managed and for unmanaged; the thing that differs is the environment. The environment lets us make fewer bugs because it frees us from tedious work, so we can have fewer bugs even in fat libraries. And while the environment can also have bugs, it will always have fewer critical ones, simply because such bugs affect so many users that the attention paid to them is very great. As a result we can have a robust environment and fat libraries with fewer bugs. Add to that the security and reliability of the managed approach, and in the end the number of bugs in unmanaged will exceed the number of bugs in managed.
My previous account (embryo) was accidentally deleted, so I had no choice but to use something new. But maybe it was a good lesson about software reliability :)
Ready4Dis
Member
Posts: 571
Joined: Sat Nov 18, 2006 9:11 am

Re: Os and implementing good gpu drivers(nvidia)

Post by Ready4Dis »

embryo2 wrote:
Brendan wrote:For managed environments, initially overhead and number of bugs caused by the environment will both be bad, and (in theory) as the JIT and/or compiler improves the overhead and number of bugs caused will reduce; but the overhead can never be zero and performance can't be as good as unmanaged.
Why does unmanaged have better performance? Because it doesn't check dangerous code output. So, you trade danger for speed. I prefer the reverse trade.
That was exactly the argument: you are trading speed for safety/security, and as I mentioned many times there is nothing wrong with that as long as it matches your goals. The argument was that you guys said you won't lose speed and will gain tons of security at the same time.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Os and implementing good gpu drivers(nvidia)

Post by Brendan »

Hi,
embryo2 wrote:
Brendan wrote:For managed environments, initially overhead and number of bugs caused by the environment will both be bad, and (in theory) as the JIT and/or compiler improves the overhead and number of bugs caused will reduce; but the overhead can never be zero and performance can't be as good as unmanaged.
Why does unmanaged have better performance? Because it doesn't check dangerous code output. So, you trade danger for speed. I prefer the reverse trade.
In general; unmanaged has better performance because it doesn't need to check for anything dangerous (because hardware already checks that for "free").

For some specific cases (and not "in general"), where things like hand-optimised assembly are beneficial, managed environments are a worse performance problem.

For other specific cases managed environments are far worse for performance. An example of this is run-time generated deterministic finite automatons (used by utilities like "grep"); where to really achieve max. performance you want to generate a jump table and code for each state, and where the code generated for each state does something a bit like "state00: nextByte = goto state00jumpTable[buffer[pos++]];".
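
To make the shape concrete, here is a toy sketch of that "jump table per state" pattern using GNU C's labels-as-values extension (the automaton and all names are invented for illustration; this is not grep's actual code). A run-time code generator would emit tables and states like these for each state of the compiled pattern:

Code: Select all

#include <stdio.h>

/* Toy automaton: accept 'a'* followed by 'b'. */
static int run_dfa(const unsigned char *buffer, size_t len)
{
    static void *state0_table[256];
    size_t pos = 0;

    /* Build the jump table for state 0 at run time:
     * 'a' stays in state 0, 'b' accepts, anything else rejects. */
    for (int c = 0; c < 256; c++)
        state0_table[c] = &&reject;
    state0_table['a'] = &&state0;
    state0_table['b'] = &&accept;

state0:
    if (pos >= len)
        goto reject;
    goto *state0_table[buffer[pos++]];   /* one indirect jump per byte */

accept:
    return 1;
reject:
    return 0;
}

int main(void)
{
    printf("%d\n", run_dfa((const unsigned char *)"aaab", 4));  /* prints 1 */
    printf("%d\n", run_dfa((const unsigned char *)"aac", 3));   /* prints 0 */
    return 0;
}
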

For other specific cases the performance problems caused by managed environments become severe. An example of this is things like Qemu; where (assuming there's no hardware virtualisation - e.g. emulating ARM on an 80x86 host) you can kiss goodbye to acceptable performance as soon as you hear the word "managed".
embryo2 wrote:
Brendan wrote:Note that managed environments don't avoid application crashes - it's "application crashed due to exception detected by software/managed environment" instead of "application crashed due to exception detected by hardware". The number of application crashes can only be decreased by detecting bugs at compile time instead of detecting them at run-time.
It was discussed and shown that a managed environment allows 99% of an application's functionality to remain useful after an exception is thrown, while unmanaged crashes with no other option. So 100% of functionality is unavailable in the unmanaged case and only 1% in the managed case.
Wrong. For unmanaged, code that failed to do simple things correctly can still attempt to do much more complicated recovery (which in my opinion is completely insane to begin with) by using things like exceptions and/or signals that were originally generated from hardware detection. In the same way; a managed environment can be sane and terminate the process without giving incompetent programmers the chance to turn their failure into a massive disaster.

Basically, it's a "language design" thing that has nothing at all to do with managed vs. unmanaged.
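
For reference, the unmanaged "recovery" pattern being criticised looks roughly like this minimal POSIX C sketch (invented names; it compiles, and it is exactly as fragile as suggested above, because the process state may already be corrupted by the time the handler runs):

Code: Select all

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

static sigjmp_buf recover_point;

static void on_segv(int sig)
{
    (void)sig;
    siglongjmp(recover_point, 1);   /* "recover" from the hardware fault */
}

int main(void)
{
    signal(SIGSEGV, on_segv);

    if (sigsetjmp(recover_point, 1) == 0) {
        volatile int *p = NULL;
        *p = 42;                    /* hardware (paging) detects the bad store */
        puts("never reached");
    } else {
        /* Process state may already be corrupted; carrying on is a gamble. */
        puts("caught SIGSEGV, attempting complicated recovery anyway");
    }
    return 0;
}
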
embryo2 wrote:
Brendan wrote:For what AlexHully was proposing (unmanaged, where "100%" of bugs are detected at compile time)
Right now that's impossible, so it's only a theoretical possibility somewhere in the future.
It's as possible as managed environments - both have the same "security vs. performance vs. complexity" compromise.
embryo2 wrote:
Brendan wrote:a managed environment can only detect bugs caused by the "ahead of time" compiler while introducing additional bugs in the managed environment itself. The bugs in the compiler are likely to be approximately equal to the bugs in the managed environment itself; so the end result is more overhead with no difference in number of bugs.
Bugs in the environment are less important. They are found more quickly, and more effort is aimed at exactly that kind of bug. Most bugs in the JRE, for example, are of the same kind found in any unmanaged library - bugs related to SSL, SSO, web server functionality, or other library functions. They are just logical errors in library functions, not a consequence of being managed or unmanaged; every developer produces such bugs in any language and environment. But bugs attributable to the environment itself are very rare.
I think you missed the point here. Where "100%" of bugs are detected at compile time, a managed environment is a pointless pile of bloat and you still need hardware protection to protect from bugs in the managed environment.
embryo2 wrote:So we have a situation where the complexity of the environment doesn't produce more viable bugs. And because you stress the bugs related to the environment, I have to decline that objection.
Now we have the situation where a fool is ignoring facts. Bugs in the environment are extremely important (far more important than any bug in any one application that will only ever affect that one application); and "most bugs in JRE" is not the same as "all bugs in JRE".
embryo2 wrote:
Brendan wrote:Hardware protection will never be obsolete because hardware can do the checks in parallel with normal execution, making it effectively free.
If hardware does something, it means some algorithms are implemented there and the hardware just invokes them. Why should we move algorithms down to the hardware level? Is a hardware implementation of an algorithm easier? No. Does a hardware implementation consume more silicon and power? Yes. So why not implement these algorithms in software and make the hardware simpler and cheaper (in terms of total cost of ownership, including unit price and power consumption)?
What you're saying is as stupid as something like: It can be proven that a Turing complete CPU can be implemented with a single instruction and nothing else; so anything more than one instruction and nothing else just consumes more power.
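
(For anyone curious: the "single instruction" claim refers to the classic SUBLEQ one-instruction computer. A toy interpreter sketch, with an invented example program, shows how little is strictly required - and why "possible" is not the same as "sensible":)

Code: Select all

#include <stdio.h>

/* subleq a,b,c:  mem[b] -= mem[a]; if (mem[b] <= 0) jump to c;
 * otherwise fall through. A negative jump target halts the machine. */
static void subleq(int *mem, int pc)
{
    while (pc >= 0) {
        int a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
        mem[b] -= mem[a];
        pc = (mem[b] <= 0) ? c : pc + 3;
    }
}

int main(void)
{
    /* Invented toy program: B += A, then halt.
     * Instructions in cells 0..11; data: A=7 at cell 12, B=35 at cell 13,
     * scratch Z=0 at cell 14. */
    int mem[] = { 12,14,3,  14,13,6,  14,14,9,  14,14,-1,  7,35,0 };
    subleq(mem, 0);
    printf("B = %d\n", mem[13]);   /* prints B = 42 */
    return 0;
}
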

The fact is that for a lot of things software is a lot less efficient and consumes far more power than implementing it in hardware would. Note that this is also why Intel is planning to add FPGA to Xeon chips and why Intel already added FPGA to some embedded CPU - so that (for specialised applications) people can program the hardware/FPGA to get efficiency improvements that software can't achieve.

For silicon alone you're right; but silicon is cheap and no sane person cares. CPU manufacturers like Intel are literally sitting around trying to think of ways to use more transistors to improve performance (especially single-threaded performance). They're not going to say "We've got a budget of 1 billion transistors, so lets only use half of them so that software is slower and we consume maximum power for 3 times as long!".
embryo2 wrote:
Brendan wrote:For performance, the overhead of managed environments can never be "effectively free" and will always be worse.
Yes, if we are willing to trade danger for speed. It is possible to write a program for personal use that is used in only one way and doesn't expose itself to the network or other security-vulnerable environments. Then we can disable all checks and get some speed increase (not a very big one, in fact). But what prevents us from ordering the managed environment to produce such an "unchecked" version of the code and run it in an isolated space? So if we want to live dangerously, we can do it even with a managed environment. But if we do not want to be in danger, then we just let the environment guard us against many threats.
Yes. Please note that I've been trying to help you understand that it's possible to run managed languages in an unmanaged environment, or run an unmanaged language in a managed environment, for a relatively long time now. It's nice to see I've at least partially succeeded.
embryo2 wrote:
Brendan wrote:You're conflating unrelated issues. You're saying that higher level languages and/or huge fat libraries (where programmers do less) reduce bugs; and that a higher level language with a huge fat library that is unmanaged is better than a lower level language with no libraries that is managed.
You misunderstood me. Fat libraries are the same for managed and for unmanaged; the thing that differs is the environment. The environment lets us make fewer bugs because it frees us from tedious work, so we can have fewer bugs even in fat libraries. And while the environment can also have bugs, it will always have fewer critical ones, simply because such bugs affect so many users that the attention paid to them is very great. As a result we can have a robust environment and fat libraries with fewer bugs. Add to that the security and reliability of the managed approach, and in the end the number of bugs in unmanaged will exceed the number of bugs in managed.
Managed environments (e.g. valgrind, Bochs) do make debugging easier; but that only matters when you're debugging, and doesn't help when you're writing code, or when the end user is using it; and it's still inferior to detecting bugs at compile time (sooner is always better than later).

Note that my idea of "ideal" is an unmanaged language; where the IDE checks for bugs while you type and has a built-in managed environment (e.g. a source level interpreter) for debugging purposes; and where the compiler detects as many bugs as possible when the code is compiled.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
HoTT
Member
Posts: 56
Joined: Tue Jan 21, 2014 10:16 am

Re: Os and implementing good gpu drivers(nvidia)

Post by HoTT »

Related.
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Os and implementing good gpu drivers(nvidia)

Post by embryo2 »

Ready4Dis wrote:The argument was that you guys said you won't lose speed and will gain tons of security at the same time.
Yes; in the end a good compiler will give us the best speed.
My previous account (embryo) was accidentally deleted, so I had no choice but to use something new. But maybe it was a good lesson about software reliability :)
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Os and implementing good gpu drivers(nvidia)

Post by embryo2 »

Brendan wrote:In general; unmanaged has better performance because it doesn't need to check for anything dangerous (because hardware already checks that for "free").
The checks are not free (development complexity, vendor lock-in, less space for caches, more power consumed, etc.). And "anything dangerous" is still dangerous - it kills the application.
Brendan wrote:For some specific cases (and not "in general"), where things like hand-optimised assembly are beneficial, managed environments are a worse performance problem.
Assembly is acceptable for managed environments, just as it is acceptable to let a user run any dangerous code if he wishes to and understands the consequences.
Brendan wrote:For other specific cases managed environments are far worse for performance. An example of this is run-time generated deterministic finite automatons (used by utilities like "grep"); where to really achieve max. performance you want to generate a jump table and code for each state, and where the code generated for each state does something a bit like "state00: nextByte = goto state00jumpTable[buffer[pos++]];".
If the jump table is accessible to a smart compiler, it can optimize it. In the worst case it can issue a warning about a possible performance decrease, and the developer can fix the overly complex code.
Brendan wrote:For other specific cases the performance problems caused by managed environments become severe. An example of this is things like Qemu; where (assuming there's no hardware virtualisation - e.g. emulating ARM on an 80x86 host) you can kiss goodbye to acceptable performance as soon as you hear the word "managed".
I didn't get your point. What's wrong with Qemu? And how is it related to managed?
Brendan wrote:For unmanaged, code that failed to do simple things correctly can still attempt to do much more complicated recovery (which in my opinion is completely insane to begin with) by using things like exceptions and/or signals that were originally generated from hardware detection.
The problem is that the code can corrupt memory within the guarded boundaries. So it's better not to use the proposed "recovery".
Brendan wrote:In the same way; a managed environment can be sane and terminate the process without giving incompetent programmers the chance to turn their failure into a massive disaster.
The situation here is very different. The example you are trying to ignore is still the same - a web server refuses to serve just one page, and every other page is served successfully.
Brendan wrote:I think you missed the point here. Where "100%" of bugs are detected at compile time, a managed environment is a pointless pile of bloat and you still need hardware protection to protect from bugs in the managed environment.
If 100% of bugs are detected at compile time, then I recompile the environment and need no hardware protection.
Brendan wrote:Now we have the situation where a fool is ignoring facts. Bugs in the environment are extremely important (far more important than any bug in any one application that will only ever affect that one application); and "most bugs in JRE" is not the same as "all bugs in JRE".
Bugs in critical code are also extremely important. The extra code base introduced by the environment isn't such an issue if we remember the size of the Linux kernel and its drivers, for example.
Brendan wrote:What you're saying is as stupid as something like: It can be proven that a Turing complete CPU can be implemented with a single instruction and nothing else; so anything more than one instruction and nothing else just consumes more power.
Wrong analogy. The instruction set should be optimal, not minimal.
Brendan wrote:The fact is that for a lot of things software is a lot less efficient and consumes far more power than implementing it in hardware would. Note that this is also why Intel is planning to add FPGA to Xeon chips and why Intel already added FPGA to some embedded CPU - so that (for specialised applications) people can program the hardware/FPGA to get efficiency improvements that software can't achieve.
I see it as a way to let users add extra parallel computation when they deal with massive number crunching, for example. But maybe it's better to simplify Intel's instruction set and trade the silicon for more arithmetic units?
Brendan wrote:but silicon is cheap and no sane person cares.
Then why does Intel add new instructions so slowly? Why do we still have no 1024 SSE registers?
Brendan wrote:Managed environments (e.g. valgrind, Bochs) do make debugging easier; but that only matters when you're debugging, and doesn't help when you're writing code, or when the end user is using it; and it's still inferior to detecting bugs at compile time (sooner is always better than later).

Note that my idea of "ideal" is an unmanaged language; where the IDE checks for bugs while you type and has a built-in managed environment (e.g. a source level interpreter) for debugging purposes; and where the compiler detects as many bugs as possible when the code is compiled.
While the ideal is far away, we can leverage the managed approach at runtime. It can make code safe, optimize it, and provide more debug information. So I'm going to have these options now, while you wait for the moment the ideal arrives.
My previous account (embryo) was accidentally deleted, so I had no choice but to use something new. But maybe it was a good lesson about software reliability :)
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Os and implementing good gpu drivers(nvidia)

Post by embryo2 »

HoTT wrote:Related.
Interesting, but I still have no idea how the author manages to invoke the deserialized payload. I'll try to read it carefully later.

And of course, if an admin is ready to expose all default ports to the internet, then even Java can't help.
My previous account (embryo) was accidentally deleted, so I had no choice but to use something new. But maybe it was a good lesson about software reliability :)
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Os and implementing good gpu drivers(nvidia)

Post by embryo2 »

HoTT wrote:Related.
Well, I finally managed to explore this piece of trash. The guy there pretends he is showing the world a very critical vulnerability that no one pays attention to. The vulnerability is in fact like this: if you are allowed to drop an exe file into the windows/system32 directory, then you can claim you've found a new Windows vulnerability. But it seems to me that free access to that directory is the first thing any sane admin would block. And well, if an insane admin allows everybody to do everything with the server's file system, then yes, the author of the claim can tell us: Java is vulnerable!
My previous account (embryo) was accidentally deleted, so I had no choice but to use something new. But maybe it was a good lesson about software reliability :)