Os and implementing good gpu drivers(nvidia)

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Os and implementing good gpu drivers(nvidia)

Post by Rusky »

While "managed" is largely a marketing term, computability really doesn't come into this at all. You don't need to be able to define or verify "security" for all possible programs in a Turing-equivalent language, and it is thus absolutely possible for a given system to be 100% secure.

The problem is not Godel, but engineering effort and human fallibility, which is Brendan's point. Security is a process not because of any theoretical properties of the tools we use, but because it's prohibitively expensive to formally verify everything the way e.g. seL4 is.
Schol-R-LEA
Member
Posts: 1925
Joined: Fri Oct 27, 2006 9:42 am
Location: Athens, GA, USA

Re: Os and implementing good gpu drivers(nvidia)

Post by Schol-R-LEA »

Sorry, I was editing my last post while you posted this; as I explained in the added paragraph, I am really referring not to Goedel's theorems themselves, but to some theorems which do apply to the (possible) existence of vulnerabilities in register machines. I was mainly addressing the claims by Embryo and AlexHully that a SASOS would be unassailable because it was managed code; I was simply showing that, even if their code were perfect, it would still be (in principle) vulnerable to some sort of attack.

Still, your point is valid: bringing up the existence of a theoretical vulnerability is unnecessary, since the inherent limitations of real-world engineering leave more than enough room for demonstrable ones. I think we agree here: 'secure' software has to balance vulnerability analysis against the diminishing returns of increasing complexity and decreasing performance. It was some of the other posters who appeared to be seeing the issue as a black-and-white, managed vs. unmanaged, Java vs. C question, which it isn't and never will be. It was those posters, not you, who kept making unfounded claims and assuming that Brendan was advocating eliminating all software analysis and runtime checks (which he wasn't).

Don't get me wrong; my own designs for both Thelema and Kether call for a number of features which are common trappings of 'managed' code (compiling to an intermediate form for portability, JIT code generation, application signing, extensive static analysis, limited runtime checks for some things that cannot be reasonably caught by static analysis, and a number of other safety checking features). However, I do not and would not use the term 'managed code' for it, and more importantly, my intention is to conduct research into implementing these features more efficiently, not to palm off a modification of an existing kernel as a viable commercial platform as distinguished from the original.

I think the real thing Brendan was saying was not so much that 'managed' systems are inherently wrong, but that their alleged advantages have not been clearly demonstrated by the people arguing for them. He is reasoning about the tradeoffs, and his conclusion is that most of them aren't worth the cost.

Take the grep example, for instance, one which was largely misunderstood. Brendan was saying that a typical regex compiler (or the compiler for its implementation language) is able to generate efficient code from its internal representation (in the form of a jump table) that runs on the specific processor directly. However, he goes on to say, if the regex compiler instead produces a generalized intermediate bytecode that is then re-compiled by the managed-code system, the JIT optimizer would (very likely) lack the critical higher-level structural information needed to see that such an optimization was possible, unless the bytecode had special-case support for it (which would add complexity).
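To make the jump-table idea concrete, here is a minimal sketch (mine, not Brendan's actual code) of what a regex compiler's internal representation looks like: a pattern such as "ab*c" reduced to a state-transition table driven by a tight loop. This table structure is exactly the higher-level information that is lost once the matcher is flattened into generic intermediate bytecode.

```python
# Hypothetical sketch: the pattern "ab*c" hand-compiled to a DFA jump table.
# States: 0 = start, 1 = seen 'a' (plus zero or more 'b'), 2 = accept, -1 = reject.

def compile_ab_star_c():
    table = {
        (0, 'a'): 1,
        (1, 'b'): 1,
        (1, 'c'): 2,
    }

    def match(text):
        state = 0
        for ch in text:
            # One table lookup per input character; any unlisted
            # (state, char) pair falls into the reject state.
            state = table.get((state, ch), -1)
            if state == -1:
                return False
        return state == 2

    return match

match = compile_ab_star_c()
print(match("abbbc"))  # True
print(match("ac"))     # True
print(match("abx"))    # False
```

A native regex compiler can turn that table directly into branch-free indexed jumps for the target CPU; a JIT seeing only generic bytecode for the loop has to rediscover that the dictionary is really a static transition table before it can do the same.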

(In case you're wondering, my solution to that specific problem is to use an AST representation for the portable 'executable', rather than a bytecode. However, that has its own tradeoffs, as seen with the 'slim binaries' used in some of the later Wirth languages, and in any case it still would need to allow 'unmanaged' code for many critical optimizations. In any case, that's just one instance of a more general problem.)

In other words, he's saying that if they intended to address every weakness of their system as it came up, they were looking at a continual cycle of adding more and more special cases, each adding its own overhead and vulnerabilities, until the 'smart' system loses its putative advantages over the more general, 'dumb', brute-force approach.
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Os and implementing good gpu drivers(nvidia)

Post by Brendan »

Hi,
Schol-R-LEA wrote:I think the real thing Brendan was saying was not so much that 'managed' systems are inherently wrong, but that their alleged advantages are not clearly demonstrated by the people arguing for them. He is reasoning about the tradeoffs, and his conclusion that most of them aren't worth the cost.
I'm mostly trying to say:
  • Most bugs (e.g. "printf("Hello Qorld\");") can't be detected by a managed environment, compiler or hardware; and therefore good software engineering practices (e.g. unit tests) are necessary
  • Languages and compilers can be designed to detect a lot more bugs during "ahead of time" compiling; and the design of languages like C and C++ prevent compilers for these languages from being good at detecting bugs during "ahead of time" compiling, but this is a characteristic of the languages and not a characteristic imposed by "unmanaged", and unmanaged languages do exist that are far better (but not necessarily ideal) at detecting/preventing bugs during "ahead of time" compiling (e.g. Rust).
  • Bugs in everything; including "ahead of time" compilers, JIT compilers, kernels and hardware itself; all mean that hardware protection (designed to protect processes from each other, and to protect the kernel from processes) is necessary when security is needed (or necessary for everything, except extremely rare cases like embedded systems and games consoles where software can't modify anything that is persisted and there's no networking)
  • The combination of good software engineering practices, well designed language and hardware protection means that the benefits of performing additional checks in software at run-time (a managed environment) are near zero even when the run-time checking is both exhaustive and perfect, because everything else detects or would detect the vast majority of bugs anyway.
  • "Exhaustive and perfect" is virtually impossible; which means that the benefits of performing additional checks in software at run-time (a managed environment) are less than "near zero" in practice, and far more likely to be negative (in that the managed environment is more likely to introduce bugs of its own than to find bugs)
  • Performing additional checks in software at run-time (required by my definition of "managed environment") must increase overhead
  • The "near zero or worse" benefits of managed environments do not justify the increased overhead caused by performing additional checks in software at run-time
  • Where performance is irrelevant (specifically, during testing done before software is released) managed environments may be beneficial; but this never applies to released software.
  • Languages that are restricted for the purpose of allowing additional checks in software at run-time to be performed ("managed languages"); including things like not allowing raw pointers, not allowing assembly language, not allowing explicit memory management, not allowing self modifying code and/or not allowing dynamic code generation; prevent software from being as efficient as possible
  • Software written in a managed language but executed in an unmanaged language (without the overhead of run-time checking) is also prevented from being as efficient as possible by the restrictions imposed by the managed language
  • General purpose code can not be designed for a specific purpose by definition; and therefore can not be optimal for any specific purpose. This affects libraries for both managed languages and unmanaged languages alike.
  • Large libraries and/or frameworks improve development time by sacrificing the quality of the end product (because general purpose code can not be designed for a specific purpose by definition).
  • For most (not all) things that libraries are used for; for both managed and unmanaged languages the programmer has the option of ignoring the library's general purpose code and writing code specifically for their specific case. For managed languages libraries are often native code (to avoid the overhead of "managed", which is likely the reason managed languages tend to come with massive libraries/frameworks) and if a programmer chooses to write the code themselves they begin with a huge disadvantage (they can't avoid the overhead of "managed" like the library did) and their special purpose code will probably never beat the general purpose native code. For an unmanaged language the programmer can choose to write the code themselves (and avoid sacrificing performance for the sake of developer time) without that huge disadvantage.
  • To achieve optimal performance and reduce "programmer error"; a programmer has to know what effect their code actually has at the lowest levels (e.g. what their code actually asks the CPU/s to do). Higher level languages make it harder for programmers to know what effect their code has at the lowest levels; and are therefore a barrier preventing both performance and correctness. This applies to managed and unmanaged languages alike. Note: as a general rule of thumb; if you're not able to accurately estimate "cache lines touched" without compiling, executing or profiling; then you're not adequately aware of what your code does at the lower levels.
  • The fact that higher level languages are a barrier preventing both performance and correctness is only partially mitigated through the use of highly tuned ("optimised for no specific case") libraries.
  • Portability is almost always desirable
  • Source code portability (traditionally used by languages like C and C++) causes copyright concerns for anything not intended as open source, which makes it a "less preferable" way to achieve portability for a large number of developers. To work around this developers of "not open source" software provide pre-compiled native executables. Pre-compiled native executables can't be optimised specifically for the end user's hardware/CPUs unless the developer provides thousands of versions of the pre-compiled native executables, which is extremely impractical. The end result is that users end up with poorly optimised software.
  • To avoid the copyright concerns of source code portability while also allowing software to be optimised specifically for the end user's specific hardware/CPUs; executable code needs to be delivered to users as some form of byte-code.
  • Various optimisations are expensive (e.g. even for fundamental things like register allocation finding the ideal solution is prohibitively expensive); and JIT compiling leads to a run-time compromise between the expense of performing the optimisation and the benefits of performing the optimisation. An ahead of time compiler has no such compromise and therefore can use much more expensive optimisations and can optimise better (especially if it's able to optimise for the specific hardware/CPUs).
  • There are massive problems with the tool-chains for popular unmanaged languages (e.g. C and C++) that prevent effective optimisation (specifically; splitting a program into object files and optimising them in isolation prevents a huge number of opportunities, and trying to optimise at link time after important information has been discarded also prevents a huge number of opportunities). Note that this is a restriction of typical tools, not a restriction of the languages or environments.
  • Popular JIT compiled languages are typically able to get close to the performance of popular "compiled to native" unmanaged languages because these "compiled to native" unmanaged languages have both the "not optimised specifically for the specific hardware/CPUs" problem and the "effective optimisation prevented by the tool-chain" problem.
  • "Ahead of time" compiling from byte-code to native on the end user's machine (e.g. when the end user installs software) provides portability without causing the performance problems of JIT and without causing the performance problems that popular unmanaged languages have.
In other words; the best solution is an unmanaged language that is designed to detect as many bugs as possible during "source to byte code" compiling and that does not prevent "unsafe" things (if needed), combined with an ahead of time "byte code to native" compiler on the end user's computer; where the resulting native code is executed in an unmanaged environment with hardware protection. Everything else; which includes all existing managed and unmanaged languages and all of their respective tools (that I'm aware of); is inferior for at least some of the reasons outlined above. 8)


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Antti
Member
Posts: 923
Joined: Thu Jul 05, 2012 5:12 am
Location: Finland

Re: Os and implementing good gpu drivers(nvidia)

Post by Antti »

Note: I wrote the following text before the previous post appeared, so it might make less sense... I did not check everything.
Brendan wrote:
embryo2 wrote:Are you so competent that you have no bugs?
I'm so competent that not only do I have an extremely low rate of bugs in software that I release, I also do everything possible to work around both hardware and firmware bugs, and provide multiple layers to guard against problems in my software, hardware and firmware. I don't even assume RAM works properly.
This comes at a price, although the low rate of bugs could be true (as far as I know we only have your word for it), and without actually releasing any software it is possible to keep this statement valid forever. Being that competent has required a lot of time, and it is unrealistic to assume programmers in general are going to do their hobby/job even half as thoroughly. The truth is that almost all other programmers are, in your own words, sent to the centre of the Sun. The next thing is to think about what is important and what is not. You may be competent at developing extremely correct software, but are you, at the same time, competent at doing it in a reasonable time frame? In some cases it is important to have that extremely correct software no matter how long it takes to develop (and to have no software at all if extreme correctness cannot be met). In other cases there is a real need for the software, and it must be developed in a reasonable time frame (not having the software at all is not an option).

However, the previous opinion was mostly a non-technical comment on the fact that not all programmers are doing personal projects. A lot of code may be bad, but that does not mean the programmers were bad. The code was good enough for its intended purpose, and the authors were competent because they used the available resources efficiently and released the software right on schedule. Of course there are real problems with modern programming environments that make things worse in general. I agree with the idea that protecting a process from itself should not be needed. Having a "cold and harsh" run-time environment will help ensure that processes are made strong enough to survive it. This will lead to better code audits, compile-time analysis, programmer knowledge, et cetera. Then there are details like run-time efficiency, (de facto) bug-free environments, and elegance. The things we on this forum are interested in?
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Os and implementing good gpu drivers(nvidia)

Post by embryo2 »

Antti wrote:Please note that it is "cold and harsh" anyway if it really enters into open world where it is attacked from all sides. Being fully prepared for it is better than relying on some (unreliable?) help on the field.
But we can have a "warm and tender" environment for developers, and armored skin around the beauty in the wild. We (developers) are inside the environment and it should be nice for us. Attackers are outside, and because of them the environment wears armor. The armor is the compilers and managed environments; unmanaged code is just naked in this analogy.
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Os and implementing good gpu drivers(nvidia)

Post by embryo2 »

Octocontrabass wrote:
embryo2 wrote:There should be deserialized class with malicious code. Else there's no vulnerability.
The authors demonstrated remotely executing "touch /tmp/pwned" without being an authenticated user. Are you saying it's not a vulnerability because "touch /tmp/pwned" is not malicious?
What the "authors demonstrated" is a lot of obfuscated talk about how smart they are at creating a very simple client for a very simple protocol. And finally they skip a lot of steps and show you "touch /tmp/pwned". But do you know how exactly they managed to run this "touch /tmp/pwned"? You should read the article again. But beware of the obfuscation and self-advertising.
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Os and implementing good gpu drivers(nvidia)

Post by embryo2 »

Brendan wrote:The fact that Jenkins developers panicked and rushed out a patch to mitigate the problem should give you a good idea of how wrong you are.
My offer still stands. Anybody here can claim he is able to run his code using the ports I can open on my server, and then we'll see how the obfuscated advertising holds up, even though some developers are eager to see danger here.
Brendan wrote:Airbags exists to protect against user error, not to protect against design flaws.
Is the car's speed a design flaw?
Brendan wrote:CPUs do not do "one sequential step at a time".
Yes, my eyes now are opened :)
Brendan wrote:It's this "split everything into tiny pieces and do many pieces in parallel" nature that makes hardware many times faster than software can be.
But your eyes are still closed. Do you see that any parallel operation can be written sequentially, just for better understanding? Or does your brain go parallel and execute 10 instructions at a time?

My example was about the set of operations required for the algorithm to be implemented. And your answer was about some optimizations I hadn't shown you. Yes, I skipped the optimized part. But do you think that after the proposed optimization the actual number of actions will be different?
Brendan wrote:What's different is that software is unable to split everything into tiny pieces and do many pieces in parallel. Software is restricted to sequential steps (instructions).
Do you mean restricted by Intel? Then yes, I agree.
Brendan wrote:In my case, the kernel provides exception handlers to handle it (e.g. by adding info to a log and terminating the process) and there's no additional bloat in the process itself to explictly handle problems that shouldn't happen.
It doesn't prevent the process from having the implicit handling of "additional bloat". Do you know how electronic gates work? Can you show a magical schematic with the ability to skip a bit-value check, or to set input levels according to a variable's value in memory?
Brendan wrote:Heh. Web technologies are "special".
Yes, it's a different world. Antti described it a bit in the post above. It's worth paying attention to that description.
Brendan wrote:I meant you personally; not some stuffed suit in the purchasing department that doesn't know the difference between a server and a waitress.
I purchase almost no software, but there was one game that I decided to buy. And after I installed it, the bloat just told me it needed me to register at some site and keep my PC connected to the internet forever. And yes, the bloat was written in C.
Brendan wrote:You mean, the OS written primarily in C, where competent developers use NDK and write apps in C/C++, and where Java is used for portability and not because its "managed"?
You can compare the share of those "competent developers" on the Android market. And of course, you will prefer to blame the majority of developers instead of recognizing the fact that the "competent developers" fail to compete with the majority (which uses Java only).
Brendan wrote:Apparently you've got so little intelligence that you actually are able to pretend that a managed environment provides protection and no protection at the same time (by suggesting that you can run the code in an unmanaged environment and ignoring the fact that an unmanaged environment is not a managed environment).
I repeated the word "choice" maybe 10 times. But you still miss it.
Brendan wrote:
embryo2 wrote:Ok, if you don't know the enterprise development industry I can point a few things here. It is WAS, WebLogic, jBoss, Tomcat, GlassFish to name a few.
None of these are web servers - they're all "bloatware" (frameworks, etc) that sit between web developers and web servers. Websphere seems to rely on Apache; WebLogic seems to rely on Apache, jBoss seems to be the combination of Apache and Tomcat, and GlassFish seems to rely on Apache.
Well, if "bloatware" runs the entire planet's operations, then by your standard it really would be better to send all developers to the Sun.

But I should remind you, the Apache web server is used as a front-end load balancer, not as the web server. Sometimes there are even hardware-based load balancers. The current state of Java technology doesn't pay great attention to low-level details such as socket pooling optimized for a particular OS, so it's easier to use Apache than to write OS-level code in Java.

And of course, there are solutions without a front-end Apache.
Brendan wrote:So, you have no idea how benchmarks like SPECint and SPEC2006 work?
Yes, I have no deep insight into the benchmarks' internals. But I know the tests are selected to match a typical load, and the typical load (surprise!) is an Intel-based PC load. Here you can see some tests:

Common CPUs
Low End CPUs
Low Mid Range CPUs
High Mid Range CPUs

Have you noticed that the lists are full of Intel-compatible processors, with no other processors at all?
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Os and implementing good gpu drivers(nvidia)

Post by embryo2 »

Schol-R-LEA wrote:I am going to say outright what no one on either side seems to be willing to:

MANAGED CODE IS A CHIMERA.
Maybe you have taken the name too seriously? OK, I agree to call it whatever you wish. But the essence of the operations will be the same.
Schol-R-LEA wrote:since for any system with a finite number of checks, Goedel's incompleteness theorems would indicate that a different G would still exist for the system as a whole
Do you believe in closed systems? It's worth remembering Goedel's theorem and its preconditions. We are outside the managed environment and we can prove things about it using external information. But yes, we can't mathematically prove we are right in the choice of managed vs. unmanaged. However, we can prove it by example. That is what wins the world.
Octocontrabass
Member
Posts: 5521
Joined: Mon Mar 25, 2013 7:01 pm

Re: Os and implementing good gpu drivers(nvidia)

Post by Octocontrabass »

embryo2 wrote:What "authors demonstrated" is a lot of obfuscated talks about how smart they are when creating very simple client for very simple protocol.
The complexity of the client and the protocol is irrelevant.
embryo2 wrote:And finally they skip a lot of actions and show you "touch /tmp/pwned".
  1. Locate an attack vector (a protocol where a serialized object is transmitted from the client to the server before the client has been authenticated)
  2. Generate a payload (e.g. using ysoserial)
  3. Inject the payload as a replacement for one of the serialized objects that would normally be transmitted
  4. The server executes the malicious code in the payload
What's missing?
embryo2 wrote:But do you know how exactly they managed to run this "touch /tmp/pwned"?
Which of the steps in their process (listed above) do you want me to explain?
embryo2 wrote:You should read the article again. But beware of the obfuscation and self-advertising.
I have been referring to it every time I write a reply. It seems very straightforward to me.
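Java-specific gadget chains aside, the four steps Octocontrabass lists can be sketched with Python's pickle, which has the same underlying property: the receiver's code decides nothing, because the serialized bytes themselves name the code to run on deserialization. This is a hypothetical stand-in for the "touch /tmp/pwned" payload, and deliberately harmless.

```python
import pickle

class Payload:
    def __reduce__(self):
        # On deserialization, pickle calls this callable with these args.
        # An attacker would name something like os.system here; we use
        # print so the sketch is harmless.
        return (print, ("malicious code ran during deserialization",))

attacker_bytes = pickle.dumps(Payload())  # step 2: generate the payload
pickle.loads(attacker_bytes)              # step 4: victim deserializes -> code runs
# prints "malicious code ran during deserialization"
```

The point of the sketch: no method of `Payload` needs to exist on the receiving side, and no authentication is involved; feeding untrusted bytes to the deserializer is itself the vulnerability.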
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Os and implementing good gpu drivers(nvidia)

Post by embryo2 »

Brendan wrote:
  • Most bugs (e.g. "printf("Hello Qorld\");") can't be detected by a managed environment, compiler or hardware; and therefore good software engineering practices (e.g. unit tests) are necessary
It's a compile-time detectable problem.
Brendan wrote:
  • Languages and compilers can be designed to detect a lot more bugs during "ahead of time" compiling; and the design of languages like C and C++ prevent compilers for these languages from being good at detecting bugs during "ahead of time" compiling, but this is a characteristic of the languages and not a characteristic imposed by "unmanaged", and unmanaged languages do exist that are far better (but not necessarily ideal) at detecting/preventing bugs during "ahead of time" compiling (e.g. Rust).
Ahead-of-time compilation doesn't solve the problem of runtime bugs. And security can also be compromised.
Brendan wrote:
  • Bugs in everything; including "ahead of time" compilers, JIT compilers, kernels and hardware itself; all mean that hardware protection (designed to protect processes from each other, and to protect the kernel from processes) is necessary when security is needed (or necessary for everything, except extremely rare cases like embedded systems and games consoles where software can't modify anything that is persisted and there's no networking)
Hardware protection requires time, power and silicon. Software protection can require less time, power and silicon.
Brendan wrote:
  • The combination of good software engineering practices, well designed language and hardware protection mean that the benefits of performing additional checks in software at run-time (a managed environment) is near zero even when the run-time checking is both exhaustive and perfect, because everything else detects or would detect the vast majority of bugs anyway.
The proposed combination is too far from achieving the stated goal of "near zero benefits" for runtime checks.
Brendan wrote:
  • "Exhaustive and perfect" is virtually impossible; which means that the benefits of performing additional checks in software at run-time (a managed environment) is less than "near zero" in practice, and far more likely to be negative (in that the managed environment is more likely to introduce bugs of its own than to find bugs)
It's negative only until smarter compilers are released. That's only a matter of time (and not that much time).
Brendan wrote:
  • Performing additional checks in software at run-time (required by my definition of "managed environment") must increase overhead
See previous item.
Brendan wrote:
  • The "near zero or worse" benefits of managed environments do not justify the increased overhead caused by performing additional checks in software at run-time
Safety and security justify the increase.
Brendan wrote:
  • Where performance is irrelevant (specifically, during testing done before software is released) managed environments may be beneficial; but this never applies to released software.
It applies to released software as well, because the issues of safety and security are still important.
Brendan wrote:
  • Languages that are restricted for the purpose of allowing additional checks in software at run-time to be performed ("managed languages"); including things like not allowing raw pointers, not allowing assembly language, not allowing explicit memory management, not allowing self modifying code and/or not allowing dynamic code generation; prevent software from being as efficient as possible
If efficiency is of paramount importance, we can buy efficient software from trusted sources, and because of that trust we can safely tell the managed environment to compile the code without safety checks and with attention to the developer's performance-related annotations. Next, it runs the code under hardware protection. And after we have tested some software usage patterns, we can safely remove even the hardware protection for every tested pattern and obtain even better performance.
Brendan wrote:
  • Software written in a managed language but executed in an unmanaged language (without the overhead of run-time checking) is also prevented from being as efficient as possible by the restrictions imposed by the managed language
Restrictions can be circumvented by the means described above.
Brendan wrote:
  • General purpose code can not be designed for a specific purpose by definition; and therefore can not be optimal for any specific purpose. This effects libraries for both managed languages and unmanaged languages alike.
Is the integer addition operation (x+y) a general-purpose one? Is it implemented inefficiently in the case of a JIT?
Brendan wrote:
  • Large libraries and/or frameworks improve development time by sacrificing the quality of the end product (because general purpose code can not be designed for a specific purpose by definition).
Here is the place for aggressive inlining and other similar techniques. But the code should be in a compatible form, like bytecode.
Brendan wrote:
  • For most (not all) things that libraries are used for; for both managed and unmanaged languages the programmer has the option of ignoring the library's general purpose code and writing code specifically for their specific case. For managed languages libraries are often native code (to avoid the overhead of "managed", which is likely the reason managed languages tend to come with massive libraries/frameworks) and if a programmer chooses to write the code themselves they begin with a huge disadvantage (they can't avoid the overhead of "managed" like the library did) and their special purpose code will probably never beat the general purpose native code. For an unmanaged language the programmer can choose to write the code themselves (and avoid sacrificing performance for the sake of developer time) without that huge disadvantage.
If performance is important, the environment's compiler is still too weak, and there's some mechanism of trust between a developer and a user, then the developer is perfectly free to implement any possible optimization tricks.
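The trade-off Brendan describes above can be made concrete with a small sketch. A general-purpose library sort cannot assume anything about its input, while special-purpose code can exploit what the programmer knows; the "values fit in [0, 255]" constraint below is hypothetical, chosen only to illustrate the point:

```java
import java.util.Arrays;

public class SpecialisedSort {
    // General-purpose: works for any ints, O(n log n).
    static int[] general(int[] a) {
        int[] copy = a.clone();
        Arrays.sort(copy);
        return copy;
    }

    // Special-purpose: exploits a (hypothetical) workload guarantee that
    // all values lie in [0, 255] -- a counting sort, O(n), which no
    // general library could assume on its own.
    static int[] specialised(int[] a) {
        int[] counts = new int[256];
        for (int v : a) counts[v]++;
        int[] out = new int[a.length];
        int i = 0;
        for (int v = 0; v < 256; v++)
            while (counts[v]-- > 0) out[i++] = v;
        return out;
    }

    public static void main(String[] args) {
        int[] data = {42, 7, 255, 0, 7};
        // Both produce the same ordering; only the cost differs.
        System.out.println(Arrays.equals(general(data), specialised(data)));  // true
    }
}
```

Whether the specialised version actually wins in practice still has to be measured, which is exactly where trust in the developer (or a smart enough compiler) enters the picture.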
Brendan wrote:
  • To achieve optimal performance and reduce "programmer error"; a programmer has to know what effect their code actually has at the lowest levels (e.g. what their code actually asks the CPU/s to do). Higher level languages make it harder for programmers to know what effect their code has at the lowest levels; and are therefore a barrier preventing both performance and correctness. This applies to managed and unmanaged languages alike. Note: as a general rule of thumb; if you're not able to accurately estimate "cache lines touched" without compiling, executing or profiling; then you're not adequately aware of what your code does at the lower levels.
If a developer faces some bottleneck and it's important, then he usually digs deep enough to find the root cause. So all your "harder for programmers to know" applies to beginners only.
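Brendan's "cache lines touched" rule of thumb is at least checkable with plain arithmetic. A minimal sketch, assuming a 64-byte cache line (typical for x86, but an assumption nonetheless):

```java
public class CacheLineEstimate {
    // Assumed cache line size; 64 bytes is typical for x86.
    static final int LINE_BYTES = 64;

    // Cache lines touched by one sequential pass over n ints.
    static int linesTouched(int n) {
        int bytes = n * Integer.BYTES;                // 4 bytes per int
        return (bytes + LINE_BYTES - 1) / LINE_BYTES; // round up
    }

    public static void main(String[] args) {
        // 1024 ints = 4096 bytes, so a sequential scan touches 64 lines:
        System.out.println(linesTouched(1024));   // 64
        // 1000 ints = 4000 bytes: 63 lines (the last one only partially used):
        System.out.println(linesTouched(1000));   // 63
    }
}
```

Being able to do this estimate without profiling is what Brendan means by being "adequately aware" of what the code does at the lower levels.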
Brendan wrote:
  • The fact that higher level languages are a barrier preventing both performance and correctness is only partially mitigated through the use of highly tuned ("optimised for no specific case") libraries.
Optimized libraries aren't the only way. Developer experience is a much preferable solution.
Brendan wrote:
  • Portability is almost always desirable
So, just use bytecode.
Brendan wrote:
  • Source code portability (traditionally used by languages like C and C++) causes copyright concerns for anything not intended as open source, which makes it a "less preferable" way to achieve portability for a large number of developers. To work around this developers of "not open source" software provide pre-compiled native executables. Pre-compiled native executables can't be optimised specifically for the end user's hardware/CPUs unless the developer provides thousands of versions of the pre-compiled native executables, which is extremely impractical. The end result is that users end up with poorly optimised software.
Copyright concerns can be avoided using downloadable software: just select your platform and get the best performance. But trust has to exist there. As it stands, any copyright holder can exploit users' inability to protect themselves; but in the managed case the environment takes care of using hardware protection, or even emulating the hardware, to detect potential threats.
Brendan wrote:
  • To avoid the copyright concerns of source code portability while also allowing software to be optimised specifically for the end user's specific hardware/CPUs; executable code needs to be delivered to users as some form of byte-code.
See above.
Brendan wrote:
  • Various optimisations are expensive (e.g. even for fundamental things like register allocation finding the ideal solution is prohibitively expensive); and JIT compiling leads to a run-time compromise between the expense of performing the optimisation and the benefits of performing the optimisation. An ahead of time compiler has no such compromise and therefore can use much more expensive optimisations and can optimise better (especially if it's able to optimise for the specific hardware/CPUs).
There's no compromise. The environment can decide when to use JIT and when to use AOT.
Brendan wrote:
  • There are massive problems with the tool-chains for popular unmanaged languages (e.g. C and C++) that prevent effective optimisation (specifically; splitting a program into object files and optimising them in isolation prevents a huge number of opportunities, and trying to optimise at link time after important information has been discarded also prevents a huge number of opportunities). Note that this is a restriction of typical tools, not a restriction of the languages or environments.
Well, yes, we need to get rid of unmanaged :)
Brendan wrote:
  • Popular JIT compiled languages are typically able to get close to the performance of popular "compiled to native" unmanaged languages because these "compiled to native" unmanaged languages have both the "not optimised specifically for the specific hardware/CPUs" problem and the "effective optimisation prevented by the tool-chain" problem.
So, the unmanaged sucks despite all your claims above.
Brendan wrote:
  • "Ahead of time" compiling from byte-code to native on the end user's machine (e.g. when the end user installs software) provides portability without causing the performance problems of JIT and without causing the performance problems that popular unmanaged languages have.
AOT is an important part of the managed environment.
Brendan wrote:In other words; the best solution is an unmanaged language that is designed to detect as many bugs as possible during "source to byte code" compiling that does not prevent "unsafe" things (if needed), combined with an ahead of time "byte code to native" compiler on the end user's computer; where the resulting native code is executed in an unmanaged environment with hardware protection.
The best solution is a managed environment with many options available, including JIT, AOT, hardware-protected sessions and, of course, the best ever smart compiler.
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)
embryo2
Member
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Os and implementing good gpu drivers(nvidia)

Post by embryo2 »

Octocontrabass wrote:
embryo2 wrote:What the "authors demonstrated" is a lot of obfuscated talk about how smart they are when creating a very simple client for a very simple protocol.
The complexity of the client and the protocol is irrelevant.
The author describes a lot of details about how he managed to send a set of bytes to the server. Who needs this obfuscation?
Octocontrabass wrote:
  1. Locate an attack vector (a protocol where a serialized object is transmitted from the client to the server before the client has been authenticated)
  2. Generate a payload (e.g. using ysoserial)
  3. Inject the payload as a replacement for one of the serialized objects that would normally be transmitted
  4. The server executes the malicious code in the payload
What's missing?
Well, almost everything.

Between steps 3 and 4 there is a set of actions involved. The server receives the payload. The server invokes a specific handler. The handler invokes the general deserializer. The deserializer looks for the class whose name it found in the payload. The JVM tells the deserializer "there's no such beast". Yes, that will be the response if the attacker has no access to the server's file system. But the author pretends nobody notices such a small problem as his needing access to the server's file system. So the author puts his class in the server's classpath directory. And the JVM returns this class to the deserializer. And the deserializer invokes its state-restoration method. And the author pretends nobody sees the actual way things work here.
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)
User avatar
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Os and implementing good gpu drivers(nvidia)

Post by Brendan »

Hi,
embryo2 wrote:
Brendan wrote:Airbags exists to protect against user error, not to protect against design flaws.
Is the car's speed a design flaw?
It's a design requirement - a car that doesn't move is an ornament and not a car at all.
embryo2 wrote:
Brendan wrote:CPUs do not do "one sequential step at a time".
Yes, my eyes are now opened :)
Brendan wrote:It's this "split everything into tiny pieces and do many pieces in parallel" nature that makes hardware many times faster than software can be.
But your eyes are still closed. Do you see the fact that any parallel operation can be written sequentially, just for your better understanding? Or does your brain go parallel and execute 10 instructions at a time?

My example was about the set of operations required for the algorithm to be implemented. And your answer was about some optimizations I hadn't shown to you. Yes, I skipped the optimized part. But do you think that after the proposed optimization the actual number of actions will be different?
Brendan wrote:What's different is that software is unable to split everything into tiny pieces and do many pieces in parallel. Software is restricted to sequential steps (instructions).
Do you mean restricted by Intel? Then yes, I agree.
No. It's extremely difficult for people to reason about software if/when it's doing many things simultaneously; and so all CPUs (including Intel's and everyone else's) have to emulate "one step at a time" so that they're usable (even if they don't actually do "one step at a time" internally).
embryo2 wrote:
Brendan wrote:In my case, the kernel provides exception handlers to handle it (e.g. by adding info to a log and terminating the process) and there's no additional bloat in the process itself to explicitly handle problems that shouldn't happen.
It doesn't prevent the process from having implicit handling of the "additional bloat" kind. Do you know how electronic gates work? Can you show a magical schematic with the ability to skip a bit-value check, or to set input levels according to a variable's value in memory?
If an exception (e.g. page fault) occurs and the kernel responds by terminating the process immediately; no stupid bloat for handling this condition (or any similar condition) is possible in any process at all.

I don't see how the remainder of your reply (gates? schematics?) relates to what it's replying to.
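Brendan's point about central fault handling has a loose analogue inside a managed runtime (this is only an analogy, not his kernel design): one process-wide handler deals with "shouldn't happen" failures, so no individual code path carries explicit checks for them.

```java
public class FaultDemo {
    public static void main(String[] args) {
        // Central "kernel-like" role: record the fault and let the
        // thread die, instead of scattering checks through the code.
        Thread.setDefaultUncaughtExceptionHandler((t, e) ->
                System.out.println("terminated: " + e.getClass().getSimpleName()));

        String s = null;
        System.out.println(s.length());   // faults here; no local null check
    }
}
```

The faulting line needs no surrounding defensive code, which is exactly the "no bloat in the process itself" property being argued about.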
embryo2 wrote:
Brendan wrote:You mean, the OS written primarily in C, where competent developers use NDK and write apps in C/C++, and where Java is used for portability and not because its "managed"?
You can compare the share of the "competent developers" on the Android market. And of course, you will prefer to blame the majority of developers instead of recognizing the fact that the "competent developers" suck at competing with the majority (which uses Java only).
I very much doubt that we're using the same definition of competence. For things like smartphone app development and web development (which is far worse) the inherent inefficiency means that a person must be willing to sacrifice the quality of the end product for the sake of "rapid turd shovelling". This sacrifice is only possible for people who have a severe lack of pride in their work or are unable to recognise the inefficiencies; and lack of pride in your work and/or an inability to recognise and avoid inefficiency is how I define incompetence.
embryo2 wrote:
Brendan wrote:None of these are web servers - they're all "bloatware" (frameworks, etc) that sit between web developers and web servers. Websphere seems to rely on Apache; WebLogic seems to rely on Apache, jBoss seems to be the combination of Apache and Tomcat, and GlassFish seems to rely on Apache.
Well, if "bloatware" runs the entire planet's operations, then for you it's really better to send all developers to the sun.
If "bloatware" did run the entire planet's operations, then it wouldn't just be better for me but better for everyone (or at least, everyone that isn't sent to the sun). Fortunately bloatware only runs the small part of the planet that (I assume) you're permanently stuck in.
embryo2 wrote:But I should remind you, the Apache web server is used as a front-running load balancer, not as a web server. Sometimes there are even hardware-based load balancers. The current state of Java technology doesn't pay great attention to low-level stuff like socket pooling optimized for a particular OS, so it's easier to use Apache instead of writing OS-level code in Java.
Heh - a web server is "OS level code" now?!?
embryo2 wrote:
Brendan wrote:So, you have no idea how benchmarks like SPECint and SPEC2006 work?
Yes, I have no deep insight into the benchmark's internal kitchen. But I know the tests are selected just to match the typical load, and the typical load (unexpected!) is Intel-based PC load. Here you can see some tests:

Common CPUs
Low End CPUs
Low Mid Range CPUs
High Mid Range CPUs

Have you noticed the lists are full of Intel-compatible processors and there are no other processors at all?
I only saw this list of benchmark results. There isn't a single 80x86 benchmark at all!

You deliberately chose a "PC only" benchmark; and shouldn't be surprised that you get benchmark results for PCs. You also deliberately chose to ignore the benchmarks that the industry uses (like SPECint and SPEC2006) that I mentioned.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Octocontrabass
Member
Posts: 5521
Joined: Mon Mar 25, 2013 7:01 pm

Re: Os and implementing good gpu drivers(nvidia)

Post by Octocontrabass »

embryo2 wrote:The author describes a lot of details about how he managed to send a set of bytes to the server. Who needs this obfuscation?
You don't have to read the entire page from start to finish. You can skip to the parts that describe the exploit (and the links where the exploit is described in further detail).
embryo2 wrote:Between steps 3 and 4 there is a set of actions involved. The server receives the payload. The server invokes a specific handler. The handler invokes the general deserializer. The deserializer looks for the class whose name it found in the payload. The JVM tells the deserializer "there's no such beast". Yes, that will be the response if the attacker has no access to the server's file system. But the author pretends nobody notices such a small problem as his needing access to the server's file system. So the author puts his class in the server's classpath directory. And the JVM returns this class to the deserializer. And the deserializer invokes its state-restoration method. And the author pretends nobody sees the actual way things work here.
All of the classes used in the exploit are part of the application being exploited. More specifically, they are all part of commons-collections, which is a component of all of the exploited applications. The remote attacker needs no access to the server's filesystem, because the vulnerable code is already there.
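The mechanism Octocontrabass describes can be sketched in a few lines. `Gadget` below is a hypothetical stand-in for a class already on the server's classpath (in the real exploit, commons-collections classes play this role); the point is that `ObjectInputStream.readObject()` runs the class's own deserialization hook before any application-level validation can happen:

```java
import java.io.*;

// Hypothetical stand-in for a "gadget" class already on the classpath.
class Gadget implements Serializable {
    private static final long serialVersionUID = 1L;
    static boolean sideEffectRan = false;

    // Called automatically by the deserializer, not by the application.
    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        // In a real gadget chain, attacker-controlled behaviour fires
        // here; we only record that code ran.
        sideEffectRan = true;
    }
}

public class DeserDemo {
    public static void main(String[] args) throws Exception {
        // What the attacker does on their own machine:
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buf);
        out.writeObject(new Gadget());
        out.flush();

        // What the vulnerable server does with the received bytes:
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()));
        in.readObject();

        System.out.println("readObject ran: " + Gadget.sideEffectRan);  // true
    }
}
```

No file needs to be planted on the server: the attacker only supplies bytes naming classes the server already has, which is why the classpath objection above misses the point.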
User avatar
Schol-R-LEA
Member
Posts: 1925
Joined: Fri Oct 27, 2006 9:42 am
Location: Athens, GA, USA

Re: Os and implementing good gpu drivers(nvidia)

Post by Schol-R-LEA »

embryo2 wrote:
Brendan wrote:Most bugs (e.g. "printf("Hello Qorld\");") can't be detected by a managed environment, compiler or hardware; and therefore good software engineering practices (e.g. unit tests) are necessary
It's a compile-time-detectable problem.
Wait a moment: are you claiming that the compiler (as opposed to the editor or IDE) should not only incorporate a spell checker for strings, but also be able to detect a spelling error in a string (e.g., "Qorld" instead of "World") while correctly discriminating unusual spellings of, for example, proper names of objects (e.g., "Unable to open Flash drive 'myLittlePNY', please check drive"), and meaningfully indicate an error for the first but not the second? Human beings cannot do that consistently; I cannot see how a compiler would be able to. Indeed, in the first case there is no way of knowing whether the coder intended to write "Qorld" rather than "World" (e.g., in the case where "Qorld" is a user's handle) without consulting the coder in some manner ahead of time. Even if it were possible to indicate that in one place (for argument's sake, let's say there is a user-defined dictionary extension, as is common for spell checkers), then in any instance where more than one match is possible (for example, the admittedly silly statement "Qorld rocks your World") either there would be a one-time exception for ignoring the first case but not the second, which requires coder intervention, or both would be ignored, in which case, had the user written "Qorld rocks your Qorld" with the second being a typo, the error would be missed. While detecting such edge cases and providing a warning and/or an opportunity for coder intervention may even make sense, it still would not be the compiler catching the error, per se.

The point isn't that spelling is a particularly difficult issue; that's just one example of the several cases where purely automated error checking, absent contextual information, cannot be performed. Unit testing, visual code inspection, and other basic software-engineering defense-in-depth would still be needed to fully qualify the program, and even then the possibility of an unchecked error getting through is non-zero.
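The defense-in-depth point can be shown in a few lines: the typo compiles cleanly, yet even the most trivial unit test (sketched here with a plain check rather than a test framework) pins the expected output and catches what no compiler could.

```java
public class GreetingTest {
    static String greeting() {
        return "Hello Qorld";   // the typo; the compiler sees nothing wrong
    }

    public static void main(String[] args) {
        // A test encodes the contextual knowledge ("World" was intended)
        // that the compiler cannot have:
        String got = greeting();
        if (got.equals("Hello World")) {
            System.out.println("PASS");
        } else {
            System.out.println("FAIL: expected \"Hello World\", got \"" + got + "\"");
        }
    }
}
```

The test only works because a human supplied the intended string; that human judgment is exactly the "contextual information" the paragraph above says automation lacks.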
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.
User avatar
Schol-R-LEA
Member
Posts: 1925
Joined: Fri Oct 27, 2006 9:42 am
Location: Athens, GA, USA

Re: Os and implementing good gpu drivers(nvidia)

Post by Schol-R-LEA »

Brendan wrote:
embryo2 wrote:
Brendan wrote:So, you have no idea how benchmarks like SPECint and SPEC2006 work?
Yes, I have no deep insight into the benchmark's internal kitchen. But I know the tests are selected just to match the typical load, and the typical load (unexpected!) is Intel-based PC load. Here you can see some tests:

Common CPUs
Low End CPUs
Low Mid Range CPUs
High Mid Range CPUs

Have you noticed the lists are full of Intel-compatible processors and there are no other processors at all?
I only saw this list of benchmark results. There isn't a single 80x86 benchmark at all!

You deliberately chose a "PC only" benchmark; and shouldn't be surprised that you get benchmark results for PCs. You also deliberately chose to ignore the benchmarks that the industry uses (like SPECint and SPEC2006) that I mentioned.
Actually, it is rather worse than that; the site Embryo linked to is one selling a specific, proprietary 'benchmark' of unknown properties, and what it presents is not the result of a valid benchmark suite at all. As such, the page exists solely as advertising, and nothing on it can be trusted.

To be fair, I suspect Embryo simply chose the page as the first one found in a Google search on the phrase 'CPU benchmark', without looking critically at what was actually presented or understanding the nuances of benchmarking (by which I mean the need for transparency in both the benchmark algorithms and the systems being tested, the recognition of possible subtle biases in a given suite, the problem of unscrupulous developers hiding benchmark-oriented performance tweaks in both hardware and software, etc.).
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.
Post Reply