Re: Os and implementing good gpu drivers(nvidia)
Posted: Tue Nov 10, 2015 1:08 pm
Hi,
embryo2 wrote:
> Brendan wrote:
> > - Most bugs (e.g. "printf("Hello Qorld\");") can't be detected by a managed environment, compiler or hardware; and therefore good software engineering practices (e.g. unit tests) are necessary
> It's compile time detectable problem.

Surely if any of these bugs could be detected by a compiler; they wouldn't be "bugs that can't be detected by a managed environment, compiler or hardware" in the first place.

Please explain how it's possible for "bugs that can't be detected by a managed environment, compiler or hardware" to be detected by a compiler.
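To make "can't be detected" concrete, here's a minimal sketch (the function, names and numbers are invented for illustration): both bugs below compile cleanly with no warnings, because no compiler, managed environment or hardware check can know what the programmer meant; only a test can.

[code]
#include <stdio.h>

/* Hypothetical example: the string typo ("Qorld") and the wrong operator
 * ('-' instead of '+') are pure logic errors, so the compiler accepts the
 * code without complaint; only a unit test comparing against the intended
 * result would catch them. */
static int add_sales_tax(int price_cents, int tax_cents)
{
    return price_cents - tax_cents;   /* bug: should be price_cents + tax_cents */
}

int main(void)
{
    printf("Hello Qorld\n");                        /* bug: typo, but a valid string */
    printf("total = %d\n", add_sales_tax(1000, 80));
    return 0;
}
[/code]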
embryo2 wrote:
> Brendan wrote:
> > - Languages and compilers can be designed to detect a lot more bugs during "ahead of time" compiling; and the design of languages like C and C++ prevents compilers for these languages from being good at detecting bugs during "ahead of time" compiling, but this is a characteristic of the languages and not a characteristic imposed by "unmanaged", and unmanaged languages do exist that are far better (but not necessarily ideal) at detecting/preventing bugs during "ahead of time" compiling (e.g. Rust).
> Ahead of time doesn't solve the problem of runtime bugs. And security also can be compromised.

Ahead of time compiling doesn't solve the problem of run-time bugs in isolation; but I was not suggesting that ahead of time compiling should be used in isolation.
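As an illustrative sketch of the kind of bug that better language design catches ahead of time (this example is mine, not from the post): the C code below has historically compiled without complaint (newer compilers may warn), while a language whose type system tracks ownership, such as Rust, rejects the equivalent program before it ever runs.

[code]
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Use-after-free: the pointer is read after the memory was released.
 * Nothing about C's design forces the compiler to notice this; an
 * ownership-aware language can reject it at compile time. */
int main(void)
{
    char *name = malloc(16);
    if (name == NULL)
        return 1;
    strcpy(name, "Brendan");
    free(name);
    printf("%s\n", name);   /* bug: undefined behaviour, use after free */
    return 0;
}
[/code]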
embryo2 wrote:
> Brendan wrote:
> > - Bugs in everything; including "ahead of time" compilers, JIT compilers, kernels and hardware itself; all mean that hardware protection (designed to protect processes from each other, and to protect the kernel from processes) is necessary when security is needed (or necessary for everything, except extremely rare cases like embedded systems and games consoles where software can't modify anything that is persisted and there's no networking)
> Hardware protection requires time, power and silicon. Software protection can require less time, power and silicon.

Software protection requires... hardware that's able to execute software (which requires time, power and silicon).
embryo2 wrote:
> Brendan wrote:
> > - The combination of good software engineering practices, well designed language and hardware protection means that the benefit of performing additional checks in software at run-time (a managed environment) is near zero even when the run-time checking is both exhaustive and perfect, because everything else detects or would detect the vast majority of bugs anyway.
> The proposed combination is too far from achieving stated goal of "near zero benefits" of runtime checks.

Why (was there a flaw in my reasoning that you haven't mentioned)?
embryo2 wrote:
> Brendan wrote:
> > - "Exhaustive and perfect" is virtually impossible; which means that the benefit of performing additional checks in software at run-time (a managed environment) is less than "near zero" in practice, and far more likely to be negative (in that the managed environment is more likely to introduce bugs of its own than to find bugs)
> It's negative until more smart compilers are released. It's only matter of time (not so great time).

Um, what? If a smart compiler was able to guarantee there are no bugs in the managed environment itself; then a smart compiler could guarantee there are no bugs in normal applications too (which would make a managed environment pointless).
embryo2 wrote:
> Brendan wrote:
> > - The "near zero or worse" benefits of managed environments do not justify the increased overhead caused by performing additional checks in software at run-time
> Safeness and security justify the increase.

Zero additional safety and zero additional security doesn't even justify a lollipop.
embryo2 wrote:
> Brendan wrote:
> > - Where performance is irrelevant (specifically, during testing done before software is released) managed environments may be beneficial; but this never applies to released software.
> It applies to released software also because the issues of safeness and security are still important.

No; if you release software that has safeness and security problems then you've already failed; and you should probably get a job as a web developer instead of working on useful software (so that people know you qualify when they're preparing that amazing vacation to the centre of the sun).
embryo2 wrote:
> Brendan wrote:
> > - Languages that are restricted for the purpose of allowing additional checks in software at run-time to be performed ("managed languages"); including things like not allowing raw pointers, not allowing assembly language, not allowing explicit memory management, not allowing self modifying code and/or not allowing dynamic code generation; prevent software from being as efficient as possible
> If the efficiency is of a paramount importance we can buy trusted sources of the efficient software and because of the nature of trust we can safely tell the managed environment to compile the code without safety checks and with the attention to the developer's performance related annotations. Next it runs the code under hardware protection. And next after we have tested some software usage patterns we can safely remove even hardware protection for every tested pattern and obtain even better performance.

Basically; in an ill-fated attempt at showing that "managed" isn't a pathetic joke, you suggest using "unmanaged" as an alternative?
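For a concrete example of "raw pointers and explicit memory management" (a sketch of mine, not from the post): a trivial bump allocator like the one below relies on pointer arithmetic over a plain byte buffer, which is exactly the kind of code most managed languages refuse to let you write.

[code]
#include <stdio.h>
#include <stdint.h>

/* A fixed arena with bump allocation: allocation is a pointer increment,
 * there is no per-object free, and the whole arena is reused at once. */
static uint8_t arena[4096];
static size_t  arena_used;

static void *arena_alloc(size_t size)
{
    size = (size + 15) & ~(size_t)15;            /* keep 16-byte alignment */
    if (arena_used + size > sizeof(arena))
        return NULL;
    void *p = &arena[arena_used];
    arena_used += size;
    return p;
}

int main(void)
{
    int *numbers = arena_alloc(8 * sizeof(int));
    if (numbers == NULL)
        return 1;
    for (int i = 0; i < 8; i++)
        numbers[i] = i * i;
    printf("%d %d\n", numbers[3], numbers[7]);   /* prints "9 49" */
    return 0;                                    /* nothing to free: the whole arena dies at once */
}
[/code]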
embryo2 wrote:
> Brendan wrote:
> > - Software written in a managed language but executed in an unmanaged language (without the overhead of run-time checking) is also prevented from being as efficient as possible by the restrictions imposed by the managed language
> Restrictions can be circumvented by the means described above.

Security measures can be circumvented by the means described above? Nice...
embryo2 wrote:
> Brendan wrote:
> > - General purpose code can not be designed for a specific purpose by definition; and therefore can not be optimal for any specific purpose. This affects libraries for both managed languages and unmanaged languages alike.
> Is the integer addition (x+y) operation a general purpose one? Is it implemented inefficiently in case of JIT?

How many libraries does Java provide to do integer addition (x+y)? Are there more than 4 of these libraries?
embryo2 wrote:
> Brendan wrote:
> > - Large libraries and/or frameworks improve development time by sacrificing the quality of the end product (because general purpose code can not be designed for a specific purpose by definition).
> Here is the place for aggressive inlining and other similar technics. But the code should be in a compatible form, like bytecode.

I'm not talking about trivial/common optimisations that all compilers do anyway; I'm talking about things like (e.g.) choosing insertion sort because you know that for your specific case the data is always "nearly sorted" beforehand (and not just using a generic quicksort that's worse for your special case just because that's what a general purpose library happened to shove in your face).
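A minimal sketch of that special-case choice (the data and names are invented): insertion sort does almost no work when every element is close to its final position, which is a property only the application can know about, and one that a general purpose library sort can't assume.

[code]
#include <stdio.h>

/* Insertion sort: the cost is roughly proportional to how far elements are
 * from their sorted positions, so it's nearly linear for "nearly sorted"
 * input, while a generic quicksort pays its full overhead regardless. */
static void insertion_sort(int *a, int n)
{
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}

int main(void)
{
    int timestamps[] = { 10, 11, 13, 12, 14, 16, 15, 17 };   /* almost sorted already */
    int n = (int)(sizeof(timestamps) / sizeof(timestamps[0]));

    insertion_sort(timestamps, n);
    for (int i = 0; i < n; i++)
        printf("%d ", timestamps[i]);
    printf("\n");
    return 0;
}
[/code]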
embryo2 wrote:
> Brendan wrote:
> > - For most (not all) things that libraries are used for; for both managed and unmanaged languages the programmer has the option of ignoring the library's general purpose code and writing code specifically for their specific case. For managed languages libraries are often native code (to avoid the overhead of "managed", which is likely the reason managed languages tend to come with massive libraries/frameworks) and if a programmer chooses to write the code themselves they begin with a huge disadvantage (they can't avoid the overhead of "managed" like the library did) and their special purpose code will probably never beat the general purpose native code. For an unmanaged language the programmer can choose to write the code themselves (and avoid sacrificing performance for the sake of developer time) without that huge disadvantage.
> If the performance is important and the environment's compiler is still too weak and there's some mechanism of trust between a developer and a user, then the developer is perfectly free to implement any possible optimization tricks.

I'm glad that you agree that "managed" is useless and we should all use "unmanaged" (and the optimisation tricks it makes possible) for anything important.
embryo2 wrote:
> Brendan wrote:
> > - To achieve optimal performance and reduce "programmer error"; a programmer has to know what effect their code actually has at the lowest levels (e.g. what their code actually asks the CPU/s to do). Higher level languages make it harder for programmers to know what effect their code has at the lowest levels; and are therefore a barrier preventing both performance and correctness. This applies to managed and unmanaged languages alike. Note: as a general rule of thumb; if you're not able to accurately estimate "cache lines touched" without compiling, executing or profiling; then you're not adequately aware of what your code does at the lower levels.
> If a developer faces some bottleneck and it's important then he usually digs deep enough to find the root cause. So, all your "harder for programmer to know" is for beginners only.

If a developer faces some bottleneck and it's important, then he usually digs deep enough to find that the root cause is "managed".
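To illustrate that rule of thumb (the struct and the numbers are invented for the example): with 64-byte cache lines, "cache lines touched" can be estimated from the data layout alone, without compiling, executing or profiling anything.

[code]
#include <stdint.h>
#include <stddef.h>

/* Hypothetical 64-byte record: with 64-byte cache lines, each record
 * occupies exactly one line. */
struct account {
    uint64_t id;
    uint64_t balance;
    uint8_t  padding[48];
};

/* Reads only the 8-byte 'balance' field of each record. Estimate on paper:
 * records are 64 bytes apart, so every record sits in its own cache line;
 * summing 1000 records touches ~1000 lines (~64 KiB) even though only
 * 8000 bytes of balances are needed. Storing the balances in their own
 * packed array would touch ~125 lines (8000 / 64) instead. */
uint64_t total_balance(const struct account *accounts, size_t count)
{
    uint64_t total = 0;
    for (size_t i = 0; i < count; i++)
        total += accounts[i].balance;
    return total;
}

int main(void)
{
    static struct account accounts[1000];          /* zero-initialised */
    return (int)total_balance(accounts, 1000);     /* 0 for zeroed data */
}
[/code]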
embryo2 wrote:
> Brendan wrote:
> > - The fact that higher level languages are a barrier preventing both performance and correctness is only partially mitigated through the use of highly tuned ("optimised for no specific case") libraries.
> Optimized libraries aren't the only way. The developer experience is much preferable solution.

I don't know if you mean "the developer's experience level" (e.g. how skilled they are) or "the developer experience" (e.g. whether they have nice tools/IDE, a pair of good/large monitors and a comfortable chair); and for both of these possibilities I don't know how it helps them understand things that higher level languages are deliberately designed to abstract.
embryo2 wrote:
> Brendan wrote:
> > - Portability is almost always desirable
> So, just use bytecode.

I have 2 computers and both are 64-bit 80x86.
embryo2 wrote:
> Brendan wrote:
> > - Source code portability (traditionally used by languages like C and C++) causes copyright concerns for anything not intended as open source, which makes it a "less preferable" way to achieve portability for a large number of developers. To work around this, developers of "not open source" software provide pre-compiled native executables. Pre-compiled native executables can't be optimised specifically for the end user's hardware/CPUs unless the developer provides thousands of versions of the pre-compiled native executables, which is extremely impractical. The end result is that users end up with poorly optimised software.
> Copyright concerns can be avoided using downloadable software. Just select your platform and get the best performance. But the trust should exist there. So, any copyrighter now can exploit user's inability to protect themselves, but in case of managed the environment takes care of using hardware protection or even emulating the hardware to detect potential threat.

One has 2.8 GHz CPUs with 1300 MHz RAM, and for this combination the ideal prefetch scheduling distance is 160 cycles. The other has 3.5 GHz CPUs with 1600 MHz RAM, and for this combination the ideal prefetch scheduling distance is 200 cycles. Where do I download pre-compiled software that was optimised for each computer's prefetch scheduling distance?

One has 2 physical chips (and NUMA) with 4 cores per chip and hyperthreading (16 logical CPUs total) and 12 GiB of RAM (6 GiB per NUMA domain). The other has a single physical quad-core chip (8 logical CPUs total) and 32 GiB of RAM without NUMA. Where do I download pre-compiled software that was optimised for each computer's prefetch scheduling distance and the differences in memory subsystems, number of NUMA domains, chips, cores, etc?

One supports AVX2.0 and the other doesn't support AVX at all. Where do I download pre-compiled software that was optimised for each computer's prefetch scheduling distance, and the differences in memory subsystems (and number of NUMA domains, chips, cores, etc), and which SIMD extensions are/aren't supported?
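As a rough sketch of what "tuned for the prefetch scheduling distance" means in code (the constant and the loop are mine; __builtin_prefetch is a GCC/Clang extension): the right distance depends on CPU clock, RAM latency and the work done per iteration, so the value that's ideal on one of the machines above is wrong for the other, and a single pre-compiled binary can only bake in one guess.

[code]
#include <stddef.h>

/* How many elements ahead to prefetch: a per-machine tuning constant,
 * derived from the CPU/RAM combination (the "160 cycles" vs "200 cycles"
 * figures above), not something one binary fits all. */
#define PREFETCH_DISTANCE 16

double sum_array(const double *data, size_t count)
{
    double total = 0.0;
    for (size_t i = 0; i < count; i++) {
        if (i + PREFETCH_DISTANCE < count)
            __builtin_prefetch(&data[i + PREFETCH_DISTANCE], 0, 0);   /* hint: read, low temporal locality */
        total += data[i];
    }
    return total;
}

int main(void)
{
    double data[1024];
    for (int i = 0; i < 1024; i++)
        data[i] = 1.0;
    return sum_array(data, 1024) == 1024.0 ? 0 : 1;
}
[/code]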
embryo2 wrote:
> Brendan wrote:
> > - Various optimisations are expensive (e.g. even for fundamental things like register allocation, finding the ideal solution is prohibitively expensive); and JIT compiling leads to a run-time compromise between the expense of performing the optimisation and the benefits of performing the optimisation. An ahead of time compiler has no such compromise and therefore can use much more expensive optimisations and can optimise better (especially if it's able to optimise for the specific hardware/CPUs).
> There's no compromise. The environment can decide when to use JIT or AOT.

Sure - while the software is running and being JIT compiled, the environment decides "Oh, I should use AOT for this next part", travels backwards in time until 5 minutes before the software started running, does ahead of time compiling, then travels forward in time and switches to the AOT-compiled code. I have no idea why this hasn't been implemented before!
embryo2 wrote:
> Brendan wrote:
> > - There are massive problems with the tool-chains for popular unmanaged languages (e.g. C and C++) that prevent effective optimisation (specifically; splitting a program into object files and optimising them in isolation prevents a huge number of opportunities, and trying to optimise at link time after important information has been discarded also prevents a huge number of opportunities). Note that this is a restriction of typical tools, not a restriction of the languages or environments.
> Well, yes, we need to get rid of unmanaged

We need to get rid of C and C++ because they make unmanaged seem far worse than it should (for multiple reasons).
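A tiny sketch of the "optimised in isolation" problem (file and function names invented): compiled separately in the traditional way, the compiler working on main.c only sees a declaration of clamp_percent(), so it has to emit a real call even though the whole thing could be folded to a constant; link-time optimisation recovers some, but not all, of what was thrown away.

[code]
/* clamp.c -- compiled on its own to clamp.o */
int clamp_percent(int value)
{
    if (value < 0)   return 0;
    if (value > 100) return 100;
    return value;
}

/* main.c -- compiled on its own to main.o; the compiler sees only the
 * declaration below, so it cannot inline the call or fold
 * clamp_percent(150) down to the constant 100. */
int clamp_percent(int value);

int main(void)
{
    return clamp_percent(150);
}
[/code]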
embryo2 wrote:
> Brendan wrote:
> > - Popular JIT compiled languages are typically able to get close to the performance of popular "compiled to native" unmanaged languages because these "compiled to native" unmanaged languages have both the "not optimised specifically for the specific hardware/CPUs" problem and the "effective optimisation prevented by the tool-chain" problem.
> So, the unmanaged sucks despite all your claims above.

Most existing unmanaged languages suck, but their problems have nothing to do with "unmanaged" and are relatively easy to fix or avoid. Most managed languages also suck, but their problems have everything to do with "managed" and can't be fixed or avoided.
embryo2 wrote:
> Brendan wrote:
> > - "Ahead of time" compiling from byte-code to native on the end user's machine (e.g. when the end user installs software) provides portability without causing the performance problems of JIT and without causing the performance problems that popular unmanaged languages have.
> AOT is important part of the managed environment.

AOT may or may not be an important part of a managed environment; but this has nothing to do with using 2 AOT compilers (one before the software is deployed and the other after/while the software is installed by the end user) to solve the portability, performance/optimisation and copyright problems of traditional unmanaged toolchains.
embryo2 wrote:
> Brendan wrote:
> > In other words; the best solution is an unmanaged language that is designed to detect as many bugs as possible during "source to byte code" compiling, that does not prevent "unsafe" things (if needed), combined with an ahead of time "byte code to native" compiler on the end user's computer; where the resulting native code is executed in an unmanaged environment with hardware protection.
> The best solution is managed environment with many options available including JIT, AOT, hardware protected sessions and of course - the best ever smart compiler.

Can you back this up with logical reasoning?
Cheers,
Brendan