What do you think about managed code and OSes written in it?

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: What do you think about managed code and OSes written in

Post by Rusky »

Brendan wrote:Do you think developers will be able to write code in C faster with the new C compiler?
If nothing else, the error messages would improve developers' ability to fix problems when they do come up. Standard C's behavior is undefined when dereferencing bad pointers, going outside array bounds (even just calculating the address!), etc. This does sometimes lead to a simple segfault that's just as easy to fix as a NullPointerException, but it also often leads to odd, unexpected behavior that's hard to track down, either because the optimizer made assumptions the program violated or because memory got corrupted and the program kept running.
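As a hedged illustration of how a program "keeps running with corrupted state" (the helper name here is hypothetical; signed overflow is undefined in C, so this sketch uses unsigned arithmetic, which is defined to wrap):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: a size computation that silently wraps.
 * count * elem_size can exceed 32 bits; the product wraps modulo 2^32,
 * and any later allocation or bounds check based on it is then wrong,
 * yet the program keeps running -- the hard-to-track-down case above. */
static int alloc_size_ok(uint32_t count, uint32_t elem_size, uint32_t *total)
{
    uint32_t n = count * elem_size;           /* may wrap silently */
    if (elem_size != 0 && n / elem_size != count)
        return 0;                             /* wrap detected: refuse */
    *total = n;
    return 1;
}
```

The division-based guard is one common way to catch the wrap before it turns into memory corruption.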

So on that note,
Brendan wrote:When software crashes, do you think end users will be glad that the problem was detected by software and not hardware?
I know I'm glad when the software detects errors like that rather than letting the bug turn into a security vulnerability. In many cases, this overhead is extremely worthwhile.

And no, you can't solve this problem without the "overhead" just by adding things like compile-time integer ranges to the language, because that's just another way to enforce that those checks are there- most of those checks have to happen at some point and it's just a question of what direction you come from when adding them.
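To sketch the "what direction you come from" point in C (a minimal, hypothetical example): the range check still exists either way, it is just hoisted to the boundary instead of repeated on every access, which is exactly what a compile-time ranged-integer type would force you to do.

```c
#include <assert.h>
#include <stddef.h>

#define N 16

/* Hypothetical sketch: validate `count` once at the boundary, so the
 * loop body can run unchecked.  The check was not removed -- it moved. */
static int sum_first(const int *arr, size_t count, int *out)
{
    if (count > N)                 /* single boundary check */
        return -1;
    int s = 0;
    for (size_t i = 0; i < count; i++)
        s += arr[i];               /* no per-access check needed here */
    *out = s;
    return 0;
}
```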
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: What do you think about managed code and OSes written in

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:Do you think developers will be able to write code in C faster with the new C compiler?
If nothing else, the error messages would improve developers' ability to fix problems when they do come up. Standard C's behavior is undefined when dereferencing bad pointers, going outside array bounds (even just calculating the address!), etc. This does sometimes lead to a simple segfault that's just as easy to fix as a NullPointerException, but it also often leads to odd, unexpected behavior that's hard to track down, either because the optimizer made assumptions the program violated or because memory got corrupted and the program kept running.
Yes, but most of the problems in C can be fixed with language changes and compile time checks (and without additional run-time overhead).
Rusky wrote:So on that note,
Brendan wrote:When software crashes, do you think end users will be glad that the problem was detected by software and not hardware?
I know I'm glad when the software detects errors like that rather than letting the bug turn into a security vulnerability. In many cases, this overhead is extremely worthwhile.

And no, you can't solve this problem without the "overhead" just by adding things like compile-time integer ranges to the language, because that's just another way to enforce that those checks are there- most of those checks have to happen at some point and it's just a question of what direction you come from when adding them.
But you can have no overhead and no checks (other than the CPU's checks).


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

Re: What do you think about managed code and OSes written in

Post by Rusky »

If you want to safely check array bounds, there's going to be a lot of runtime checks no matter how much analysis the compiler does- you're going to get runtime errors whether the programmer writes them, the compiler inserts them, or the CPU checks them, so the question is really not whether the language is "optimistic" or "pessimistic," it's whether the language lets errors cause bigger problems or whether it just halts the program with a clear error message (or way to get one).

The performance problems with managed languages are due far more to interpreting/JIT, poor memory layout, and excessive boxing than anything to do with runtime checks. A statically compiled language that still does runtime checks when necessary for safety but gives you more control over how to manage memory, like Go (still garbage collected) or Rust (compiler-enforced pointer safety) can have performance a lot closer to an unmanaged language than something like Java or C#.
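As a rough sketch of the kind of check a managed runtime (or a careful C programmer) inserts on every indexed access — the names are hypothetical, and real runtimes emit this in generated code rather than as a library:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical slice type: every access either succeeds or fails with
 * a clear, recoverable error -- never a silent out-of-bounds read. */
typedef struct {
    const int *data;
    size_t     len;
} slice;

static bool slice_get(slice s, size_t i, int *out)
{
    if (i >= s.len)
        return false;       /* clear error instead of undefined behaviour */
    *out = s.data[i];
    return true;
}
```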
charsleysa
Posts: 6
Joined: Wed Jan 21, 2015 4:19 pm

Re: What do you think about managed code and OSes written in

Post by charsleysa »

Rusky wrote:If you want to safely check array bounds, there's going to be a lot of runtime checks no matter how much analysis the compiler does- you're going to get runtime errors whether the programmer writes them, the compiler inserts them, or the CPU checks them, so the question is really not whether the language is "optimistic" or "pessimistic," it's whether the language lets errors cause bigger problems or whether it just halts the program with a clear error message (or way to get one).

The performance problems with managed languages are due far more to interpreting/JIT, poor memory layout, and excessive boxing than anything to do with runtime checks. A statically compiled language that still does runtime checks when necessary for safety but gives you more control over how to manage memory, like Go (still garbage collected) or Rust (compiler-enforced pointer safety) can have performance a lot closer to an unmanaged language than something like Java or C#.
C# can achieve native performance; it's all about the compilers and the use cases. The current compilers use JIT, which does have a performance penalty, but using AOT that penalty can be negated, and with compiler extensions you can give control over memory management.

The performance penalties from runtime checks are so small that the tradeoff is worth it for overall system stability and developer efficiency.
embryo

Re: What do you think about managed code and OSes written in

Post by embryo »

Brendan wrote:Let's write a C compiler. Let's write a C compiler that inserts run-time checks everywhere a pointer is used (and sends a SIGSEGV if it detects the pointer wasn't valid). Let's write a C compiler that generates code to track things like array sizes and insert checks to detect "array index out of bounds". Let's write a C compiler that inserts additional "is divisor zero?" checks before every division (and sends SIGFPE).
Compiler-provided checks do not define a managed environment. Such an environment is managed in the sense that a developer can forget about managing many things. Array bounds and null pointers are just one part of the management employed by a managed solution. Another part of that management is, for example, a cross-platform standard for the language. A developer has no need to read up on the differences his language has across platforms; there is no need to study all the toolchains for different OSes and architectures, and no need to relearn things like windowing or file access in a very different fashion when the platform changes. This learning experience can add some value to a developer's background, but it always takes a lot of time. A managed solution delivers this time back to the developer almost for free, or exchanges it for some overhead that is very often absolutely insignificant.
Brendan wrote:When software crashes, do you think end users will be glad that the problem was detected by software and not hardware? Do you think developers will be able to write code in C faster with the new C compiler? Do you think the additional overhead would be worthwhile?
If a problem is detected by managed software, then it is guaranteed that the software can still perform every action except the failed one. For example, if a Swing action handler throws a NullPointerException, the whole application still works as expected except for the one action handler where the developer made a mistake. The user just stops using that action, and more than 99% of the software's functionality is still absolutely usable. Next, if the developer pays attention to software quality, he just reads the application's problem logs periodically and can find the exact line of code where the NullPointerException was thrown. And then it's just a matter of a few minutes to add the required checks. In more complex situations it is possible to track down the root cause of the NullPointerException and avoid the checking overhead. And if we compare this situation with the improved C compiler, then it becomes obvious that the answer to the question "Do you think developers will be able to write code in C faster with the new C compiler?" is no. The lack of a whole managed environment, with its reliability and its stack traces stripped of platform-dependent complexities, prevents a C-based solution from achieving developer productivity comparable with managed solutions. And the same goes for the user experience with managed vs unmanaged solutions.

And of course, it is possible to close the gap between a managed and an unmanaged solution, but as a result we will have just another managed solution instead of an "improved" unmanaged one.
Brendan wrote:The problem is that we're detecting problems at run-time.
No. We are trying to kill every problem just before it is born. And we definitely have some success along this way, in contrast with unmanaged solutions.
Brendan wrote:but what if we actually were able to guarantee that there are no problems left to detect at run-time (e.g. by guaranteeing that all possible problems will be detected during "ahead of time" compiling)? In that case (at least in theory) developers would be able to develop software faster and there wouldn't be any run-time overhead either; however there also wouldn't be any difference between "managed" and "unmanaged"
Problem hunting is just one part of the saga. Another part is about freeing the developer from tedious work like memory management, endianness issues, or differing type widths. In unmanaged solutions those boring tasks also present a stability and correctness threat, but the tediousness of the tasks is also very visible, and it is the unmanaged solution that insists on the boredom for the developer.
Brendan wrote:You can used a managed language for both security/isolation and correctness, but security/isolation and correctness are still 2 different things.
Ok, they are different. But they should be considered in a context where it is hard to decouple one from the other. And the synergistic effect of fighting both problems together is much greater than if we fought them one after another.
Brendan wrote:Unmanaged code is "optimistic" - it assumes the code is correct (regardless of whether it is or not). Managed code is "pessimistic" - it assumes code is not correct and then penalises performance (regardless of whether code is correct or not).
Maybe. But we also shouldn't forget about the developer's boredom.
Brendan wrote:If you write some code and compile it with 2 different compilers (one that produces native/unmanaged code and another that produces managed code); then the amount of work it took to write the code is identical regardless of which compiler you use and regardless of whether the code ends up as managed or unmanaged.
There are learning and debugging costs that you have missed in your equation. Also, there are differences in the languages involved. A managed language influences its compiler a lot, while an unmanaged one does so to a much lesser extent. So the amount of work is not identical.
Brendan wrote:Do you execute all C/C++ code inside a "managed" virtual machine environment like valgrind where software is used to detect run-time problems and not hardware?
I suppose it is the future of programming.

Re: What do you think about managed code and OSes written in

Post by embryo »

charsleysa wrote:C# can achieve native performance, it's all about the compilers and the use cases.
Some use cases still force a JIT to deliver worse performance than a C compiler. For example, a C compiler uses XMM registers for some tasks while the Java JIT doesn't. Unfortunately Java is not an OS-wide environment, and that prevents the JIT from achieving performance comparable with AOT compilers. But managed OSes can help here and provide even better performance than C compilers can deliver.
charsleysa wrote:The performance penalties from runtime checks are so small that the tradeoff is worth it for overall system stability and developer efficiency.
Pure Java solutions, even those running on the most modern JVM, can't compete with a C compiler that uses XMM registers. Here are some test results, but unfortunately there is no comparison with C compilers. However, my experience shows that C compilers with XMM register usage enabled can achieve performance that is only about 10-20% worse than the demonstrated hand-made assembly solution.
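For reference, the kind of loop in question is nothing exotic. A sketch like the following is what optimising C compilers commonly auto-vectorise into XMM (SSE) registers at -O2/-O3 on x86, though whether that actually happens depends on the compiler, flags and target:

```c
#include <assert.h>

/* A plain dot product: the classic candidate for auto-vectorisation.
 * The C source says nothing about XMM registers; the compiler may
 * choose to use them, which is the performance gap discussed above. */
static float dot(const float *a, const float *b, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}
```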

Re: What do you think about managed code and OSes written in

Post by charsleysa »

embryo wrote:
charsleysa wrote:C# can achieve native performance, it's all about the compilers and the use cases.
Some use cases still force a JIT to deliver worse performance than a C compiler. For example, a C compiler uses XMM registers for some tasks while the Java JIT doesn't. Unfortunately Java is not an OS-wide environment, and that prevents the JIT from achieving performance comparable with AOT compilers. But managed OSes can help here and provide even better performance than C compilers can deliver.
charsleysa wrote:The performance penalties from runtime checks are so small that the tradeoff is worth it for overall system stability and developer efficiency.
Pure Java solutions, even those running on the most modern JVM, can't compete with a C compiler that uses XMM registers. Here are some test results, but unfortunately there is no comparison with C compilers. However, my experience shows that C compilers with XMM register usage enabled can achieve performance that is only about 10-20% worse than the demonstrated hand-made assembly solution.
If you want an example of a managed OS using more than just general-purpose registers, check out the MOSA Project. We've got quite an advanced compiler that we are continuing to improve all the time.

Also, while the standard Java compiler may not use XMM registers, it is possible for a custom JIT compiler to make use of them.

Re: What do you think about managed code and OSes written in

Post by Brendan »

Hi,
embryo wrote:
Brendan wrote:Let's write a C compiler. Let's write a C compiler that inserts run-time checks everywhere a pointer is used (and sends a SIGSEGV if it detects the pointer wasn't valid). Let's write a C compiler that generates code to track things like array sizes and insert checks to detect "array index out of bounds". Let's write a C compiler that inserts additional "is divisor zero?" checks before every division (and sends SIGFPE).
Compiler-provided checks do not define a managed environment. Such an environment is managed in the sense that a developer can forget about managing many things. Array bounds and null pointers are just one part of the management employed by a managed solution. Another part of that management is, for example, a cross-platform standard for the language. A developer has no need to read up on the differences his language has across platforms; there is no need to study all the toolchains for different OSes and architectures, and no need to relearn things like windowing or file access in a very different fashion when the platform changes. This learning experience can add some value to a developer's background, but it always takes a lot of time. A managed solution delivers this time back to the developer almost for free, or exchanges it for some overhead that is very often absolutely insignificant.
None of those things require a managed language and all of them can be done with unmanaged languages. For example, there's no reason you can't compile an unmanaged language down to some sort of portable byte-code and then compile that byte-code to native when the program is installed and/or running; and no reason you can't use (e.g.) something like libraries to hide the differences between platforms.
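As a minimal sketch of the "libraries hide the differences" point (names hypothetical; a real portability layer would wrap Win32/POSIX calls rather than just a path separator):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical portability shim: the platform difference is decided
 * once, here, and callers never see it. */
#if defined(_WIN32)
#define PATH_SEP '\\'
#else
#define PATH_SEP '/'
#endif

static void join_path(char *dst, size_t cap, const char *dir, const char *file)
{
    snprintf(dst, cap, "%s%c%s", dir, PATH_SEP, file);
}
```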

Mostly; you're finding "desirable attributes" (easier bug finding, faster development, more portability, more standard APIs, etc) and pretending they only apply to managed languages when all of them have nothing to do with managed vs. unmanaged whatsoever.

If I take 80x86 assembly and run it inside a managed environment (e.g. some sort of virtual machine, possibly using JIT, and possibly even on a very different CPU like ARM) then it is "managed" but none of the things you're incorrectly attributing to "managed" would apply to "managed 80x86 assembly". If I took (e.g.) C# code and compiled it directly to native code with none of the run-time checks then it would be "unmanaged" and almost all of the things you're incorrectly attributing to "managed" would still apply to "unmanaged C#".

I'd be tempted to say that the best use of "managed" is during testing (where the performance doesn't matter and a higher chance of finding bugs sooner is more important) - e.g. use "managed" to find the bugs, then (after the bugs are found/fixed) compile it as unmanaged for end users to use without unnecessary overhead. This includes (e.g.) running C/C++ code inside a managed environment like Valgrind during testing, and includes compiling (e.g.) Java to native/unmanaged with something like GCJ.
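A hedged sketch of "managed during testing, unmanaged for release" in plain C (the macro name is made up): compile with the checks on while testing, and the same source becomes an unchecked load when they're compiled out.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical switch: define BOUNDS_CHECKS while testing, leave it
 * undefined (or 0) for the release build. */
#define BOUNDS_CHECKS 1

static int at(const int *arr, size_t len, size_t i)
{
#if BOUNDS_CHECKS
    if (i >= len)
        abort();            /* fail fast and loudly during testing */
#endif
    return arr[i];          /* release build: plain unchecked load */
}
```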

An even better idea is to run the code in an interpreted testing environment where you can stop it anywhere and investigate all the variable's contents, modifying the code while it's suspended, etc; and where you can run code in "immediate mode" (e.g. you stop the code and get a command prompt, then enter code directly at the command prompt to execute immediately in the context of the now stopped program to explore what would happen). This is actually what I had when I first started programming - Commodore 64 BASIC, which is an "almost managed" language (it had "peek" and "poke" to read and write arbitrary RAM and "sys" to execute arbitrary RAM, but excluding those it was managed), combined with a compiler that would compile "almost managed" BASIC to "unmanaged" machine code.
embryo wrote:
Brendan wrote:When software crashes, do you think end users will be glad that the problem was detected by software and not hardware? Do you think developers will be able to write code in C faster with the new C compiler? Do you think the additional overhead would be worthwhile?
If a problem is detected by managed software, then it is guaranteed that the software can still perform every action except the failed one. For example, if a Swing action handler throws a NullPointerException, the whole application still works as expected except for the one action handler where the developer made a mistake. The user just stops using that action, and more than 99% of the software's functionality is still absolutely usable. Next, if the developer pays attention to software quality, he just reads the application's problem logs periodically and can find the exact line of code where the NullPointerException was thrown. And then it's just a matter of a few minutes to add the required checks. In more complex situations it is possible to track down the root cause of the NullPointerException and avoid the checking overhead. And if we compare this situation with the improved C compiler, then it becomes obvious that the answer to the question "Do you think developers will be able to write code in C faster with the new C compiler?" is no. The lack of a whole managed environment, with its reliability and its stack traces stripped of platform-dependent complexities, prevents a C-based solution from achieving developer productivity comparable with managed solutions. And the same goes for the user experience with managed vs unmanaged solutions.
You can do exactly the same with (e.g.) exception handling in C++ (or exception handling in any language that supports it, regardless of whether it's managed or not); and it's just another thing you're incorrectly attributing to "managed".
embryo wrote:
Brendan wrote:The problem is that we're detecting problems at run-time.
No. We are trying to kill every problem just before it is born. And we definitely have some success along this way, in contrast with unmanaged solutions.
Brendan wrote:but what if we actually were able to guarantee that there are no problems left to detect at run-time (e.g. by guaranteeing that all possible problems will be detected during "ahead of time" compiling)? In that case (at least in theory) developers would be able to develop software faster and there wouldn't be any run-time overhead either; however there also wouldn't be any difference between "managed" and "unmanaged"
Problem hunting is just one part of the saga. Another part is about freeing the developer from tedious work like memory management, endianness issues, or differing type widths. In unmanaged solutions those boring tasks also present a stability and correctness threat, but the tediousness of the tasks is also very visible, and it is the unmanaged solution that insists on the boredom for the developer.
Great. I'll add garbage collection to the growing list of things you've mistakenly attributed to "managed".
embryo wrote:
Brendan wrote:You can used a managed language for both security/isolation and correctness, but security/isolation and correctness are still 2 different things.
Ok, they are different. But they should be considered in a context where it is hard to decouple one from the other. And the synergistic effect of fighting both problems together is much greater than if we fought them one after another.
After you've found all the bugs and you no longer care about correctness; why would you want additional overhead for no reason forever? You'd only want security/isolation.

Note: If you assume that it's impossible to know if software still has bugs or not and therefore the additional overhead is always justified; then you must also assume that it's impossible to know if the compiler and/or "managed environment" still has bugs and can't be trusted.


Cheers,

Brendan

Re: What do you think about managed code and OSes written in

Post by embryo »

Brendan wrote:For example, there's no reason you can't compile an unmanaged language down to some sort of portable byte-code and then compile that byte-code to native when the program is installed and/or running;
Even in the naive sense that the phrase "managed code" carries, we can see that code management (e.g. compilation) should be automatic. So compiling the bytecode when it is installed or run is a management activity, and the managed subject is the bytecode.
Brendan wrote:and no reason you can't use (e.g.) something like libraries to hide the differences between platforms.
The reason here is the need for efficiency, which many developers wrongly equate with coding low-level things. Low-level code on a particular platform requires that platform differences are, and always will be, accessible.
Brendan wrote:you're finding "desirable attributes" (easier bug finding, faster development, more portability, more standard APIs, etc) and pretending they only apply to managed languages when all of them have nothing to do with managed vs. unmanaged whatsoever.
Once managed code has been compiled into a native representation it becomes unmanaged (or managed by hardware-defined means). But the act of management has already been executed, and only because it took place did the code gain the ability to run.

In contrast, unmanaged code is also managed, but only by a developer, with all the tedious steps coded manually in scripts or whatever it takes. So we have automatic management vs manual management. All those features (debugging, development speed, learning, portability and so on) are heavily influenced by this split of responsibilities. Debugging is easier due to automated run-time information management, portability is a result of actions like automatic conversion, development speed is better because of automated memory management, and so on. All those areas are automated and managed by software instead of by the developer. So if you add those features to an unmanaged language, then you will get a managed solution, with all the automatic management in place.
Brendan wrote:If I take 80x86 assembly and run it inside a managed environment (e.g. some sort of virtual machine, possibly using JIT, and possibly even on a very different CPU like ARM) then it is "managed"
Wrong. Management can be performed at any stage of the code's life, from initial writing up to application uninstall. And the difference is in the amount of work that we have to do manually (in an unmanaged fashion, where the process is ad hoc and very error-prone). It's like crafting things by hand vs using a fully automated conveyor.
Brendan wrote:If I took (e.g.) C# code and compiled it directly to native code with none of the run-time checks then it would be "unmanaged" and almost all of the things you're incorrectly attributing to "managed" would still apply to "unmanaged C#".
Unless you compile the code under the management of a virtual machine, all your activity is unmanaged (i.e. handcraft).
Brendan wrote:I'd be tempted to say that the best use of "managed" is during testing (where the performance doesn't matter and a higher chance of finding bugs sooner is more important) - e.g. use "managed" to find the bugs, then (after the bugs are found/fixed) compile it as unmanaged for end users to use without unnecessary overhead.
It's almost as you have painted it, but with the final step also automated - a JIT compiles the code into unmanaged form for you. And if you have control over the JIT you can order it to remove the runtime checks (if you are sure that there are no more bugs), so efficiency will not be compromised.
Brendan wrote:An even better idea is to run the code in an interpreted testing environment where you can stop it anywhere and investigate all the variable's contents, modifying the code while it's suspended, etc; and where you can run code in "immediate mode"
But don't forget the next automated step - just let the JIT do it for you.
Brendan wrote:You can do exactly the same with (e.g.) exception handling in C++ (or exception handling in any language that supports it, regardless of whether it's managed or not); and it's just another thing you're incorrectly attributing to "managed".
Wrong. Having an illegal pointer is a legal state for an unmanaged program. If you are lucky and a GP fault occurs right after the wrong pointer is followed, then it is possible to recover your program; but if luck is not with you that day, then corrupted memory will defeat any effort to manage the unmanageable situation.
Brendan wrote:Great. I'll add garbage collection to the growing list of things you've mistakenly attributed to "managed".
And next you should add many more things, until your code really is managed by an automated process. Having just one wheel doesn't mean you can drive your car. A car is a complex of parts, and every part is important. You can't just throw away some gear and hope that the car is still managed.
Brendan wrote:After you've found all the bugs and you no longer care about correctness; why would you want additional overhead for no reason forever? You'd only want security/isolation.
Security/isolation after "you've found all the bugs" is a matter of hardware failures. And again we are just discussing how quickly an application crashes. Or it is a misbehaving user, who presses the brake when he wants his car to go. Both cases are equally bad for managed and unmanaged solutions, because they are outside the planned (and bug-free) system behavior.

Re: What do you think about managed code and OSes written in

Post by Brendan »

Hi,

Let's simplify this entire thing.

Bugs can be categorised as follows:
  • Problems that can be detected at compile time
  • Problems that can't be detected at compile time, but can be detected by either software or hardware at run-time
  • Problems that can't be detected at compile time or by hardware, but can be detected by software
  • Problems that can't be detected in an automated/systematic way, where human effort (in the form of unit tests, bug reports from end users, etc) is required
For a suitably well designed language (e.g. not C with all its silly undefined behaviour) that's used by a suitably competent programmer, the majority of problems fall into the first category or last category.

For the minority of problems that fall into the second and third categories, you can't assume that the code isn't malicious (e.g. deliberately designed to exploit inevitable bugs in the compiler or environment) and hardware security/isolation is a significant part of the defence against this. Also, the security/isolation provided by the hardware is often unavoidable (e.g. paging is required for other reasons that don't involve security/isolation so it costs nothing extra to also use it for security/isolation). For these reasons; for problems in the second category it would be foolish to rely on software checks alone (rather than hardware checks alone, or both software and hardware checks).

For bugs in the third category; the additional complexity in the compiler/environment, the overhead of run-time checks, and the performance loss caused by preventing the programmer from using lower level approaches (e.g. inline assembly); combined with the very small number of bugs that fall into the third category; mean that the advantages of software checks at run-time ("managed") are not justified by the disadvantages.

All other things (portability, garbage collection, exceptions, libraries, syntactic sugar, etc; and the entire "finished product performance vs. developer time" compromise) have nothing to do with managed vs. unmanaged.


Cheers,

Brendan

Re: What do you think about managed code and OSes written in

Post by Rusky »

Brendan wrote:For bugs in the third category; the additional complexity in the compiler/environment, the overhead of run-time checks, and the performance loss caused by preventing the programmer from using lower level approaches (e.g. inline assembly); combined with the very small number of bugs that fall into the third category; mean that the advantages of software checks at run-time ("managed") are not justified by the disadvantages.
If you're including array bounds checks or (on processors where hardware checks are slower or unavailable) overflow checks in that category, I strongly disagree. While some of these checks can be eliminated by type systems with numerical ranges, most of the time that type system will just force the checks to be added somewhere anyway. Further, this is actually a rather large class of very important bugs - buffer overflows can be caused by both kinds of overflow. The overhead of checking here, whether enforced by the type system or added automatically by the compiler for a small performance loss (there are other ways to eliminate bounds checks in this case, like using iterators), is very much worth it.

Slightly slower overall program execution is definitely worth not having accidental remote code execution vulnerabilities.
User avatar
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: What do you think about managed code and OSes written in

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:For bugs in the third category; the additional complexity in the compiler/environment, the overhead of run-time checks, and the performance loss caused by preventing the programmer from using lower level approaches (e.g. inline assembly); combined with the very small number of bugs that fall into the third category; mean that the advantages of software checks at run-time ("managed") are not justified by the disadvantages.
If you're including array bounds checks or (on processors where hardware checks are slower or unavailable) overflow checks in that category, I strongly disagree. While some of these checks can be eliminated by type systems with numerical ranges, most of the time that type system will just force the checks to be added somewhere anyway. Further, this is actually a rather large class of very important bugs: buffer overflows can be caused by both kinds of overflow. The overhead of checking here, whether enforced by the type system or added automatically by the compiler for a small performance loss (there are other ways to eliminate bounds checks in this case, like using iterators), is very much worth it.
For overflows, all of the bugs can be found at compile time and fixed by increasing variable sizes or restricting input values, with no run-time checks (manually or automatically inserted). For statically allocated arrays, all of the bugs can be found at compile time and fixed by using ranged types for indexing, again with no run-time checks (manually or automatically inserted).
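(An editorial sketch of Brendan's "fix it at compile time" approach, not his code: when the compiler knows each input fits in 16 bits, widening the result type makes overflow impossible, so no run-time check is needed at all.)

```c
#include <stdint.h>

/* If both inputs are provably 16-bit, the "increase variable sizes"
 * fix is to compute in 32 bits: the maximum possible result is
 * 65535 + 65535 = 131070, which always fits, so overflow cannot
 * happen and no check is required. */
uint32_t sum_no_overflow(uint16_t a, uint16_t b)
{
    return (uint32_t)a + (uint32_t)b;
}
```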

That only leaves dynamically allocated arrays; which can either be replaced by a pool of statically allocated arrays, or by pointers. We all know pointers aren't safe, but when used correctly (e.g. by a competent and cautious programmer) their speed and flexibility more than make up for that.
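(A hedged editorial sketch of the "pool of statically allocated arrays" idea mentioned above; the names and sizes are invented for illustration. A fixed number of fixed-size buffers are handed out by index, so every buffer's bounds are known to the compiler.)

```c
#define POOL_SLOTS 4
#define SLOT_SIZE  64

/* Statically allocated pool: no dynamic allocation, every slot has a
 * compile-time-known size. */
static unsigned char pool[POOL_SLOTS][SLOT_SIZE];
static int slot_used[POOL_SLOTS];

/* Returns a slot index in 0..POOL_SLOTS-1, or -1 if the pool is full. */
int pool_alloc(void)
{
    for (int i = 0; i < POOL_SLOTS; i++) {
        if (!slot_used[i]) {
            slot_used[i] = 1;
            return i;
        }
    }
    return -1;
}

void pool_free(int slot)
{
    if (slot >= 0 && slot < POOL_SLOTS)
        slot_used[slot] = 0;
}
```

The trade-off, of course, is that the pool's capacity is fixed at compile time, which is exactly why the debate moves on to pointers.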
Rusky wrote:Slightly slower overall program execution is definitely worth not having accidental remote code execution vulnerabilities.
Except "slightly slower" can be a 20 times performance difference, and "accidental remote code execution vulnerabilities" is a straw man.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
embryo

Re: What do you think about managed code and OSes written in

Post by embryo »

Brendan wrote:Bugs can be categorised as follows:
  • Problems that can be detected at compile time
  • Problems that can't be detected at compile time, but can be detected by either software or hardware at run-time
  • Problems that can't be detected at compile time or by hardware, but can be detected by software
  • Problems that can't be detected in an automated/systematic way, where human effort (in the form of unit tests, bug reports from end users, etc) is required
Put more briefly, it can be presented as a matrix with rows and columns like this:
Rows:
- compile time detection
- run time detection
Columns:
- automated detection
- manual operations

So your phrase:
Brendan wrote:For a suitably well designed language (e.g. not C with all its silly undefined behaviour) that's used by a suitably competent programmer, the majority of problems fall into the first category or last category.
Can be compared with this: the majority of problems fall into the manual-operations column, simply because automated problem prevention works fine once it reaches some maturity. So it is obvious that a better software delivery process should minimise the most problematic part of the matrix, the manual-operations column. And that is just what managed environments are trying to accomplish. At least I see this categorisation as more suitable.
Brendan wrote:For the minority of problems that fall into the second and third categories, you can't assume that the code isn't malicious (e.g. deliberately designed to exploit inevitable bugs in the compiler or environment) and hardware security/isolation is a significant part of the defence against this. Also, the security/isolation provided by the hardware is often unavoidable (e.g. paging is required for other reasons that don't involve security/isolation so it costs nothing extra to also use it for security/isolation). For these reasons; for problems in the second category it would be foolish to rely on software checks alone (rather than hardware checks alone, or both software and hardware checks).
Of course we can use hardware protection even within a managed environment. But the question is about the unmanaged environment's insistence on hardware protection only, where everything outside the hardware protection is handed off to the programmer, be it security, isolation or whatever else. Here again I should repeat my automation-hungry position: why should I do those tedious things myself instead of having a program do them for me?
Brendan wrote:For bugs in the third category; the additional complexity in the compiler/environment, the overhead of run-time checks, and the performance loss caused by preventing the programmer from using lower level approaches (e.g. inline assembly); combined with the very small number of bugs that fall into the third category; mean that the advantages of software checks at run-time ("managed") are not justified by the disadvantages.
I see some overcomplication here. A lot of complex entities are mixed into one phrase, and then a short conclusion follows about "not justified by the disadvantages". But there are too many entities here to draw such a simple conclusion without any detailed explanation.

So I ask a simple question: do you want software to free you from some tedious work? If yes, then that is exactly what making our development environment managed is about. It is just that simple.
HoTT
Member
Posts: 56
Joined: Tue Jan 21, 2014 10:16 am

Re: What do you think about managed code and OSes written in

Post by HoTT »

What exactly makes a language a managed one? I think there are at least two definitions flying around, both constantly changing. The discussion doesn't make much sense this way.
User avatar
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: What do you think about managed code and OSes written in

Post by Rusky »

Brendan wrote:For overflows all of the bugs can be found at compile time, and fixed by increasing variable sizes or reducing input values with no (manually inserted or automatically inserted) run-time checks. For statically allocated arrays all the bugs can be found at compile time, and fixed by using ranged types for indexing with no run-time (manually inserted or automatically inserted) checks.
None of these are true, because of (among other things) I/O.

For example, statically-sized stack buffers used to read in or operate on data from the user, the network, the disk, other programs, etc. must be bounds-checked at run-time, either directly through an index range check, or indirectly through ranged types or simply the structure of the code working with the buffer.
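(An editorial illustration of this point, with an invented function name: data arriving from outside has a length the compiler cannot know at compile time, so some run-time check must bound it before it reaches a statically-sized buffer.)

```c
#include <stddef.h>
#include <string.h>

#define BUF_SIZE 16

/* src_len comes from the network/disk/user, so its range is unknowable
 * at compile time; the length check below is the unavoidable run-time
 * check, whether written by hand or inserted by a compiler. */
int store_packet(unsigned char *dst /* BUF_SIZE bytes */,
                 const unsigned char *src, size_t src_len)
{
    if (src_len > BUF_SIZE)   /* reject instead of overflowing dst */
        return -1;
    memcpy(dst, src, src_len);
    return 0;
}
```

Omitting that one comparison is precisely how classic stack-buffer overflows happen.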

Ranged types to prevent overflow/overrun can be a great tool to manage those checks, but you often end up with a value whose range is too big, either from I/O or from normal operations on the value. How do you convert it to a type with the correct range? A run-time check, somewhere on the spectrum between manually and compiler-inserted.
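(A hedged editorial sketch of the range-narrowing conversion described above, with invented names: turning a value of unknown range, e.g. fresh from I/O, into one that is provably a valid index. The conversion itself *is* the run-time check.)

```c
#include <stdint.h>
#include <stdbool.h>

/* Narrow a raw 32-bit value to the range 0..array_len-1. On success
 * *idx is a value a ranged type system could trust as an index; on
 * failure the caller must handle the out-of-range case. */
bool narrow_to_index(uint32_t raw, uint32_t array_len, uint32_t *idx)
{
    if (raw >= array_len)
        return false;   /* outside the target range */
    *idx = raw;         /* now provably in range */
    return true;
}
```

Whether this comparison is typed by the programmer or emitted by the compiler behind a ranged-type conversion, it executes either way, which is Rusky's point.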
Brendan wrote:Except "slightly slower" can be a 20 times performance difference
In a properly-written system that is rare, and it can be worked around case by case without turning the checks off by default.
Brendan wrote:"accidental remote code execution vulnerabilities" is a straw man.
No, it is the leading result of not doing proper bounds checking, and is thus the biggest problem solved by either compile-time or run-time, manual or enforced or automatic, overflow/bounds checking.