babernat wrote:Which kind of leads me to the (obvious?) conclusion that any type of exceptions in OS code (whether caught or not) is to be avoided.
I think the reason it tends to be avoided in kernel code is due to its run-time overhead. Conceptually, it is possible to use it, just expensive.
The reason I'm thinking of this is because by trade I'm a Java programmer. The language uses exceptions liberally which is something I disagree with.
Exception specifications in Java make exceptions a lot more painful than they need to be. I've used exceptions in a lot of production code written in C++ and C# over the years and it's really made things easier to read and debug.
Exceptions in general make code hard to analyze and debug.
It's true that they prevent purely local reasoning when looking at code, but there are ways to mitigate this. For example, in C++ you can use the RAII idiom to ensure proper cleanup of resources in the presence of exceptions. Java lacks this facility, requiring the use of "finally" blocks. C# has "using" blocks, which are nicer than "finally" blocks, but still can't beat RAII in C++.
In terms of debugging, I think it actually helps in Java because you get a nice stack trace that shows you exactly where the failure occurred. In gnarly C code, it's hard to figure out which return code corresponds to the symptom, or worse, whether one was silently dropped.
In my experience, if exceptions are used properly they make the code much more readable by focusing on the common case.
My logic being: if I know something might go wrong, why would I not account for that case in my API instead of using an exception, which could cause unforeseen problems, unnecessary stack unwinding, etc.?
Exceptions are part of any API that uses them.