C++-like exception handling in kernel

vvaltchev
Member
Posts: 274
Joined: Fri May 11, 2018 6:51 am

Re: C++-like exception handling in kernel

Post by vvaltchev »

Ethin wrote:Again, I must disagree -- I think you're overthinking the solution to this problem. Preventing violations of noexcept is as simple as ensuring that signatures -- including the noexcept attribute -- match. You shouldn't need to modify the ABI, only the language.
OK, I'll try one last time. Consider the following two cpp files:

Code: Select all

/* -------------------------------- lib.cpp --------------------------------- */
void foo(char *arg) /* here the function foo() accepts a char* parameter */
{
	/* ... */
}

void bar(const char *arg)
{
        /* ... */
}

/* --------------------------------- main.cpp ----------------------------- */
void foo(const char *arg); /* here the function is declared to accept a const char* parameter */

int main(int argc, char **argv)
{
	foo(argv[0]);
	return 0;
}
Now, let's compile them and link the two objects into a binary:

Code: Select all

$ gcc -c lib.cpp
$ gcc -c main.cpp
$ gcc -o app lib.o main.o
What do you think will happen in this case? This:

Code: Select all

$ gcc -o app lib.o main.o
/usr/bin/ld: main.o: in function `main':
main.cpp:(.text+0x1e): undefined reference to `foo(char const*)'
collect2: error: ld returned 1 exit status
That's because constness affects the ABI. Let's look at the details:

Code: Select all

$ nm lib.o
000000000000000f T _Z3barPKc
0000000000000000 T _Z3fooPc
$ nm -C lib.o
000000000000000f T bar(char const*)
0000000000000000 T foo(char*)
As you can see, the mangling is different: there is an extra K in the symbol for bar(). That allows us to detect a lot of errors at link time. Imagine what would happen if the mangling were the same: we'd get no error, so we could continue to call foo() expecting that it will never change the contents of *arg, while in reality foo() does change that. In large code bases where many people work together, it is essential to avoid mistakes like that.

Why should things be any different for noexcept? I don't understand why you (given the choice) wouldn't prefer to enforce, at link time, that declarations match their implementations.

The only reason I can think of for why it's not the case today is that putting noexcept into the C++ ABI would have broken a lot of things, and the migration from C++98's throw() to C++11's noexcept would have been quite painful. Btw, I was mistaken mentioning the C++ committee before, because the ABI is not part of the language at all and it's not standard across compilers, even on the same platform. Fortunately, Clang adopted the same ABI as GCC for C++.

Just for completeness, I'll show what happens when noexcept is used:

Code: Select all

$ cat lib2.cpp
void foo() noexcept(false) { }
void bar() noexcept(true) { }
$ gcc -c lib2.cpp
$ nm lib2.o
000000000000000b T _Z3barv
0000000000000000 T _Z3foov
Absolutely nothing, as you can see: the mangled names carry no trace of the noexcept specifier.
Tilck, a Tiny Linux-Compatible Kernel: https://github.com/vvaltchev/tilck
nullplan
Member
Posts: 1769
Joined: Wed Aug 30, 2017 8:24 am

Re: C++-like exception handling in kernel

Post by nullplan »

No vvaltchev, both Ethin and I understood what you mean and want to do; we just disagree that it's a worthwhile goal. Divergent signatures have been a problem for a long time already, and they are just as easily solved by header files (by including the header files also in the modules that define the functions). A mismatch between throw() and noexcept would be found that way. Only the mistake you have shown would not be caught, because what you showed is an example of function overloading: foo(const char*) is actually a different function from foo(char*) in C++. Therefore a linker error is appropriate, since a function is called that is never defined.

If you have a large codebase that uses C or C++ and no header files, sounds like you have work to do (namely to write those header files). BTDT.
Carpe diem!
vvaltchev
Member
Posts: 274
Joined: Fri May 11, 2018 6:51 am

Re: C++-like exception handling in kernel

Post by vvaltchev »

nullplan wrote:No vvaltchev, both Ethin and I understood what you mean and want to do; we just disagree that it's a worthwhile goal. Divergent signatures have been a problem for a long time already, and they are just as easily solved by header files (by including the header files also in the modules that define the functions). A mismatch between throw() and noexcept would be found that way. Only the mistake you have shown would not be caught, because what you showed is an example of function overloading: foo(const char*) is actually a different function from foo(char*) in C++. Therefore a linker error is appropriate, since a function is called that is never defined.
Sure, I completely agree that header files must be used. I just wanted to point out that it's too easy to cheat when noexcept is not part of the ABI. Unfortunately, there are cases where people declare an internal function without putting it into a header, and also cases where, due to different #defines, the function signature in the header differs from the one in the C or CPP file, even if the source file includes the header itself. I'm not saying that I've seen such a case with noexcept (being conditional on #defines), but I've seen various mismatches of function signatures over the years (e.g. one parameter missing, etc.).

Anyway, I understand your point: the mangling is different mostly because of overloading, not because of type safety. Therefore, if we don't support noexcept overloading, why should we have different mangling for noexcept functions? I get that point now. Indeed, in C there is no overloading, therefore there is no mangling: if, for any reason, you end up with the wrong signature, you'll hit UB. I get that, but I still prefer to reduce the possibility of human error, in particular for C++, where everything is about type safety. Consider that in MSVC, if you try cheating by re-defining a class with private replaced by public (because you want to touch the private fields), the code won't work, because in debug builds MSVC adds padding just to stop you from doing that. It might seem too much, but I like it.

Also, theoretically, it might make some sense to support noexcept overloading: when a noexcept function calls foo(), it calls the noexcept version; when a regular function calls it instead, it calls the regular (throwing) version. Agreed, a noexcept version should have a different signature anyway (e.g. use return codes), but still... it might be cool for some cases. I'm thinking again about the const overloads for methods in classes: if a method is const, all the methods it calls on the same object are the const overloads (if such overloads exist), because the "this" pointer is const. I haven't thought a lot about this idea, it just came to my mind. It might just as well be a terrible idea, I don't know.
nullplan wrote:If you have a large codebase that uses C or C++ and no header files, sounds like you have work to do (namely to write those header files). BTDT.
No no, I never thought about not using headers. My example was just more convenient this way.
Tilck, a Tiny Linux-Compatible Kernel: https://github.com/vvaltchev/tilck
Ethin
Member
Posts: 625
Joined: Sun Jun 23, 2019 5:36 pm
Location: North Dakota, United States

Re: C++-like exception handling in kernel

Post by Ethin »

I know what you're saying, but you're still overthinking the problem. You don't want to change the ABI for something like noexcept. I'm not even sure how you could, especially since the ABI (if I'm not mistaken) does not specify how exceptions are passed over the ABI boundary. Why do you think dealing with C++ exceptions over FFI is so hard?
When it comes to attributes or specifiers within C++, your focus should solely be on the language. The noexcept specifier, just like the nodiscard attribute, is internal to the language. As such, you have a lot more freedom: the compiler could very trivially enforce noexcept-specifier contracts. The only reason compilers don't do this is that the standard says implementations shouldn't reject violators of the noexcept specifier, even though rejecting them would be the logical thing to do. Claiming that the ABI needs to be changed just to enforce noexcept is like claiming that the ABI should be changed to enforce the nodiscard attribute: this is a language-level item, not an ABI item. The ABI couldn't care less that your function is marked nodiscard; attributes such as nodiscard or noexcept have no meaning at the ABI level.
vvaltchev
Member
Posts: 274
Joined: Fri May 11, 2018 6:51 am

Re: C++-like exception handling in kernel

Post by vvaltchev »

Ethin wrote:I know what you're saying, but you're still overthinking the problem.
Don't judge my thinking process just because you disagree with my ideas :-)
Ethin wrote:You don't want to change the ABI for something like noexcept.
That is a subjective statement. Clearly, there is a high price for changing the ABI and I understand why compiler engineers preferred not to, but, ultimately, everything is arbitrary. There are no "laws" of computer science that would force one decision or another. It's about making trade-offs. It's possible that, if I was in the position to decide, I would have taken the same decision (to not include it in the ABI), but because of cost effectiveness, not because it wouldn't be useful.
Ethin wrote:I'm not even sure how you even could, especially since the ABI (if I'm not mistaken) does not specify how exceptions are passed over the ABI boundary. Why do you think dealing with C++ exceptions over FFI is so hard?
The ABI for exceptions and anything else in C++ is well defined for each <compiler, platform> pair, but the problem is that it's not part of any official standard, and multiple compilers couldn't agree to commit to a single ABI for a given platform. For C, we have cdecl, stdcall, fastcall, etc. We also have the Microsoft x64 and System V AMD64 calling conventions for x86_64, and who knows how many ABIs for other architectures. For C++, compiler engineers wanted more flexibility, so they couldn't agree on anything in particular. Other compiled languages don't have a standard, documented ABI either: not having a formally specified ABI means more flexibility for future changes. That's why doing FFI means using the C ABI of the platform: it is the common denominator. Therefore, with FFI we cannot throw exceptions, because in C there is no such thing.
Ethin wrote:When it comes to attributes or specifiers within C++, your focus should solely be on the language. The noexcept-specifier, just like the nodiscard-attribute, is internal to the language.
That is subjective. Nodiscard cannot change the control flow, nor allocate special objects on the stack: it just triggers a warning if you ignore the return value. A noexcept(false) specifier, instead, means that the function might throw an exception. That changes everything, like changing the return type of a function: no matter how carefully you called the function and prepared to check its return value, it might not be enough. You have to call that function from exception-safe code which won't be called from a destructor, or be prepared to catch an exception with try/catch. This is a more serious requirement than nodiscard, but I acknowledge that the comparison makes sense. Choosing what to include in the ABI is arbitrary, and the choice goes from exclusively the strictly necessary (the simplest path) to every possible attribute (hell).
Ethin wrote:As such, you have a lot more freedom: the compiler could very trivially enforce noexcept-specifier contracts. The only reason that compilers don't do this is because the standard says that implementations shouldn't reject violators of the noexcept-specifier, even though that's the logical thing to do.
We agree here: the ISO standard is weird. I guess it was another trade-off to facilitate the conversion from throw() to noexcept().
Ethin wrote:Claiming that the ABI needs to be changed just to enforce noexcept is like claiming that the ABI should be changed to enforce the nodiscard attribute: this is a language-level item, not an ABI item. The ABI couldn't care less that your function is marked nodiscard; attributes such as nodiscard or noexcept have no meaning at the ABI level.
What has meaning at the ABI level and what doesn't is arbitrary: there are no "laws of CS for writing ABIs" that forbid anything. Simply, compiler engineers prefer to keep the ABI to the minimum possible and to avoid changes at all costs (a trade-off, not a law!). Changing the ABI was not mandatory to support noexcept and would have caused quite some pain. Why go through all of that just to add some extra safety for developers? It would have been too expensive.

Let me argue more about that: making a breaking change in the ABI, even for a specific compiler on a specific platform, would mean having to re-compile all the libraries (and their dependencies) before being able to compile applications, unless the change is somehow backwards-compatible. If that happened with noexcept, we would have likely had many C++ libraries distributed in two binary forms, regular and "-ex" ABI (e.g. "gcc-ex"). Gradually, we would have moved to the -ex version. But, yeah, unless the change was 100% backwards compatible somehow, it would have been a mess for quite some time. That's why I believe nobody even considered going down that road.

Just don't think that all the decisions made by committees and compiler engineers are driven by "what is better, on the absolute scale". Most of the time, it's all about: how do we maximize the benefits of a given feature with minimal effort?
Tilck, a Tiny Linux-Compatible Kernel: https://github.com/vvaltchev/tilck