Solar wrote:If you use one of the features that implies such overhead, yes. Those are features that C doesn't have. The question is, would you be able to create the same semantics, in C, without using up at least as much space?
Here I was thinking more along the lines of objects, rather than "peripheral" semantics such as exceptions. Constructors/destructors, accessors/mutators, multiple inheritance, and virtual base classes, to name a few. This kind of functionality is more central to C++'s design, and C gets along without it.
Solar wrote:BCPL is the ancestor of C. It was superseded because C was considered more practical. Now, I don't say that C++ could or should supersede C, but it is an advancement.
It is an advancement in terms of both features and complexity. I feel that the added complexity is inherently counterproductive to kernel development. This is subjective, but it's my opinion on the matter.
Solar wrote:What was said about the C++ committee... Yes, they extended the standard over time, as new issues arose. At times, that required adding another keyword, making some identifiers illegal. So what? We are talking about one global search & replace here! BTW, C99 isn't fully compatible to K&R C, either...
The concern here is that the C++ committee does not feel as strong a need to maintain backwards compatibility with existing C code as the C99 standards people do. Extensions and modifications to the C standard tend to be less obtrusive to existing C code.
Solar wrote:Then, the fragile base class problem... that is an issue for C++ interfaces, i.e. when you export a C++ API through syscalls. That's a whole different decision from writing a kernel in C++. I'm not even sure if that problem still persists now that the standard C++ ABI is around. (Hadn't had the time to check this yet.)
I think the C++ kernel interface _does_ play a critical role in choosing to write a kernel in C++, and should be part of the same decision. The kernel interface to device drivers and user applications is very important.
The new standard C++ ABI will resolve binary incompatibilities among different versions of g++, although I didn't read anything about ameliorating FBC. Even if it were addressed, it would introduce overhead of its own. Every FBC solution I've read about in other programming languages has the offset tables generated at run-time, which imposes a speed penalty.
Solar wrote:Then, the aesthetics... well, that's personal preference, and certainly not a technical reason one way or another.
Agreed.
Solar wrote:Portability... well, really fine C++ code is more portable than C code, in the sense of modularity. But that would be going too far. But one other reason to use C over Assembler is readability, maintainability, abstraction. And especially on the abstraction level C++ beats C hands-down.
Modularity encourages code reuse and interchangeability, but not really code portability (in the sense of platform independence). C++ does have more abstraction than C (due to its object-oriented nature), but I'd argue that at times OO's abstraction is too much and becomes a burden. Too much data hiding, complex class hierarchies, and in general so much abstraction that at times it's hard to tell exactly what's going on when you need to. But I digress; this is heading off into personal software design preferences again.
Solar wrote:But this is, basically, bickering.
You say that, if someone can't live without C++, he should go for it. I say that, if someone can do C++, he should go for it. That's probably because I can do, and you can do without, but in the end, we basically agree.
I'll agree with you on that.