Re: C pros and cons (was: I want to create an OS!!!)
Posted: Sun Feb 17, 2013 6:18 pm
zeusk wrote: I use C because it's simple,

Its apparent simplicity comes from a lack of (often useful) features. But it's actually not simple at all; it has the most obscene pitfalls, which programmers are forced to learn if they wish to avoid them. And I'm not talking about one or two; I'm talking about hundreds of such instances. It is for this reason that most people think they know C when they actually only have a superficial understanding of it. And I wouldn't blame them too much, as many of the assumptions they make about the language are natural.
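To give a flavor of what I mean (a made-up snippet of my own, not from any particular codebase), here are two of the classic traps: array parameters silently decaying to pointers, and signed overflow being undefined behavior.

Code:

#include <stdio.h>

/* The parameter only looks like an array; it decays to a pointer,
 * so sizeof(buf) is the size of a pointer, not 64. */
void clear(char buf[64])
{
    for (size_t i = 0; i < sizeof(buf); i++)  /* clears 4 or 8 bytes, not 64 */
        buf[i] = 0;
}

int main(void)
{
    char buf[64];
    clear(buf);  /* silently leaves most of buf untouched */

    /* Signed overflow is undefined behavior: a compiler is allowed to
     * assume i + 1 > i always holds and drop this check entirely. */
    int i = 2147483647;  /* INT_MAX on typical platforms */
    if (i + 1 < i)
        puts("overflow detected");  /* may never execute */

    return 0;
}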
zeusk wrote: i know it well

Just because you know something doesn't mean you should close yourself off from learning new things. Suppose you want to eat some steak. If you know how to use a spoon but not a fork, do you think it is a better idea to use the spoon or to learn about the fork?
zeusk wrote: and it gets the job done.

Just like a spoon can get the job done. But often, it gets the job done in a lousy manner.
zeusk wrote: I've also used various other languages such as C#, C++ etc at work but never felt that the functionality they provide over C are worth the size, performance and portability issues they bring with them for use inside an OS.

First of all, most general-purpose languages don't make performance requirements. So we're really talking about language implementations. Once upon a time, C++ was shunned because it was considered inherently slow. Nowadays, programs written in C++ exhibit performance characteristics that are extremely similar to those of their C equivalents. You can't really argue about languages, only about the current unavailability of satisfactory tools.
Now that that's out of the way, let's focus on what people really care about. I have the following points to make:
- The primary cause of slow software is bad design. The next is the use of inefficient algorithms (see the example after this list).
- Things don't need to be perfect; they need to be good enough (even C makes compromises that reflect this reality). According to the 80-20 rule, 80% of the time is spent in 20% of the code. So why should the other 80% of the code be slower to develop, harder to maintain, and more likely to contain bugs?
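To illustrate the point about algorithms (a toy comparison of my own, not something from this thread): no amount of micro-optimization of the first function below will keep up with the second one once the data grows, because the algorithmic difference dwarfs any constant-factor tuning.

Code:

#include <stddef.h>

/* O(n): examines every element in the worst case. */
int linear_search(const int *a, size_t n, int key)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] == key)
            return (int)i;
    return -1;
}

/* O(log n): requires a sorted array; roughly 20 comparisons for a
 * million elements, versus up to a million for the linear version. */
int binary_search(const int *a, size_t n, int key)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key)
            lo = mid + 1;
        else if (a[mid] > key)
            hi = mid;
        else
            return (int)mid;
    }
    return -1;
}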
zeusk wrote: (or if OS code needs to be portable at all, usually it's well defined for a few architectures/VM only),

Linux seems to be very successful in contradicting your little rule of thumb. It has been ported to a huge number of platforms, so it can be done. I'm sure you think it's a good idea, too. Why dismiss as a bad requirement something you already know to be worthwhile?
zeusk wrote: just more easier to port than equivalent C#, C++ stuff.

I am uncertain about what you mean here, so I don't know how to respond. Do you mean in general, or for embedded systems, which you mention in the following sentence?
zeusk wrote: Having worked with embedded stuff gave me another reason to use C, C compilers support far more architectures/CPUs than C# (which is mostly run on a MS VM) or C++.

I will grant you that C# is not as prevalent, especially in the embedded world (in the non-embedded world, you still have a few cross-platform options, such as Mono). And this is not because people are afraid of managed languages in that arena... Were you aware that the software that runs on most SIM cards was written in Java?
As for C++, it is commonplace even on obscure architectures. It's used for everything from the fuel injectors in your car to the Large Hadron Collider.
zeusk wrote: although one could argue my C++ code sucks as I am no expert with it

It's not your fault. Very few people actually are experts, because C++ is an overly complex beast. I was being unbiased above when defending C++; please do not interpret the points I made as affection towards it. I think it is a plague.
zeusk wrote:
    Love4Boobies wrote: I think it is mainly useful for embedded programming on resource-scarce platforms, as no better alternatives exist there.
Yes, it certainly is. But I don't get your argument of strictly confining it to resource-scarce platforms. Just because you have a few hundred thousand extra cycles, you'll use something that isn't efficient and hence waste energy + time?

I believe I covered this above, but the short answer is: "usually, yes!" since, as I've said, things only need to be good enough. Not only that, but even with C implementations you don't escape this, since they make the same trade-off compared to hand-optimized assembly. Furthermore, the waste is more than reasonable; we're not talking about orders of magnitude. Where do you draw the line? Simple: focus on the development process and, implicitly, on reliability; only care about micro-optimizing when you must. I know I've pretty much repeated what I said several paragraphs above, but I want to stress that managing complexity is a software engineer's greatest responsibility.
zeusk wrote: Although the argument hasn't been raised here, the advantage of standard libraries in such managed languages and C++ doesn't matter at all; if done perfectly, you can have a well-written library in C too (i.e., dlmalloc).

Well, the most important thing standard libraries do is provide functionality that either describes the underlying platform or uses the underlying platform in some non-portable way. A regular library can surely do all this but, since it is not standard, it is not guaranteed to exist on all platforms where the language is implemented. Things like random number generators, sorting routines, etc. are only there for the sake of convenience.
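To make that concrete, here is a toy sketch (my own, not how any real C library is written) of the kind of thing a standard facility does: the program only ever sees one portable interface, and each implementation hides the non-portable part underneath it.

Code:

/* Portable interface: this is all the calling program ever sees. */
long file_size(const char *path);

#if defined(_WIN32)

#include <windows.h>

long file_size(const char *path)
{
    /* Non-portable: Win32 API. */
    HANDLE h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return -1;
    LARGE_INTEGER size;
    long result = GetFileSizeEx(h, &size) ? (long)size.QuadPart : -1;
    CloseHandle(h);
    return result;
}

#else

#include <sys/stat.h>

long file_size(const char *path)
{
    /* Non-portable: POSIX system call. */
    struct stat st;
    return (stat(path, &st) == 0) ? (long)st.st_size : -1;
}

#endif

A third-party library like dlmalloc can of course pull the same trick; the difference is only that the standard library is guaranteed to be there wherever the language is implemented.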