Brendan wrote:I'm thinking of an unmanaged language combined with a managed environment for developers to use for debugging, like C and valgrind (but not C or valgrind); and I'm not thinking of a managed language (and managed environment) that wraps programmers in bubble wrap and prevents them from doing their job properly.
Well, here you refuse to accept "bubble wrap" because it "prevents them from doing their job properly". But consider the following:
Brendan wrote:If DEFLATE was implemented as a service; then the cost of my messaging is around 1000 cycles per message (for a max. message size of 2 MiB; assuming both sender and receiver are on the same computer). To send 1 GiB of data to a "DEFLATE service" and receive 2 GiB of data back (assuming a 2:1 compression ratio) it's going to need to be split into 1536 messages and cost about 1536000 cycles for all sending and receiving. On an ancient 25 MHz 80486 (the oldest/slowest/worst CPU I intend to support) that works out to about 62 ms. Compared to the cost of compressing/decompressing the data, the overhead of messaging is almost entirely irrelevant. If it happened regularly nobody would care.
You have accepted another kind of "bubble wrap", but now it is called "services", with message-passing overhead. You are eager to argue that in some cases the overhead is acceptable, yet you refuse to listen to arguments about another "bubble wrap" and its equally acceptable overhead. So I conclude that it is just your religion that prevents you from accepting the same thing when it is introduced by somebody other than you. But hopefully the religious canons can be relaxed in the future, and you will finally accept the managed approach as efficient enough that its really useful features compensate for any involved overhead.
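Brendan's overhead arithmetic can be checked with a small sketch. The 1000-cycles-per-message cost, the 2 MiB message limit, and the 25 MHz CPU are his stated assumptions, not measured values:

```python
# Back-of-the-envelope check of the quoted messaging overhead,
# using the figures from the quote above.

MiB = 1024 * 1024
GiB = 1024 * MiB

cycles_per_message = 1000
max_message_size = 2 * MiB

data_sent = 1 * GiB        # uncompressed input to the "DEFLATE service"
data_received = 2 * GiB    # output back, assuming a 2:1 compression ratio

messages = (data_sent + data_received) // max_message_size
total_cycles = messages * cycles_per_message
time_ms = total_cycles / 25_000_000 * 1000  # 25 MHz 80486

print(messages)           # 1536 messages
print(total_cycles)       # 1536000 cycles
print(round(time_ms, 1))  # 61.4 ms, i.e. "about 62 ms"
```

The numbers do come out as quoted, so the disagreement is not about the arithmetic but about which overheads each side is willing to accept.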
Brendan wrote:How do you convince end users to switch from existing OSs (that are more mature - faster, less buggy and with more drivers) to your OS, when the applications run on both OSs? "Same but worse" is unlikely to be a good marketing strategy.
It's more efficient. And its users (and developers) are more productive.
Brendan wrote:embryo wrote:Brendan wrote:I also want a language where unit testing is an inbuilt feature; and an IDE that automatically runs the unit tests in the background and highlights things that failed their unit tests while you type. You probably haven't thought about this, but I think it's important to detect all the bugs that can't be detected automatically (with or without run-time tests).
Here I miss the point of unit tests. Who will write them? If it is a developer, then such an "inbuilt" feature works just like an ordinary unit-test runner program. I mean, I see no difference between the "inbuilt" and the traditional approaches.
The most obvious benefit is integration - e.g. having mistakes highlighted for you while you're working on the source code in the IDE (and not having additional hassle, or steps to accidentally forget, or having to manually translate the unit test results into a location in the source code).
The less obvious benefit is that it provides "default scenarios" for debugging. For example, maybe there's a problem with an "isNumberPrime()" function buried deep within the program, and instead of running the entire program in the debugger you can just run an existing unit test in the debugger without all the unnecessary baggage.
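A minimal sketch of the "default debugging scenario" idea above. `is_number_prime` is a hypothetical stand-in for Brendan's `isNumberPrime()`, and Python is used purely for illustration:

```python
def is_number_prime(n):
    """Trial-division primality check (the function buried deep in the program)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def test_is_number_prime():
    """The kind of unit test an IDE could re-run in the background on every
    edit, highlighting the function in the editor if any assertion fails."""
    assert not is_number_prime(0)
    assert not is_number_prime(1)
    assert is_number_prime(2)
    assert is_number_prime(13)
    assert not is_number_prime(15)

test_is_number_prime()  # a background runner would invoke this automatically
```

Running just this test exercises the buried function directly, without starting the whole program in a debugger.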
The approach (as you describe it) was implemented long ago via various plugins for different IDEs. But it was found inconvenient to run unit tests concurrently with writing code: it slows the IDE down and constantly distracts the developer, because the code is simply not ready to be unit tested yet. So a "run unit tests" button was chosen as the final solution.
Brendan wrote:but I'm using asynchronous messaging where the service may be running on a completely different computer and there are some other things that may come into it.
Maybe such an approach can bring some benefits. Its kin, in the form of web services, is accepted by many developers and is still in active development. But the overhead issue is still here, and another group of developers still refuses to accept such an approach as efficient. It seems we have more religion here than actually viable statistics or similar numbers, so it is still an open issue (just like managed vs. unmanaged for you and me).
Brendan wrote:How does the managed environment "manage the low level parts" (e.g. prevent intentionally malicious assembly language code from doing unsafe things)?
It doesn't prevent all forms of malicious behavior. But it manages such code in the following way: it detects such code fragments; next it notifies the user that the provided application will be run in dangerous mode; next it accepts the user's decision; next it compiles the managed and unmanaged parts of the program; next it runs the resulting program under hardware protection (which prevents some malicious activity from being possible). All of the specified actions are acts of management, so we can see the management is still in place.
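The sequence of management steps described above might be sketched as follows. Every name here is hypothetical, invented only to make the detect/notify/decide/compile/run sequence concrete:

```python
# Hypothetical sketch of the described workflow: detect unmanaged code,
# warn the user, then compile and run under hardware protection.
# All function names and the detection rule are invented for illustration.

def contains_unmanaged_code(source):
    # Stand-in detector: treat any inline-assembly marker as unmanaged code.
    return "asm {" in source

def compile_managed_and_unmanaged(source):
    pass  # placeholder for the real compiler

def run_application(source, user_accepts_risk):
    if contains_unmanaged_code(source):
        # Steps 1-3: detect the fragment, notify the user, accept the decision.
        print("Warning: application contains unmanaged code (dangerous mode).")
        if not user_accepts_risk:
            return "refused"
        # Steps 4-5: compile both parts, run under hardware protection.
        compile_managed_and_unmanaged(source)
        return "run under hardware protection"
    compile_managed_and_unmanaged(source)
    return "run fully managed"

print(run_application("asm { cli }", user_accepts_risk=True))
```

The point of the sketch is that "managed" here means controlling how dangerous code is admitted and contained, not forbidding it outright.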
Brendan wrote:I think you'll find that, of all the security vulnerabilities, most would've been prevented by a better unmanaged language, and the remainder would not have been prevented by a managed language.
Oh, Brendan, please, just try to think a bit outside the box; just let yourself see that there are many ways to implement different things, and there is no absolute winner, because the requirements always differ too. It is your personal assessment of the weight of those requirements that differs from another person's assessment. And if you remember that assessments are mostly based on subtle internal human argumentation, then it should be obvious to you that we are mostly discussing the religious issue of some assessments being preferred over others.