Hi:
rdos wrote:
Combuster wrote: I couldn't help but notice the amount of really fundamental bugs fixed in the latest OpenWatcom. I wouldn't trust that compiler with my life - it's undermaintained and not thoroughly tested. But yea, it exists.
Yes, but that is not a valid argument. If people want a good segmented compiler, they need to fix these issues. It is an open source project, so these things will not get fixed by themselves.
The good thing is that OpenWatcom once supported OS/2's segmented memory models, which means this code should be fairly bug-free - at least compared to using any other compiler, or writing your own.
But I agree that there are some issues, even in the flat memory model. I have some problems with memory corruption, but I'm not sure if it is related to the compiler, the heap manager, or RDOS not saving registers it should save.
There is also an issue with the long double type not being properly implemented.
Once upon a time there were large computers with custom builds, usually designed to perform one fixed, known set of functions. Their custom hardware made it feasible, and probably easy, to write a kernel specifically for that hardware. The fixed functional requirements also made it easy and, at the time, feasible to write non-portable software: the software was custom made for that specific purpose, and would probably only ever need to run on that hardware. So writing an application in an HLL with a bit of "asm volatile()" (or even writing applications wholesale in assembly) made sense, and in such a case, using segmentation, for example by placing ASM statements in the software, would certainly make sense too.
Then came general purpose computers, and the need for a single computer to run many different pieces of software with different purposes. Portable languages became even more important, and their portability more profoundly needed. At this point kernels became increasingly burdened with the responsibility of abstracting hardware specifics away from applications, so applications used highly hardware-specific features less and less, and relied more on kernel APIs.
Next came the age of the portable kernel, which would run on multiple platforms. Hardware became increasingly similar in design, and even began to be designed with the available operating systems' features in mind, rather than custom kernels being written around the features of their target hardware. Now hardware manufacturers keep software in mind, and kernel writers optimize for hardware: a very nice circle of mutual consideration, and so most modern platforms are at least similar enough to lend themselves to portable design.
And so out the window goes segmentation, etc etc - all the things that hardware manufacturers realized nobody uses, and that kernel developers realized no other architectures implement. My point?
Portable design is not bad simply because it does not take advantage of features that are to be phased out due to hardware and kernel developers' joint, unstated agreement that it's not useful to support.
Good portable design will always take full advantage of all the useful features available on a hardware platform, and sacrifice nothing to the goal of achieving portability. In fact, the aim of portable design is to ensure that every useful feature is fully exploited on every platform that supports it, while either providing basic support on platforms that don't, or finding a way to handle the no-support case without over-compromising and without excessive special-case code paths.
Portable design is good, yo
--Nice topic
gravaera