Brendan wrote:
Previous versions of my OS didn't use bytecode. For these, the executable file format included information about the CPU it was designed for (CPU type and a 256-bit bitfield of required/used CPU features) plus some other information (how much processing, memory and communication the executable does); where all this information (plus load statistics, etc) is used by the OS to determine where the executable should be run.

If there's CPU type and features information, then the OS should refuse to run the application on a system with a lesser feature set. That breaks backward compatibility. It's bad for users to find an application useless on a notebook a few years older than the workstation where the application was built.
Brendan wrote:
My plan is to create enough of the OS so that a native IDE and toolchain (and AOT compiler) can be implemented on top of it; and then port one piece at a time into my own language (with my own tools); so that eventually the current version will become something that does use bytecode and AOT.

Starting with a native IDE and toolchain is the harder route. Existing IDEs are feature-rich, and your native IDE could take years to match those features. The time spent working with a featureless IDE could be saved by using an existing feature-rich one.
Brendan wrote:
For the final system; the programmer will still provide some information (how much processing, memory and communication the executable does) that's used by the OS to determine where an executable should run;

Have you decided on the form of such information? Does your new language support annotations? Or is it something else? I suggest something like a database with flexible ways of using its information, like Prolog's facts database. Such a [meta-]database could be pervasive for the entire OS.
Brendan wrote:
but the "CPU type and features" information will be generated by the byte-code to native compiler, and the OS will cache multiple versions of the final/native executable (one for each different "CPU type and features").

Because the generated code most probably won't be used on another computer, there's no value in storing the executable file together with its "type and features" information. I suggest it is better to use a centralized database with all the meta-information about all the installed software (precompiled or not); then there's no need for tricky structures paired with the executable, just pieces of code referred to from the database.
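To make the "centralized database" idea concrete, here is a minimal sketch: one store holds the programmer-supplied hints for every installed executable, plus the cached native variants keyed by (CPU type, feature bitfield). All names and shapes here are my own invention for illustration, not Brendan's design:

```python
class SoftwareDB:
    """Hypothetical system-wide store of software meta-information."""

    def __init__(self):
        self.meta = {}      # exe_id -> hints (processing/memory/communication)
        self.variants = {}  # exe_id -> {(cpu_type, feature_bits): native_code}

    def register(self, exe_id, hints):
        self.meta[exe_id] = hints
        self.variants.setdefault(exe_id, {})

    def cache_variant(self, exe_id, cpu_type, feature_bits, native_code):
        self.variants[exe_id][(cpu_type, feature_bits)] = native_code

    def lookup_variant(self, exe_id, cpu_type, feature_bits):
        # Exact match only; "close enough" policies would sit on top of this.
        return self.variants.get(exe_id, {}).get((cpu_type, feature_bits))
```

The point of the design is that the executable file itself stays plain; everything the OS needs to place and pick a variant lives in one queryable place.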
Brendan wrote:
When a process is being started the OS will use the processing/memory/communication information (from the byte-code) to determine the best place for the executable to be run; but then it will examine the cache of the final/native executable versions. Mostly:
- If there's already final/native executable that suits where the OS wants to run the executable; use that. This will be the most common case.
- Otherwise; determine if there's something "close enough" to use for now (either a final/native executable that's close enough for the best place for the executable to be run; or a final/native executable for a different "less best" place for the executable to be run); and:
- If there is something "close enough" to use; then use it (to avoid delays caused by byte-code to native compiler) but also compile the byte-code to native to suit the "CPU type and features" in the background so that it exists next time the executable is executed.
- If there isn't anything "close enough" (either the delays caused by byte-code to native compiler can't be avoided or the delays are preferable to slower run-time and/or worse load balancing); compile the byte-code to native to suit the "CPU type and features" and then use that.

Well, I see here the need for a JIT. Isn't it there? The environment should manage code for every possible situation, and caching a piece of code for every situation seems too inefficient when there's a simple way of creating the code on demand.
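The three cases Brendan describes could be sketched roughly as follows; the cache shape, the key type and the function names are assumptions of mine, not his design:

```python
def pick_executable(cache, best_key, close_enough_keys, compile_fn):
    """Return (native_code, background_compile_needed).

    cache: dict mapping a variant key (e.g. a "CPU type and features"
    tuple) to already-compiled native code.
    close_enough_keys: ordered fallback keys acceptable "for now".
    compile_fn: the byte-code to native compiler, called with a key.
    """
    if best_key in cache:                 # case 1: exact hit, most common
        return cache[best_key], False
    for key in close_enough_keys:         # case 2: usable fallback now,
        if key in cache:                  # recompile in the background
            return cache[key], True
    native = compile_fn(best_key)         # case 3: nothing usable,
    cache[best_key] = native              # compile before starting
    return native, False
```

The second return value is the signal to kick off the background compile so the exact variant exists the next time the executable runs.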
It is also worth considering situations like preemption: a more important task takes the place of the current task on the preferred CPU, and the current task has to run on a CPU with a lesser feature set despite the programmer's/user's annotations. So the JIT recompiles it and the scheduler starts it on the weaker processor. This is already relevant for ARM processors with heterogeneous cores and may become relevant for Intel processors in the future.
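The migration case can be made concrete with a small sketch; the task/cache shapes and the `jit_compile` callback are hypothetical names of mine:

```python
def on_migrate(task, new_core_feature_bits, cache, jit_compile):
    """When the scheduler moves a task to a core with a smaller feature
    set, the current native variant may use instructions the new core
    lacks; fetch (or JIT on demand) a variant that fits the new core."""
    key = (task["cpu_type"], new_core_feature_bits)
    if key not in cache:
        # No variant for the weaker core yet: recompile from byte-code.
        cache[key] = jit_compile(task["bytecode"], new_core_feature_bits)
    task["native"] = cache[key]
    return task["native"]
```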
And what is "close enough"? If the code contains an AVX instruction then there must be AVX support. No fuzziness.
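That exactness boils down to a subset test on the feature bitfield: a variant is usable only if every feature it requires is present on the target CPU. A minimal sketch with invented flag positions (a real system would use the full 256-bit bitfield):

```python
# Invented flag positions, for illustration only.
FEAT_SSE2 = 1 << 0
FEAT_AVX  = 1 << 1
FEAT_AVX2 = 1 << 2

def can_run(required_bits, cpu_bits):
    # Usable only if the required features are a subset of the CPU's:
    # no bit may be required that the CPU doesn't also have.
    return (required_bits & ~cpu_bits) == 0
```

There is no fuzziness in this test itself; "close enough" can only mean choosing among variants that all pass it (e.g. one that uses fewer optional features than the CPU offers).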
Brendan wrote:
For my system an application is multiple cooperating processes. All of the stuff above (to determine where an executable/process should be run) only cares about processes, and not applications, and not threads. For a multi-threaded process, each of the process' threads use CPUs that match the process' requirements.

So the communication facilities are abstracted down to the very basic level of the hardware? If there are different cores and they need to communicate, then caching issues can impact performance, so the communication layer can be implemented either with or without care for the caches. But if that care is present, the number of possible combinations looks a bit too large to implement.
Brendan wrote:
You're under-estimating what would be required to compete favourably against existing OSs. If people start switching from existing OSs to your OS because your OS looks better; then within 6 months the existing OSs will improve their appearance and you'll be screwed.

Yes, when you have no millions for advertising it's hard to attract people. But maybe it's a good idea to show your OS to some venture capitalists and ask for millions for advertising? That's how the FaceGoogles were grown.
Brendan wrote:
Immediately after switching to your OS users can switch back to whatever existing OS they were using before (there's almost no "switching costs" because they're still already familiar with that previous OS).

The switching cost here is the applications that are unavailable on the old platform. So there are still some ways forward.
Brendan wrote:
I'd estimate that it'd take a few years before the "switching costs" starts to work in your favour, and people using your OS don't want to switch to another OS

I have browser-switching experience, and I should tell you the final decision was made because of just one feature that is important to me. Monitoring competing browsers for the availability of that feature is tedious, so I still use Firefox while (maybe) there are better solutions available now.
Brendan wrote:
Basically; your OS needs to be significantly better (not just better), then has to remain better (not necessarily significantly better) for multiple years (while other OSs are improving and trying to catch up). Ideally; you'd want multiple reasons why your OS is better and not just one, where at least some of those reasons are extremely hard (time consuming) for existing OSs to copy/adopt (or are impossible for existing OSs to copy/adopt - e.g. patents).

There's a feature set a user looks for. A feature set can be rich but useless for a particular user, or narrow but very useful for that user. Together, these feature sets form a universe of user preferences. Popular OSs target the densest part of that universe; your OS can target some sparse parts. Being the best in the dense part is too hard. Being the only one in a sparse part is absolutely possible.
Basically, there are ways forward, and your choice is hard. But somebody on earth just has to try such a way. That's how the trial and error method works. And it really does work, for things like evolution.