Brendan wrote:
If the user doesn't want the extra load caused by redundancy, then they wouldn't have enabled it in the first place.

Yes - so if your OS has all the required configuration capabilities, plus some means for a programmer to configure its actions at run time, then I see no problem with its unexpected/unwanted behavior.
Brendan wrote:
I'm only pointing out that people who think VMs actually help rely on flawed logic - they assume it's impossible for programmers to write software without bugs while at the same time they assume that programmers can write a VM without bugs.

Yes, bugs are possible even in the VM code, but the maturity of a system's basic components always makes them less buggy than you'd expect. The situation is just like with any other service software, from the OS to network stacks, databases, file systems and so on: the service layer adds real benefits to the system despite its possible bugs.
Brendan wrote:
The goal is to allow a collection of 1 or more computers to behave like a single system; for the purpose of maximising performance and/or minimising hardware costs. For a simple example, instead of having an office for 20 people with one computer per user plus a server (21 computers) where most computers are idle most of the time, you could have an office for 20 people with 4 people per computer and no server that's faster (due to resource sharing and effective utilisation of resources) and a lot cheaper (5 computers and not 21).

Part of your goal was already achieved by virtualization, where low-end (and cheap) computers are used as terminals and the actual work is done by one powerful server. But your case is more complex and supposes some splitting of the job among all available computers. Such splitting (even with a new programming paradigm) is a non-trivial problem.
Brendan wrote:
"Cloud" (paying for remote processing) is fine when you've got a massive temporary job. For a massive permanent job it's stupid (cheaper to buy the hardware yourself instead of paying to use someone else's and paying for internet costs);

Even for a massive permanent job it's cheaper. It was discussed before, but I can repeat it: economies of scale can decrease costs a lot.
Brendan wrote:
and for thousands of tiny separate little jobs (e.g. where the latency is far more important than the processing) it's stupid.

It depends on the latency. If it's measured in tens of seconds then yes, it's unacceptable. But if it's about one second or even less, it can be perfectly acceptable for many kinds of jobs.
Brendan wrote:
Of course even for the very rare cases where cloud might make sense in theory, you have to trust someone else with your data and (for sensitive data) it might not make sense in practice due to lack of trust.

Well, suppose you run a medium-sized enterprise and use the cloud of some big company. What do you expect the big company to do with your data? Sell it to your similarly sized competitors? But then your enterprise can buy the competitors' data in the same way they bought yours, and that alone is enough to start a successful court case. And if the big company sells the data to another big company, what use is the data to the buyer? The big players would need data covering the bulk of all companies, plus a very expensive department able to analyze the gathered data (without knowing its structure and purpose) - it looks like a whole CIA staff would have to work on such a task. Doesn't that look a bit overextended?
The corporate world has its own ways of getting the information it needs without Google or other cloud providers, so your trust concerns look a bit unreal.
Brendan wrote:
Finally, for the extremely rare cases where "cloud" isn't stupid; there's no reason why someone using my OS wouldn't be able to rent some servers on the internet, put my OS on those servers, and make those servers part of their cluster.

For your OS to be used as a cloud it should include many configuration tools that are easy to use and available over the net. But that's the easier part of your goal.
Brendan wrote:
For normal software (e.g. "oversized monolithic blob" applications using procedural programming) the cost of distributing it across multiple machines is too high (and quite frankly, most of the victims of the "oversized monolithic blob using procedural programming" approach are struggling just to use 2 or more CPUs in the same computer). To be effective, the "oversized monolithic blob using procedural programming" stupidity has to be discarded and replaced with something suitable, like "shared nothing entities that communicate". The "shared nothing entities that communicate" approach is already used for almost everything involving 2 or more computers.

The "communicating entities" approach boils down to the ability of some entity to extract independent subtasks from a bigger task. Such parallelization efforts have yet to deliver viable results, so you are going to have to outperform the whole academic community.
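For what it's worth, the shape of "shared nothing entities that communicate" is easy to sketch in miniature. Below is a hedged illustration in Go (the language and all names are my own choices, not anything from Brendan's OS): each entity owns its state privately and interacts only through messages, which is the property that makes distribution plausible. Carving real programs into such independent entities remains the hard part.

[code]
package main

import "fmt"

// Message is the only thing entities exchange; no memory is shared.
type Message struct {
	Payload int
	Reply   chan int // channel carrying the entity's response
}

// worker is a "shared nothing" entity: its running total is private
// state that no other goroutine (or machine) can touch directly.
func worker(inbox <-chan Message) {
	total := 0
	for msg := range inbox {
		total += msg.Payload
		msg.Reply <- total
	}
}

func main() {
	inbox := make(chan Message)
	go worker(inbox)

	// Interact with the entity purely by message passing.
	for i := 1; i <= 3; i++ {
		reply := make(chan int)
		inbox <- Message{Payload: i, Reply: reply}
		fmt.Println("running total:", <-reply)
	}
	close(inbox)
}
[/code]

In principle the same pattern survives if the channel is replaced by a socket between machines, which is why message passing is the natural fit for clusters.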
Brendan wrote:
Fred is using an application. That application communicates with Fred's GUI (sending video and sound to the GUI, and receiving keyboard, mouse from the GUI). Fred tells his GUI to send the application to Jane's GUI. Now the application communicates with Jane's GUI. No process was stopped or started, nothing was written to disk.

Or the application just serializes its current state and sends it to Jane's instance of the application. No process was stopped or started, nothing was written to disk.
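A minimal sketch of that alternative, again in Go (the AppState fields and the bytes.Buffer standing in for a real network connection are illustrative assumptions, not anyone's actual design): Fred's side serializes the state with encoding/gob, Jane's side decodes it and carries on.

[code]
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"
)

// AppState is whatever the application needs in order to resume on
// Jane's side (hypothetical fields, purely for illustration).
type AppState struct {
	Document string
	Cursor   int
}

func main() {
	state := AppState{Document: "draft.txt", Cursor: 42}

	// Fred's side: serialize the current state.
	var wire bytes.Buffer // stands in for the connection to Jane
	if err := gob.NewEncoder(&wire).Encode(state); err != nil {
		log.Fatal(err)
	}

	// Jane's side: deserialize and continue where Fred left off.
	var received AppState
	if err := gob.NewDecoder(&wire).Decode(&received); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("resumed %s at cursor %d\n", received.Document, received.Cursor)
}
[/code]

Note the trade-off between the two versions: this one needs a second instance of the application running on Jane's side to receive the state, while Brendan's keeps one process alive and only redirects its GUI traffic.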
Brendan wrote:
The second type of teamwork is more like pair programming. Imagine a small group of developers that have headsets and are constantly talking/listening to each other and watching each other type and correcting each other's typos while they're typing. In this case team members do not work in isolation, but work on the same task at the same time. For programming; there are existing solutions for pair programming (including collaborative real-time editors), but they're not widespread. The goal would be to provide both types of teamwork in a way that allows people to use either type of teamwork whenever they want however they want (including "ad hoc" where no prior arrangements are made); and seamlessly integrate it all into the IDE so that it's a single consistent system rather than an ugly and inflexible collection of separate tools.

But that kind of collaborative team work is done using one computer, with one person typing and the other watching and suggesting improvements. In your case both people need a headset and a computer just to do the same job they used to do without the headsets and the additional computer.
Brendan wrote:
Why don't you think a maintenance tool that suits almost everyone is possible? Can you think of anything people might want in a maintenance tool that is impossible?

Everything is possible, but there's still no maintenance tool that suits almost everyone. One user prefers red while another prefers green, and satisfying both is at the very least costly. Much worse, many existing maintenance tools are integrated with other corporate software, so your task now includes integrating your tool with every possible piece of corporate software. Yes, it is possible, but you know how much time it would require.