Hi,
mrstobbe wrote:Brendan wrote:mrstobbe wrote:The keep alive process thing... why couldn't that be a wrapped launcher for the process (like mysqld_safe?)? It would be sleeping until the process ends and never context switch. When the process ends, it could easily figure out why and react accordingly.
If one process starts a second process and waits until the child process terminates, then that's 2 processes (where only one is given CPU time, but both share memory, have file handles, etc) and not a single process. Of course if you're planning to have drivers running in their own virtual address spaces (as processes) it's not really single process anyway; and you're effectively doing "multiple processes and multi-tasking, with different obscure limits on what different types of processes can do".
Pure semantics... if one process starts another but can't execute (in terms of the processor... can't see the light of day again) until the other one exits, it's still a mono-process system. Again, pure semantics.
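For reference, the "wrapped launcher" being described is essentially a fork/exec/wait loop, which is exactly what mysqld_safe is. A minimal sketch, assuming Python on a Unix-like system (the supervised command is a placeholder):

```python
import subprocess
import sys
import time

def keep_alive(cmd):
    """Supervise cmd the way mysqld_safe does: launch it, then block in
    wait() (consuming no CPU) until it exits, and restart it on failure.
    Note that two processes exist while the child runs; the parent is
    merely asleep, not gone."""
    while True:
        child = subprocess.Popen(cmd)
        status = child.wait()          # parent sleeps here, no busy-waiting
        if status == 0:
            return 0                   # clean shutdown: stop supervising
        print(f"child exited with status {status}; restarting",
              file=sys.stderr)
        time.sleep(1)                  # brief back-off before restarting
```

The point of contention is visible right in the sketch: between Popen() and wait() there are two processes, even though only one of them is ever runnable.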
If an elephant has ears, legs and a heartbeat, and you temporarily pause its heartbeat, does the elephant cease to exist? Of course not.
If a process consists of a virtual address space, one or more threads, and any number of other things (file handles, signal handlers, etc); and you pause its threads, does the process cease to exist?
You can call it pure semantics if you like; as long as you understand that the semantics you've been using to describe your ideas are extremely confusing, because you've mangled the terminology everyone else uses.
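The elephant question is directly observable on any POSIX system: stopping every thread in a process leaves the process fully intact. A sketch, assuming Python on a Unix-like host:

```python
import os
import signal
import subprocess
import sys
import time

# Start a child process whose only "heartbeat" is a sleeping thread.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])

os.kill(child.pid, signal.SIGSTOP)    # pause every thread in the process
time.sleep(0.1)
still_exists = child.poll() is None   # True: the process was not destroyed
# Its PID, virtual address space and file handles all survive the pause.

os.kill(child.pid, signal.SIGCONT)    # resume the threads
child.terminate()
child.wait()
```

While stopped, the child is invisible to the scheduler but still fully accounted for by the kernel; "not runnable" and "does not exist" are different things.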
mrstobbe wrote:Brendan wrote:mrstobbe wrote:Hmmm... that could be a problem. Need to think about that more. The whole IRQ management part of this is still fuzzy in my head and I'd like to have a firmer grasp on it by the time the basics are done (I'm still working on simple/generic things like paging and clock management and the like).
I'd assume that you wanted to keep IRQs away from other CPUs (and concentrate them on a single CPU) so that you could do some hard real-time thing on those other CPUs (even though SMM and power management on 80x86 will screw that up more than IRQ handling will). Because of the "single application" nature of the OS, everyone is just going to run it inside virtual machines to avoid wasting hardware resources, and their virtual machine is going to completely destroy any hard real-time thing you attempt.
You're probably right about the point about IRQs. I'll focus on trying to figure out the best way for CPU1..n to handle interrupts. I still don't think you're right that server environments expect their servers to be general purpose, but we'll agree to disagree on that point (I think we understand each other).
I didn't say that I expect servers to be general purpose; only that you can have special purpose servers without pointlessly crippling a kernel.
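Note that concentrating IRQs on a single CPU doesn't even need a special-purpose kernel; mainstream kernels already expose it. On Linux, for example, each IRQ has a per-CPU affinity bitmask (the IRQ number and the Linux-specific path below are assumptions about the deployment; writing it needs root):

```python
def irq_cpu_mask(cpus):
    """Hex bitmask in the format accepted by Linux's
    /proc/irq/<n>/smp_affinity; bit i set means CPU i may service
    the interrupt."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")

# Confining a hypothetical IRQ 19 to CPU 0 only:
# with open("/proc/irq/19/smp_affinity", "w") as f:
#     f.write(irq_cpu_mask([0]))
```

So "all device interrupts land on CPU 0, CPUs 1..n never take them" is a configuration choice on an ordinary SMP kernel, not something that requires a new kernel design.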
mrstobbe wrote:The virtualization point is a great one actually because you're essentially already in an SMP and the last thing you need is another SMP under that SMP when all you want to do is something specific. I mean... why add more context switching and IPC stuff in a virtualized environment when you just want to do something specific? Another good use-case for this.
The problem with running multiple operating systems under virtual machines on the same physical hardware is that those OSs don't/can't cooperate to ensure that the most important work is done before less important work. To allow more important work to be done before less important work, it's far better to run both applications under the same OS. This has the additional benefit of avoiding the overhead of virtualisation.
For a simple example, imagine running an FTP server and an HTTP server on a physical machine; where the HTTP server is more important (response times) and the FTP server is less important (as FTP has always tolerated "varying upload/download speed" well). If you run both in their own little "single application" OS, and put both of the OSs inside virtual machines on the same physical machine; then you're screwed - they will share CPU and network bandwidth equally (even though it is less important, the FTP server is allowed to take CPU time and network bandwidth away from the HTTP server).
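Under a single OS that prioritisation is trivial to express, e.g. with POSIX nice values. A sketch, assuming Python on a Unix-like system (the two server commands are placeholders):

```python
import os
import subprocess

def launch(cmd, niceness):
    """Start cmd with the given nice value. A higher value means a lower
    scheduling priority, so the process only gets CPU time that more
    important processes don't want."""
    return subprocess.Popen(cmd, preexec_fn=lambda: os.nice(niceness))

# Hypothetical servers, to show the relative priorities:
# http = launch(["./http_server"], 0)    # normal priority
# ftp  = launch(["./ftp_server"], 10)    # only runs when HTTP is idle
```

Two "single application" OSs in separate virtual machines have no equivalent of this: the hypervisor sees two opaque guests and has no idea that one guest's work matters more than the other's.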
mrstobbe wrote:Well, definitely disagree. What you just described is (from experience) nowhere near the reality in a more complex infrastructure. You'd obviously want to do "utility" tasks on general purpose OSs. I mean, we have quite a few servers but only a couple are dedicated to "general purpose" utility tasks, the rest: static HTTP this, or dynamic HTTP that, or DB this, or load-balance that, or distributed filesystem this, or what-have-you. Everyone else seems to operate the exact same way (and for good reason). You think Google or Facebook operates differently than that? But, again, we'll just have to agree to disagree.
Do you honestly think that Google or Facebook use "single application only" OSs? Why would they bother, when any "as many applications as you like" OS is perfectly capable of running only one application?
Maybe you should design a car that can only do left-hand turns. I'm sure there are plenty of people who only ever turn left; and these people won't care that cars which have always been able to turn both left and right offer the same performance.
Cheers,
Brendan