I think using an emulator as an example was an unfortunate choice - I've got a few strange plans involving devices and emulation, and this is probably going to get complex (and off topic) very quickly...

Kevin wrote:
> Well, convert each supported device and backend into a separate process, and then your process diagram is overloaded.
> A lot of these units don't really exist in any running process anyway; they are mostly alternatives. For your disk backend, you usually have either a raw image, or a qcow2 image, or a VMDK image, or an NBD connection, etc., but you rarely use all of them at the same time. Similarly, you have either PC hardware or some ARM board or an old PPC Mac platform, but you never need all the devices at the same time.
> The other thing is that, for performance reasons, you'll likely want to have the device implementation in the same thread as the backend (I assume that for the purposes of this IDE, thread and process are mostly equivalent). So a running instance will have an IDE device and a qcow2 backend in one process, and a SCSI device and a raw image in a second process. You actually have a lot of processes/threads involved, but from the perspective of this IDE they are just one type of process, which would contain all block backends and all block device emulations as its units, right?
Imagine you've got a real PCI device, and a real device driver running on the OS that happens to export the "emulated PCI device" messaging protocol, so that any/all emulators can take advantage of the real PCI device (by using that protocol to talk to the real device driver).
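For concreteness, the messages for that protocol might look something like this. It's a minimal sketch in C; the names, message types and field layout are all my assumptions rather than an existing specification:

```c
#include <stdint.h>

/* Hypothetical message types exchanged between whatever is using the
 * device (an emulator, or the kernel's trap forwarder) and whatever
 * implements it (an emulated device, or a real driver). */
enum epci_msg_type {
    EPCI_READ_CONFIG,    /* read PCI configuration space */
    EPCI_WRITE_CONFIG,   /* write PCI configuration space */
    EPCI_READ_MMIO,      /* read memory mapped IO */
    EPCI_WRITE_MMIO,     /* write memory mapped IO */
    EPCI_READ_PORT,      /* read an IO port (x86) */
    EPCI_WRITE_PORT,     /* write an IO port (x86) */
    EPCI_DMA_READ,       /* bus-master read from guest memory */
    EPCI_DMA_WRITE,      /* bus-master write to guest memory */
    EPCI_RAISE_IRQ,      /* device to driver: virtual IRQ asserted */
};

/* One request or reply. Everything is passed by value because the
 * other end may be a different process on a different computer; the
 * addresses are BAR-relative offsets, port numbers, or the virtual
 * machine's "virtual physical addresses" for DMA. */
struct epci_msg {
    uint32_t type;       /* one of epci_msg_type */
    uint32_t size;       /* access size in bytes: 1, 2, 4 or 8 */
    uint64_t addr;       /* offset, port number, or DMA address */
    uint64_t value;      /* data for writes, result in read replies */
};
```

The point of keeping the messages this dumb is that neither side needs to know whether the other side is real or emulated, local or remote.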
Now let's assume this is a distributed system. We can have a single virtual machine that happens to be using a mixture of real and emulated devices on several different/remote computers without knowing it. For example, we could have a LAN of 20 computers where each computer has 2 separate real video cards (and a real SATA controller and 3 real USB controllers); with an OS (e.g. Windows or Linux) running inside a virtual machine that's able to use all 40 real video cards (and all 20 SATA controllers and all 60 USB controllers).
Also...
By using a complex arrangement of exception handler abuse, it's possible for a micro-kernel to trick a real device driver, such that the device driver thinks it's talking to real hardware but each time it accesses memory mapped IO or IO ports the kernel is actually trapping these accesses and silently forwarding them to a normal process that implements the "emulated PCI device" messaging protocol (and for any "virtual IRQs" sent from that normal process, the kernel delivers the IRQ to the real device driver and pretends it was a real IRQ). In this way, a real kernel can have real device drivers that happen to be using emulated hardware.
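The kernel-side trick might look roughly like this, reusing the hypothetical epci_msg from the sketch above. Every helper here (find_emulated_region(), decode_mmio_access(), and so on) is an assumption about what the kernel would provide, not something from a real kernel:

```c
#include <stdint.h>
#include <stddef.h>

struct trap_frame;                        /* saved CPU state at the fault */

struct emulated_region {
    uint64_t base;                        /* start of the fake BAR mapping */
    int      owner;                       /* process implementing the device */
};

extern struct emulated_region *find_emulated_region(uint64_t addr);
extern void handle_normal_page_fault(struct trap_frame *f, uint64_t addr);
extern void decode_mmio_access(struct trap_frame *f, uint64_t addr,
                               struct epci_msg *out);
extern void send_to_process(int pid, const struct epci_msg *msg);
extern uint64_t wait_for_reply(int pid);
extern void frame_set_result(struct trap_frame *f, uint64_t value);
extern void skip_faulting_instruction(struct trap_frame *f);

/* The fake BAR is mapped "not present", so every access by the real
 * device driver causes a page fault and ends up here. */
void page_fault_handler(struct trap_frame *frame, uint64_t fault_addr)
{
    struct emulated_region *r = find_emulated_region(fault_addr);
    if (r == NULL) {
        handle_normal_page_fault(frame, fault_addr);   /* not ours */
        return;
    }

    /* Decode the faulting instruction: read or write, access size,
     * and (for writes) the value being written. */
    struct epci_msg msg;
    decode_mmio_access(frame, fault_addr, &msg);
    msg.addr = fault_addr - r->base;      /* make it BAR-relative */

    /* Silently forward the access to the normal process that
     * implements the "emulated PCI device" messaging protocol... */
    send_to_process(r->owner, &msg);

    /* ...and for reads, stall the driver until the reply arrives and
     * patch the result into its destination register. */
    if (msg.type == EPCI_READ_MMIO)
        frame_set_result(frame, wait_for_reply(r->owner));

    skip_faulting_instruction(frame);     /* resume after the access */
}
```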
Of course it's still a distributed system. For example, you can have one computer running 20 processes that emulate 20 different video cards; and then have 20 more computers on that LAN that are running 20 real device drivers (but talking to the emulated video cards on the first machine).
Now, let's imagine you're writing a device driver for an Nvidia video card. You start by implementing a "dummy driver" that doesn't actually do much more than support the "emulated PCI device" messaging protocol. This is relatively easy (it's mostly "pass through") - e.g. if the driver is told (by the emulator) to emulate a write to a video card register, it just does the write to the real video card's register. It's only when bus mastering is involved that the dummy video driver has to do any actual emulation (e.g. do the bus mastering transfer using the physical address of a temporary buffer rather than "virtual physical addresses" from the virtual machine).
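The message handling in that dummy driver could be as simple as this sketch; mmio_base, log_access(), send_reply() and dma_via_bounce_buffer() are hypothetical names, and epci_msg is the structure assumed earlier:

```c
#include <stdint.h>

/* struct epci_msg as defined in the earlier sketch */

static volatile uint8_t *mmio_base;       /* real BAR, mapped by the driver */

extern void log_access(const struct epci_msg *msg);
extern void send_reply(int pid, const struct epci_msg *msg);
extern void dma_via_bounce_buffer(struct epci_msg *msg);

void handle_msg(int sender, struct epci_msg *msg)
{
    log_access(msg);                      /* every access gets logged */

    switch (msg->type) {
    case EPCI_WRITE_MMIO:
        /* "Emulating" a register write is just doing the write to the
         * real video card's register. */
        if (msg->size == 4)
            *(volatile uint32_t *)(mmio_base + msg->addr) = (uint32_t)msg->value;
        else if (msg->size == 1)
            *(mmio_base + msg->addr) = (uint8_t)msg->value;
        break;

    case EPCI_READ_MMIO:
        /* Same for reads; the result goes back in the reply. */
        if (msg->size == 4)
            msg->value = *(volatile uint32_t *)(mmio_base + msg->addr);
        send_reply(sender, msg);
        break;

    case EPCI_DMA_READ:
    case EPCI_DMA_WRITE:
        /* The only actual emulation: the card can't bus-master to the
         * VM's "virtual physical addresses", so the transfer goes
         * through a temporary buffer at a real physical address. */
        dma_via_bounce_buffer(msg);
        break;
    }
}
```

Note that every message passes through log_access() before it's handled; that one line is where the logging described below comes from.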
Once the "dummy driver" is done and working right, you'd be able to run an OS (e.g. Windows/Linux) inside a virtual machine, and that OS would be using your real "dummy driver" and the real video card. Of course you'd also implement a whole pile of logging in your "dummy driver", and end up with the most powerful reverse engineering tool you could hope for (e.g. log every single read and write to the video card that Windows' native video driver makes).
The next step would be to run a "testing" video driver that thinks it's using real hardware, but get the kernel to trap everything and send it to a copy of the previous "dummy driver" (which still does a pile of logging, and can reset/recover the device when your "testing" video driver crashes or does something silly). That way you get the most powerful device driver debugging environment you could hope for.
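The reset/recover half might be no more than this, assuming the kernel notifies the dummy driver when the "testing" driver's process dies; as before, every name is a hypothetical stand-in:

```c
extern void dump_recent_accesses(int count);
extern void reset_real_device(void);      /* e.g. PCI function-level reset */

void on_testing_driver_crashed(void)
{
    /* The access log is what makes the crash debuggable: it shows
     * exactly what the testing driver did to the card before dying. */
    dump_recent_accesses(64);

    /* Put the real card back into a known state so the next build of
     * the testing driver starts clean, without needing a reboot. */
    reset_real_device();
}
```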
Of course this is still a distributed system. You could have a computer for testing somewhere on the LAN that is running the "dummy driver" and the virtual machine and/or "testing" video driver; and use a different computer for viewing the logs, running the IDE, etc. That way, if something goes very wrong and the test computer crashes hard (e.g. blows chunks and triple faults), you just have to wait for it to reboot without having the computer you're actually using interrupted.
TL;DR: The way existing emulators are designed isn't really what I'd be aiming for; and (for my purposes) each "virtual device" would be a separate project that isn't really part of the emulator project at all.
Cheers,
Brendan