Rusky wrote: So what is it exactly that's set in stone with no versioning, and what is just stuff on top of that and thus subject to the same problems you claim to solve?
What's set in stone is the virtual hardware.
iansjack wrote: So how do user programs know that the function exists so that they can use it? In what sense are the specifications set in stone if you can introduce new capabilities of the hardware?
The user applications and/or their libraries for the VM rely on the virtual drivers. These drivers are written for virtual hardware that never changes, but the drivers themselves can be updated. When the virtual hardware is simulated, this is Low-Level Emulation (LLE), and every driver is guaranteed to work exactly as it was written. Simulating hardware is slow, though; .NET and Java sidestep this issue by not using virtual hardware and simply letting the software run native code, which makes that software both hardware- and platform-dependent.
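To make the split concrete, here's a minimal sketch in C. The names (VirtGpuRegs, virtgpu_submit) and the register layout are invented purely for illustration, not part of any real design: the register layout stands in for the "set in stone" virtual hardware, while the routine that programs it is ordinary driver code inside the VM that can be replaced at any time.

Code:
#include <stdint.h>

/* Fixed forever: offsets and semantics of the virtual device's
 * registers never change, so drivers written against them never break. */
typedef struct VirtGpuRegs {
    uint32_t command;   /* write a command opcode here           */
    uint32_t arg0;      /* command arguments                     */
    uint32_t arg1;
    uint32_t status;    /* 0 = idle, nonzero = busy (read-only)  */
} VirtGpuRegs;

/* Updatable: an in-VM driver routine that only ever touches the
 * virtual registers above, never the physical hardware. */
static void virtgpu_submit(volatile VirtGpuRegs *regs,
                           uint32_t opcode, uint32_t a0, uint32_t a1)
{
    while (regs->status != 0)   /* wait until the virtual GPU is idle */
        ;
    regs->arg0 = a0;
    regs->arg1 = a1;
    regs->command = opcode;     /* kicks off the (emulated) operation */
}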
My solution is to not allow anything running inside the VM (including the virtual drivers, libraries, and applications) to run any native code or probe information about the physical hardware or platform in any way. They are blind, deaf, and numb to anything outside the VM. However, the VM can see that a driver called 'opengl.drv' is being used by a program or library. It checks whether it recognizes the driver by comparing either a checksum or a globally unique identifier. If it recognizes it, it iterates through a list of all the functions in the driver.
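As a rough illustration of that recognition step (HleHook, KnownDriver, and identify_driver are made-up names, a sketch rather than an actual implementation):

Code:
#include <stddef.h>
#include <stdint.h>

/* One HLE hook: a function name inside a known virtual driver, and the
 * host-side routine to call instead of emulating it. */
typedef struct HleHook {
    const char *func_name;          /* e.g. "glDrawArrays"           */
    void      (*host_impl)(void);   /* host driver entry point       */
} HleHook;

/* One recognized virtual driver, identified by checksum (or GUID). */
typedef struct KnownDriver {
    uint32_t       checksum;        /* checksum of e.g. 'opengl.drv' */
    const HleHook *hooks;
    size_t         hook_count;
} KnownDriver;

/* Called when the VM sees the guest load a driver: if the image's
 * checksum matches a known driver, return its hook table so the
 * recompiler can patch calls; otherwise fall back to pure LLE. */
static const KnownDriver *identify_driver(uint32_t image_checksum,
                                          const KnownDriver *table,
                                          size_t count)
{
    for (size_t i = 0; i < count; i++)
        if (table[i].checksum == image_checksum)
            return &table[i];
    return NULL;   /* unknown driver: emulate it faithfully (LLE) */
}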
Suppose it sees "glDrawArrays". It checks the list of hooks it has for 'opengl.drv' and notices it has a hook for that function. Next, it checks whether the physical hardware drivers support that hook. If so, then when it recompiles the code that executes this function (remember, this is a VM) it uses High-Level Emulation (HLE), substituting the call to 'glDrawArrays' with a call to the physical driver. The software running in the VM does not know this is happening. 'glDrawArrays' still looks the same in memory, is at the same location, has the same size, etc. It has no clue whatsoever that anything has changed. The only difference is that the function magically runs either faster or slower than normal.
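Continuing the sketch above (it reuses KnownDriver and HleHook from the previous snippet; host_supports, emit_host_call, and emit_guest_call are hypothetical helpers standing in for the host driver query and the recompiler's code emitter), the substitution decision might look like this. Note that only the recompiled output changes; guest memory is never touched, which is why the driver still looks identical from inside the VM.

Code:
#include <stdint.h>
#include <string.h>

/* Hypothetical helpers assumed to exist elsewhere in the VM: */
extern int  host_supports(const char *func_name);    /* can the physical driver do this? */
extern void emit_host_call(void (*host_impl)(void)); /* emit a call into the host driver */
extern void emit_guest_call(uint64_t guest_addr);    /* emit a call to the guest routine */

/* A guest call site encountered by the recompiler. */
typedef struct GuestCall {
    uint64_t    target_addr;   /* guest address being called          */
    const char *target_name;   /* resolved from the driver's symbols  */
} GuestCall;

static void recompile_call(const GuestCall *call, const KnownDriver *drv)
{
    if (drv != NULL) {
        for (size_t i = 0; i < drv->hook_count; i++) {
            if (strcmp(drv->hooks[i].func_name, call->target_name) == 0
                && host_supports(drv->hooks[i].func_name)) {
                emit_host_call(drv->hooks[i].host_impl);   /* HLE path */
                return;
            }
        }
    }
    emit_guest_call(call->target_addr);   /* LLE path: run the driver's own code */
}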
This idea is nothing new; emulators do it all the time. For example, Nintendo 64 emulators checksum the RSP microcode when it's loaded to decide whether to provide HLE for particular microcode sets like Fast3D or F3DEX2. The virtual hardware has nothing to do with performance; all it does is abstract away the physical hardware so that all drivers are consistent. They work. No ifs, ands, or buts. How well they work depends on whether the VM decides to use LLE (slow) or HLE (fast).
So to be clear, the VM can update, the virtual drivers can update, but the virtual hardware cannot. The virtual hardware doesn't need to support 50 billion different things from the past or future. Those things are implemented in software with drivers, which the VM provides true hardware acceleration for.