
Re: Design Choices of my OS

Posted: Thu Mar 05, 2015 11:44 pm
by SoulofDeity
Rusky wrote:The point is, the software must be written to some particular interface (which you intend to set in stone), but that interface cannot provide optimal performance for all hardware configurations and designs. This was hinted at by iansjack's comment here:
iansjack wrote:If you have discovered a way to virtualize, for example, a graphics card without any loss of performance then I congratulate you.
Try coming up with a specific design for one of these virtual drivers and we'll see how it fares.
The virtual hardware has absolutely nothing to do with performance. It simply provides a standard interface for the virtual drivers. What they decide to do with it is up to them. If they want to add a 'drawKitten' function, that's their prerogative. The job of the VM is: "do I recognize this driver? If yes, do I recognize the function? If yes, are there any functions in the physical drivers that could be used to provide hardware acceleration for this function? If yes, patch in an HLE hook for the 'drawKitten' function to provide hardware acceleration for it. If no to any of these questions, just use normal LLE emulation."
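In rough C, that load-time decision chain looks something like this. A minimal sketch: every type and function name below is invented for illustration, not taken from any real implementation.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical types, purely to illustrate the chain above. */
struct hle_hook;                        /* a native implementation     */
struct vm;                              /* the virtual machine         */
struct vm_func   { const char *name; }; /* one function in a driver    */
struct vm_driver {
    uint32_t        checksum;           /* or a globally unique ID     */
    size_t          nfuncs;
    struct vm_func *funcs;
};

/* Provided elsewhere in this hypothetical VM. */
int  vm_recognizes(struct vm *vm, uint32_t checksum);
struct hle_hook *find_hle_hook(struct vm *vm, struct vm_driver *drv,
                               const char *name);
int  physical_driver_supports(struct vm *vm, struct hle_hook *hook);
void patch_hle_hook(struct vm_driver *drv, struct vm_func *fn,
                    struct hle_hook *hook);

void vm_bind_driver(struct vm *vm, struct vm_driver *drv)
{
    /* "do I recognize this driver?" - if not, everything stays on LLE */
    if (!vm_recognizes(vm, drv->checksum))
        return;

    for (size_t i = 0; i < drv->nfuncs; i++) {
        /* "do I recognize the function, and can the physical
           drivers accelerate it?" */
        struct hle_hook *hook = find_hle_hook(vm, drv, drv->funcs[i].name);
        if (hook && physical_driver_supports(vm, hook))
            patch_hle_hook(drv, &drv->funcs[i], hook); /* e.g. 'drawKitten' */
        /* no hook: this function just runs under normal LLE */
    }
}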

I don't know why I have to explain it so many times.

Re: Design Choices of my OS

Posted: Fri Mar 06, 2015 12:53 am
by Rusky
So what is it exactly that's set in stone with no versioning, and what is just stuff on top of that and thus subject to the same problems you claim to solve?

Re: Design Choices of my OS

Posted: Fri Mar 06, 2015 1:00 am
by iansjack
So how do user programs know that the function exists so that they can use it? In what sense are the specifications set in stone if you can introduce new capabilities of the hardware?

You have to keep explaining because, to me, it is all smoke and mirrors. The best way to explain would be to implement your ideas. Until then they are about as convincing as the claims made for Java when it was first introduced.

Re: Design Choices of my OS

Posted: Fri Mar 06, 2015 8:53 am
by JAAman
ok, let's look at what you are saying carefully:

50 years from now, some science fiction writer is going to dream up some strange new hardware device that nobody today could ever even imagine
150 years from now some engineer will find a way to actually make that hardware device in the real world
300 years from now it will actually become something common that everyone wants to use

and somehow software running in your VM will be able to use this, and it will work on the version of your VM that you are writing right now... even though you have no idea what kind of thing it will be? but it will be completely functional on your current systems now, even though it hasn't yet been imagined? because you are providing a "virtual device" that will work with this new device that is beyond anything you could ever imagine?

this doesn't sound reasonable to me... instead it sounds much more reasonable that when the hardware device is invented, you will need to create an interface to represent it (a virtual version of the device), and then programs that want to use it will only be able to run on newer versions of your VM that support that new hardware (because older versions of your VM cannot support something that they don't know about, don't know what it does, don't know how it works, and don't know how to communicate with it)


but then I'm not too smart, so maybe I just don't get it

Re: Design Choices of my OS

Posted: Fri Mar 06, 2015 2:34 pm
by SoulofDeity
Rusky wrote:So what is it exactly that's set in stone with no versioning, and what is just stuff on top of that and thus subject to the same problems you claim to solve?
What's set in stone is the virtual hardware.
iansjack wrote:So how do user programs know that the function exists so that they can use it? In what sense are the specifications set in stone if you can introduce new capabilities of the hardware?
The user applications and/or their libraries for the VM rely on the virtual drivers. These drivers are written for virtual hardware that never changes, but the drivers themselves can be updated. When the virtual hardware is simulated, this is Low-Level Emulation (LLE), and all drivers are guaranteed to work exactly as they were written. Simulating hardware is slow, though. .NET and Java sidestep this issue by not using virtual hardware and simply allowing the software to run native code. This makes the software both hardware and platform dependent.

My solution is to not allow anything running inside the VM (including the virtual drivers, libraries, and applications) to run any native code or probe information about the physical hardware or platform in any way. They are blind, deaf, and numb to anything outside of the VM. However, the VM sees that a driver called 'opengl.drv' is being used by a program or library. It checks whether it recognizes the driver by comparing either a checksum or a globally unique identifier. If it recognizes it, it iterates through a list of all the functions in the driver.

Suppose it sees "glDrawArrays". It checks the list of hooks it has for 'opengl.drv' and notices it has a hook for that. Next, it checks if the physical hardware drivers support that hook. If so, then when it recompiles the code that executes this function (remember, this is a VM) it uses High-Level Emulation (HLE), substituting the call to 'glDrawArrays' with a call into the physical driver. The software running in the VM does not know this is happening. 'glDrawArrays' still looks the same in memory, is at the same location, has the same size, etc. It has no clue whatsoever that anything has changed. The only difference is that the function is magically running either faster or slower than normal.
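As a minimal sketch of that lookup (all identifiers are hypothetical; a real table would carry more metadata, and the recompiler would patch the call target once rather than resolve it on every call):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* One entry in a hypothetical HLE hook table: it maps a function in a
 * recognized virtual driver to a native routine in the physical driver. */
struct hle_hook {
    uint32_t    driver_checksum;          /* identifies 'opengl.drv'    */
    const char *func_name;                /* e.g. "glDrawArrays"        */
    void      (*native_impl)(void *args); /* call into physical driver  */
};

/* At recompile time: resolve a guest call either to the native hook
 * (HLE) or to the recompiled body of the virtual driver's own code
 * (LLE). The guest can't observe which one it got. */
void *resolve_call(const struct hle_hook *hooks, size_t nhooks,
                   uint32_t drv_checksum, const char *name, void *lle_body)
{
    for (size_t i = 0; i < nhooks; i++)
        if (hooks[i].driver_checksum == drv_checksum &&
            strcmp(hooks[i].func_name, name) == 0)
            return (void *)hooks[i].native_impl;  /* HLE: fast path     */
    return lle_body;                              /* LLE: always works  */
}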

This idea is nothing new. Emulators do this all the time. For example, Nintendo 64 emulators take a checksum of the microcode for the RSP when it's loaded to determine whether to provide HLE for certain instruction sets like Fast3D or F3DEX2. The virtual hardware has nothing to do with performance; all it does is abstract away the physical hardware so that all drivers are consistent. They work. No ifs, ands, or buts. How well they work is dependent on whether the VM decides to use LLE (slow) or HLE (fast).
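The identification step itself is trivial. Something like this (the hash and both checksum constants below are placeholders for illustration, not values from any real emulator):

#include <stddef.h>
#include <stdint.h>

enum ucode { UCODE_UNKNOWN, UCODE_FAST3D, UCODE_F3DEX2 };

/* Hash the microcode a game uploads and pick a matching HLE
 * implementation; anything unrecognized falls back to LLE. */
enum ucode identify_ucode(const uint8_t *text, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)   /* toy checksum for illustration */
        sum = sum * 31 + text[i];

    switch (sum) {
    case 0xDEADBEEF: return UCODE_FAST3D;  /* placeholder value */
    case 0xCAFEF00D: return UCODE_F3DEX2;  /* placeholder value */
    default:         return UCODE_UNKNOWN; /* emulate the RSP instead */
    }
}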

So to be clear, the VM can update, the virtual drivers can update, but the virtual hardware cannot. The virtual hardware doesn't need to support 50 billion different things from the past or future. Those things are implemented in software with drivers, which the VM provides true hardware acceleration for.

Re: Design Choices of my OS

Posted: Fri Mar 06, 2015 4:45 pm
by iansjack
So the only functions that a driver can provide are those that the virtual hardware understands? It is not possible to introduce new capabilities to a device. And those functions that it does know about may be called (almost) directly or may be emulated. I don't see the point. This is limiting what the real hardware can do - only those functions that the virtual hardware (which is set in stone) has been set up to understand - and it is ensuring that those functions run either slightly slower or much slower than they would without this unnecessary layer of emulation. What's this supposed to achieve?

It seems significant to me that almost every example that you come up with relates to emulating gaming consoles. I suspect that we are interested in totally different users of an operating system.

Re: Design Choices of my OS

Posted: Fri Mar 06, 2015 5:04 pm
by SoulofDeity
iansjack wrote:So the only functions that a driver can provide are those that the virtual hardware understands? It is not possible to introduce new capabilities to a device.
No. What functions are available depends on the virtual drivers. Whether they are executed via LLE (slow) or HLE (fast) depends on the VM. The virtual hardware just gives you a minimalist interface to build off of.
iansjack wrote:And those functions that it does know about may be called (almost) directly or may be emulated. I don't see the point. This is limiting what the real hardware can do - only those functions that the virtual hardware (which is set in stone) has been set up to understand - and it is ensuring that those functions run either slightly slower or much slower than they would without this unnecessary layer of emulation. What's this supposed to achieve?
All software is guaranteed to work on all platforms and hardware combinations. Period. The only factor is how fast they will be executed, and that's determined by both the physical drivers and the VM.
iansjack wrote:It seems significant to me that almost every example that you come up with relates to emulating gaming consoles. I suspect that we are interested in totally different users of an operating system.
That's because that's what I'm talking about.
SoulofDeity wrote:...what's the point in using a virtual machine if the application is still platform dependent? It makes no sense. You're just sacrificing speed. This applies to Java as well. So, what I want to do is create a virtual machine/platform abstraction layer. It'll have a single interface for devices and absolutely will not allow peeking at hardware or executing/linking against programs and libraries not built for it. By doing this, all of the software written for it will be 100% platform independent. Porting the VM/PAL to a different platform would mean that anything ever written for it is guaranteed to work on that platform; thus the software never becomes obsolete. The only factor that matters is performance.
The entire point of this idea was to provide a better (and portable) environment for user applications, born out of frustration with cases where developers do stupid things like making their software only work on NVidia cards, or using their own proprietary drivers for things that don't work on all machines.

Re: Design Choices of my OS

Posted: Fri Mar 06, 2015 8:22 pm
by Rusky
You may be able to keep software on your platform from depending on a specific graphics card, but there are much simpler and faster ways to accomplish that, which don't also prevent the capabilities of the hardware from ever expanding.

Re: Design Choices of my OS

Posted: Fri Mar 06, 2015 9:00 pm
by SoulofDeity
Rusky wrote:You may be able to keep software on your platform from depending on a specific graphics card, but there are much simpler and faster ways to accomplish that, which don't also prevent the capabilities of the hardware from ever expanding.
1. I've already stated this method would be just as fast as .NET or Java. No one is complaining about them.

2. I've said several times, the hardware doesn't matter. Everything is implemented in software and the VM handles the hardware acceleration by recognizing and patching the drivers during recompilation.

Re: Design Choices of my OS

Posted: Fri Mar 06, 2015 9:23 pm
by Rusky
You are completely missing the point. Your virtual set-in-stone hardware sets an upper limit (and some horizontal limits, if that makes any sense) on what things the VM is even capable of recognizing and thus accelerating with real hardware.

Re: Design Choices of my OS

Posted: Fri Mar 06, 2015 9:43 pm
by SoulofDeity
Rusky wrote:You are completely missing the point. Your virtual set-in-stone hardware sets an upper limit (and some horizontal limits, if that makes any sense) on what things the VM is even capable of recognizing and thus accelerating with real hardware.
No, the one who's missing the point is you. The VM doesn't accelerate hardware, it accelerates software. Perhaps it'd be clearer if instead of saying "virtual drivers", I said "microcodes". Take, for example, the N64. It had no hardware support for shadows, and only a 1-bit stencil buffer. The guys at Rare wrote their own microcode that allowed them to do this and have crisper graphics, thus extending the functionality of the N64 (which is actually somewhat surprising considering that there were no tools to do so; they reverse-engineered the hardware and made their own tools).

Rather than emulating the hardware that executes the microcodes, most emulators emulate what the instructions in the microcodes do.
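The difference, sketched in C (a hypothetical example: the opcodes and command encoding are made up, and the host_gpu_* functions stand in for calls into the real graphics driver):

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for calls into the host's real graphics driver. */
static void host_gpu_load_vertices(uint32_t addr) { printf("vtx @%08x\n", (unsigned)addr); }
static void host_gpu_draw_triangle(uint32_t v)    { printf("tri %u\n", (unsigned)v);       }

/* LLE would step an emulated RSP through every microcode instruction.
 * HLE skips that and interprets what each display-list command *means*. */
void hle_exec_command(uint64_t cmd)
{
    switch ((uint8_t)(cmd >> 56)) {  /* top byte = opcode (made up)    */
    case 0x01:                       /* "load vertices"                */
        host_gpu_load_vertices((uint32_t)cmd);
        break;
    case 0x05:                       /* "draw triangle": one host call
                                        replaces thousands of emulated
                                        RSP instructions               */
        host_gpu_draw_triangle((uint32_t)cmd);
        break;
    }
}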

Re: Design Choices of my OS

Posted: Fri Mar 06, 2015 9:55 pm
by Rusky
Right, the VM accelerates software *with* hardware, like I said.

The N64 is a perfect example of why this is an awful idea. If your hardware doesn't (by analogy) support shadows, which is inevitable, there are a couple possible outcomes, depending on how I interpret your completely vague description of how things work:
  • You write a new virtual driver (to what virtual hardware interface? one that actually is capable of supporting shadows the way the N64 was? or to a new interface, no longer set in stone?) which is now a compatibility hazard because some software will depend on that driver
  • You write a program that does shadows in software and rely on the VM to somehow guess what's going on and try to accelerate it correctly (again a compatibility hazard because now you need a new guessing algorithm for every new feature)

Re: Design Choices of my OS

Posted: Fri Mar 06, 2015 10:07 pm
by SoulofDeity
Rusky wrote:Right, the VM accelerates software *with* hardware, like I said.

The N64 is a perfect example of why this is an awful idea. If your hardware doesn't (by analogy) support shadows, which is inevitable, there are a couple possible outcomes, depending on how I interpret your completely vague description of how things work:
  • You write a new virtual driver (to what virtual hardware interface?
The one provided by the VM. Whether or not the hardware itself can perform the task is irrelevant.
Rusky wrote:one that actually is capable of supporting shadows the way the N64 was?
No. It has nothing to do with functionality of the hardware. If the hardware doesn't support shadows, then you implement them via software. It's not impossible, it's just math.
Rusky wrote:or to a new interface, no longer set in stone?) which is now a compatibility hazard because some software will depend on that driver
It's not a compatibility hazard because that driver will work on any version of the VM running on any platform with any physical hardware combinations. It's essentially a portable driver.
Rusky wrote:
  • You write a program that does shadows in software and rely on the VM to somehow guess what's going on and try to accelerate it correctly (again a compatibility hazard because now you need a new guessing algorithm for every new feature)
There is no guessing involved. The VM just uses a UID or checksum to identify the driver. The magic is that even if the VM doesn't recognize the virtual driver, IT STILL WORKS! Why? Because the virtual hardware never changes, and thus it can fall back on LLE. It'll be slow, sure. At least until someone creates an HLE hook for it. Then it's just as fast as if you're directly using the physical hardware.
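That guarantee is just a two-way branch at every call site. A sketch, with hypothetical names:

struct vm;          /* the virtual machine (opaque here)               */
struct guest_func {
    void (*hle_hook)(void *args);  /* NULL if the VM has no hook       */
    /* ... plus the recompiled LLE body and its metadata ...           */
};

/* Interpret the function against the fixed virtual hardware. */
void vm_interpret(struct vm *vm, struct guest_func *fn, void *args);

void vm_call(struct vm *vm, struct guest_func *fn, void *args)
{
    if (fn->hle_hook)                /* recognized: patched at recompile */
        fn->hle_hook(args);          /* HLE - native speed               */
    else
        vm_interpret(vm, fn, args);  /* LLE - slow, but always correct   */
}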

Re: Design Choices of my OS

Posted: Fri Mar 06, 2015 11:14 pm
by Rusky
So when you need to add a new feature in the virtual driver, which is unsupported by the set-in-stone virtual hardware, you just implement it via software. Fine, that's what everyone else does already, although now you do have version dependencies on virtual drivers. Not a huge problem, because you can just distribute them with the software, I suppose - they run on set-in-stone virtual hardware anyway.

Now, how do you get that new feature to use the newly-available acceleration on the real hardware underneath? A new HLE hook, you say? How does the VM know to use the HLE hook (which is necessarily part of the VM, not anything above it), if not through some kind of interface between the virtual hardware and the virtual drivers? But that interface was supposed to be set in stone!

You see the problem yet, or are you just arguing blindly at this point?

Re: Design Choices of my OS

Posted: Fri Mar 06, 2015 11:31 pm
by SoulofDeity
Rusky wrote:Now, how do you get that new feature to use the newly-available acceleration on the real hardware underneath? A new HLE hook, you say? How does the VM know to use the HLE hook (which is necessarily part of the VM, not anything above it), if not through some kind of interface between the virtual hardware and the virtual drivers? But that interface was supposed to be set in stone!
The interface is simply a bare minimum for the drivers to build off of. Take, for example, the TI-83+. It literally has no direct graphical hardware interface aside from a port that can be used to select a row and column, then either read or write 8 bits representing 8 pixels in a horizontal line.

What did people do with this? First, they wrote a fastcopy routine that could blit an entire block of memory representing a framebuffer to the screen. Next, they wrote putsprite routines that would render to the framebuffer by either simple copying, xor-masking, or using 2 sprites (an and-mask to clear space and another sprite to blit with or-ing). But it didn't stop there. Someone else measured the screen's refresh behavior and determined exactly how long a pixel takes to fade. Then they created an interrupt routine that would apply a checkered mask between 2 or more framebuffers, allowing 3- to 11-level grayscale on a monochrome screen. Someone else wrote a routine that could render skewed or rotated sprites. Then another person wrote a 3D renderer with texturing. From nothing but simple port I/O, you can now play DOOM on a TI-83+ or watch grayscale movies.
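The masked putsprite trick, for instance, comes down to two operations per row. A C sketch of the idea (the originals were Z80 assembly; this byte-aligned version ignores the bit-shifting needed for arbitrary x positions):

#include <stdint.h>

#define SCREEN_W_BYTES 12  /* 96 px / 8: one row of the TI-83+ buffer */

/* Two-sprite blit: the AND mask clears a hole in the framebuffer,
 * then the OR data draws the sprite into it. */
void put_sprite_masked(uint8_t *fb, int col, int row, int h,
                       const uint8_t *mask, const uint8_t *data)
{
    for (int y = 0; y < h; y++) {
        uint8_t *p = &fb[(row + y) * SCREEN_W_BYTES + col];
        *p = (*p & mask[y]) | data[y];  /* clear space, then draw */
    }
}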

The VM just says, "oh, I see you're doing fastcopy, setpixel, putsprite, putrotatedsprite, and drawarrays. The physical drivers can do those hella faster, so I'll just use those instead. You'll never know the difference."