There are quite a few discussions about VBE and about virtual 8086 (called V86 henceforth) mode's incompatibility with long mode on the forums.
Many seem to conclude that as it is not possible to use V86, either VBE is unusable or you need to write a software x86 emulator.
I would like to present a solution: run the BIOS (or VBE) code inside an SVM/VMX VM.
Benefits of this solution:
1. Works in 32bit, 64bit or PAE modes, works anytime (no concept of ExitBootServices()).
2. Well supported by HW and FW vendors, as virtualization is an important use case for the foreseeable future.
3. Likely simpler than writing a working x86 emulator, more stable (the CPU itself runs most of the BIOS VBE code), maybe faster as well.
The following is my mini guide on how to do it:
Prerequisite:
1. The system needs a working BIOS (or UEFI CSM) that supports VBE.
2. The CPU needs to support SVM/VMX with nested paging. Most AMD64 CPUs in the last decade do support this.
Overall idea:
Run BIOS's VBE functions in real mode in the VM, pass-through their IO accesses to the host and thus set the host's video mode.
How to execute on this overall idea:
Host side:
If you've worked with V86, SVM and VMX are so similar that it is trivial to expose the same interface from all three.
If you've not worked with V86, it is also very similar to running a user space program:
step 1: initialize the CPU to a point where it can run a user space program, such as setting up GDTR and TR correctly
loop:
step 2: specify states (the 5 items on stack for entry, items in TSS for getting back) and ask the CPU to act (IRET)
step 3: handle exceptions and decide to either do something and goto loop; or finish the run
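The entry frame in step 2 of this analogy can be sketched in C. This is a minimal sketch for illustration only; the name build_iret_frame and the flat 32-bit setup are my assumptions, not anything from a real kernel:

```c
#include <stdint.h>

#define EFLAGS_IF 0x00000200u /* interrupts enabled */
#define EFLAGS_VM 0x00020000u /* bit 17: land in virtual-8086 mode */

/* Build the five dwords that IRET pops when returning to a
 * less-privileged context, in pop order: EIP, CS, EFLAGS, ESP, SS.
 * Setting EFLAGS.VM makes the IRET enter V86 mode instead of
 * ordinary user mode. */
static void build_iret_frame(uint32_t frame[5],
                             uint32_t eip, uint32_t cs,
                             uint32_t esp, uint32_t ss,
                             int v86)
{
    frame[0] = eip;
    frame[1] = cs;
    frame[2] = EFLAGS_IF | (v86 ? EFLAGS_VM : 0) | 0x2; /* bit 1 is always 1 */
    frame[3] = esp;
    frame[4] = ss;
}
```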
In V86/SVM/VMX, it is like this:
step 1: initialize the CPU to a point where it can run SVM/VMX (CRs and MSRs)
loop:
step 2: specify states (guest entry states and host states for getting back) and ask the CPU to act (vmrun, vmlaunch or vmresume)
step 3: handle exceptions (GPF or vm exits) and decide to either do something and goto loop; or finish the run
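The SVM flavor of steps 2 and 3 might look like the sketch below. vmcb_run is a made-up stand-in for a real VMRUN wrapper (stubbed here so the sketch is self-contained); the exit code values are from the AMD manual, and the VMX version is the same shape with vmlaunch/vmresume and different exit reason numbers:

```c
#include <stdint.h>

/* A few SVM exit codes from the AMD manual (APM vol. 2, appendix C). */
#define VMEXIT_IOIO 0x07Bu   /* IN/OUT intercepted */
#define VMEXIT_HLT  0x078u
#define VMEXIT_NPF  0x400u   /* nested page fault */

struct vmcb { uint64_t exitcode; /* ...hundreds of other fields... */ };

/* Stand-in for the VMRUN wrapper: the real one loads the VMCB address
 * into RAX and executes VMRUN via inline asm, returning when the CPU
 * has written an exit code into the VMCB. Here it just pretends the
 * guest ran to a HLT so the sketch is runnable as-is. */
static uint64_t vmcb_run(struct vmcb *v)
{
    v->exitcode = VMEXIT_HLT;
    return v->exitcode;
}

/* Steps 2 and 3 from the text: keep resuming the guest until it is done. */
static int run_guest(struct vmcb *v)
{
    for (;;) {
        switch (vmcb_run(v)) {
        case VMEXIT_IOIO:
            /* broker the port access for the guest, then loop */
            break;
        case VMEXIT_NPF:
            /* map or emulate the faulting guest-physical access */
            break;
        case VMEXIT_HLT:
            return 0;            /* our stub says the VBE call finished */
        default:
            return -1;           /* unexpected exit: finish the run */
        }
    }
}
```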
Guest side:
You can invoke VBE functions directly by "injecting interrupts", namely set up the guest entry state so that your BIOS thinks it has just taken interrupt 10h with 4F00h in AX.
V86 and SVM can directly trap IRET, so that when the VBE function returns, control comes back automatically. AFAIK VMX can't trap IRET; this can be solved by preparing a real mode stub for it that will cause a VM exit.
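A sketch of the injection, assuming a hypothetical guest_regs mirror of the VMCB/VMCS guest state and a flat pointer to guest memory (inject_int10_vbe is a made-up name). It simply does by hand what the CPU does on a real-mode INT:

```c
#include <stdint.h>

struct guest_regs { uint16_t ax, cs, ip, ss, sp, flags; };

/* "Inject" INT 10h by mimicking a real-mode interrupt: push FLAGS, CS
 * and IP on the guest stack, clear IF/TF, and load CS:IP from IVT
 * vector 10h. mem points at guest physical memory; the IVT sits at
 * guest-physical 0. */
static void inject_int10_vbe(uint8_t *mem, struct guest_regs *r)
{
    uint16_t *ivt = (uint16_t *)mem;
    uint16_t *stk;

    r->sp -= 6;
    stk = (uint16_t *)(mem + ((uint32_t)r->ss << 4) + r->sp);
    stk[0] = r->ip;                 /* pushed last, lowest address */
    stk[1] = r->cs;
    stk[2] = r->flags;              /* pushed first */
    r->flags &= ~0x0300u;           /* INT clears IF and TF */

    r->ip = ivt[0x10 * 2];          /* offset word of vector 10h */
    r->cs = ivt[0x10 * 2 + 1];      /* segment word */
    r->ax = 0x4F00;                 /* VBE: return controller information */
}
```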
MMU setup:
For VBE, you'd want the mode AMD calls "paged real mode" and Intel calls "unrestricted guest". This means the guest runs in real mode, but the MMU stays enabled underneath it and translates the guest's accesses through nested page tables prepared by the host. Thus the IVT can be relocated to any page-aligned address in physical memory, although I would not nuke the first 1MB of the physical address space just because of this.
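A minimal sketch of such nested page table entries, assuming 2 MiB identity-mapped pages; the constants follow the usual x86 page-table layout, and npt_pde_2m / npt_fill_pd are made-up names:

```c
#include <stdint.h>

#define PTE_P  0x001u  /* present */
#define PTE_W  0x002u  /* writable */
#define PTE_U  0x004u  /* user: SVM walks NPT entries as user accesses */
#define PTE_PS 0x080u  /* in a PD entry: this maps a 2 MiB page */

/* One identity-mapped 2 MiB nested-page-table entry for guest-physical
 * address gpa: the page frame is gpa rounded down to 2 MiB. */
static uint64_t npt_pde_2m(uint64_t gpa)
{
    return (gpa & ~0x1FFFFFull) | PTE_P | PTE_W | PTE_U | PTE_PS;
}

/* Fill one page directory (512 entries = 1 GiB) identity-mapping the
 * guest-physical range starting at base. The host would point a PDPT
 * entry (and above it a PML4 entry) at this table. */
static void npt_fill_pd(uint64_t pd[512], uint64_t base)
{
    for (int i = 0; i < 512; i++)
        pd[i] = npt_pde_2m(base + (uint64_t)i * 0x200000ull);
}
```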
Detailed steps:
Can't fit them inside a mini guide. Please refer to the manuals.
Other bits of info that you might find helpful:
Works on real hardware?
Yes, tested on multiple recent computers with both AMD and Intel CPUs. May not work everywhere though.
How big an effort is this?
Likely simpler than writing a correct x86 emulator.
Needed less than 1000 LOC for SVM, about 2000 LOC for VMX (excluding headers) to get VBE mode setting working.
Choice between the two?
In terms of manual:
SVM's manual is terse and to the point. VMX's manual is verbose and rich in details.
In terms of usage:
SVM is more flexible and easier to use. If you have an AMD machine to develop this on, you're in luck. Otherwise you can use SimNow, which is actually very fast for an emulator and allows GDB connections and serial redirects for debugging.
VMX is not more difficult, just way more tedious. In short, Intel "invented" many new things for the sake of inventing new things and ended up with a more restrictive interface, while AMD reused many existing things and ended up with a more flexible one.
How to pass-through BIOS's IO accesses to the host?
You can either trap them and broker them (e.g., do an INB in monitor code and resume the VM with the result in AX), or open up all the ports, identity map all physical memory and PCI config space, and let the BIOS do its thing uninterrupted. MMIO seems highly uncommon in VBE code; my guess is that the port IO is a shorthand that is trapped by the FW itself (the UEFI part), which then performs MMIO accesses as needed, or that VBE has existed for so long that GPUs kept the old port IO interface for it.
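If you choose to trap and broker, the handler first has to decode the exit information. Here is a sketch for SVM's IOIO intercept based on the EXITINFO1 layout in the AMD manual (decode_ioio is a made-up name; the string/rep forms are omitted):

```c
#include <stdint.h>

/* Decoded form of SVM's EXITINFO1 for an IOIO intercept (AMD APM
 * vol. 2): bit 0 = direction (1 = IN), bits 6:4 = one-hot operand
 * size (SZ8/SZ16/SZ32), bits 31:16 = port number. */
struct io_exit { uint16_t port; uint8_t size; int is_in; };

static struct io_exit decode_ioio(uint64_t exitinfo1)
{
    struct io_exit e;
    e.is_in = (int)(exitinfo1 & 1);
    e.size  = (exitinfo1 & 0x10) ? 1 : (exitinfo1 & 0x20) ? 2 : 4;
    e.port  = (uint16_t)(exitinfo1 >> 16);
    return e;
}
```

The host would then issue the real inb/outb on that port and, for an IN, drop the result into the guest's AX before resuming.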
Need to virtualize any device for the VM?
No, IIRC you don't even need to virtualize hardware interrupts for the BIOS if you only use this for VBE. If I remember wrong, or some VBE implementations are different, and you take an interrupt that isn't yours, simply inject it into the VM.
Need to virtualize P mode?
I've never seen VBE code try to enter P mode on real computers. Again, my guess is that the port IOs trap into UEFI instead. Either way, changes to CR0 can be trapped by SVM/VMX, and both do support P mode guests if needed (SVM seems to even support long mode guests on a 32bit host).
Some debugging ideas:
Can't enable SVM/VMX:
FW can lock these modes disabled; check the lock bits or look at your BIOS settings to make sure.
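A sketch of the checks, taking the raw MSR values as parameters so the rdmsr itself stays out of the way; the bit positions are per the Intel and AMD manuals, and the function names are made up:

```c
#include <stdint.h>

#define IA32_FEATURE_CONTROL 0x3Au       /* Intel, read with rdmsr */
#define MSR_VM_CR            0xC0010114u /* AMD */

/* VMX is usable when the firmware either left IA32_FEATURE_CONTROL
 * unlocked (bit 0 clear, so we may program and lock it ourselves) or
 * locked it with "VMX outside SMX" enabled (bit 2). */
static int vmx_allowed(uint64_t feature_control)
{
    if (!(feature_control & 1))
        return 1;                   /* unlocked: we can set it up */
    return (int)((feature_control >> 2) & 1);
}

/* SVM is disabled when VM_CR.SVMDIS (bit 4) is set; if bit 3 (LOCK)
 * is also set, only a firmware setting can bring it back. */
static int svm_allowed(uint64_t vm_cr)
{
    return !((vm_cr >> 4) & 1);
}
```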
VM entry fails:
Read the error code, which is quite detailed. Otherwise, Intel's VMX manual has a detailed listing of the things the CPU checks, and the order it checks them in, before entering the VM. Many of these checks are common x86 requirements that apply to SVM as well, and some of them are rarely set manually (such as the hidden/shadow parts of segment registers) and thus easy to get wrong.
Host kernel becomes unstable after VM exit:
Some host registers aren't automatically restored by the CPU on VM exit, including ones that can cause big trouble for your kernel, such as TR and LDTR.
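For TR specifically, note that LTR faults on a TSS descriptor that is already marked busy, so the busy bit has to be cleared in the GDT before reloading. A sketch (clear_tss_busy is a made-up name):

```c
#include <stdint.h>

/* Clear the busy bit of a TSS descriptor so LTR will accept it again.
 * Byte 5 of an 8-byte descriptor is the access byte; for a TSS its
 * low nibble is the type, and bit 1 of the type distinguishes busy
 * (1011b) from available (1001b). */
static void clear_tss_busy(uint8_t *gdt, uint16_t selector)
{
    gdt[(selector & ~0x7u) + 5] &= (uint8_t)~0x02u;
    /* then, on the real CPU: asm volatile("ltr %0" :: "r"(selector)); */
}
```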
BIOS behaves strangely after one use:
If you moved it around in physical memory, remember to keep its original data area in sync with the updated video mode.
"But I heard that VBE is obsolete?"
Not on most x86-64 computers. Which is to say, it is probably not obsolete on your current computer, which will likely be fully dedicated to your OS after you get a shiny aarch64 computer n years down the road.
mini guide: use SVM/VMX for VBE mode setting
Re: mini guide: use SVM/VMX for VBE mode setting
Personally, I just drop out of long mode by clearing PG and PE in CR0, reload the real-mode IVT and use INT 0x10, 0x13 or 0x15 as needed for video modes, memory maps, disk access before I have a proper driver written, etc. From the kernel side, you set up a pointer to a struct with all your register vars, which lives below 0x90000, and you can patch the interrupt number inline as the bytes 0xCD <interrupt number>, then copy the register vars back where they were as a return and jump back into long mode. Xorg's VESA driver does this too, although with many more checks.
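The inline-patch trick above can be sketched like this; patch_int_vector and bios_regs are made-up names, and 0xCD is the INT imm8 opcode:

```c
#include <stdint.h>

/* The register struct the kernel and the real-mode stub share; it must
 * live below the 1 MiB real-mode limit so the stub can reach it. */
struct bios_regs { uint16_t ax, bx, cx, dx, si, di, es; };

/* Patch the INT instruction inside the real-mode trampoline: 0xCD is
 * the INT imm8 opcode, the following byte is the vector number. */
static void patch_int_vector(uint8_t *stub, uint8_t vector)
{
    stub[0] = 0xCD;    /* INT imm8 */
    stub[1] = vector;  /* e.g. 0x10 video, 0x13 disk, 0x15 memory map */
}
```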
I figured the calls would be infrequent enough not to worry about the performance hit; unless you are using it for disk I/O, it's really only to load the driver if it's a shared module. You can even embed it in your kernel ELF file by adding a section with a linker script and using a non-standard offset. My bootloader parses the ELF file using sections instead of the PHT.
Re: mini guide: use SVM/VMX for VBE mode setting
Do note that an OS that uses this method will only work under virtualization if the host system supports and is configured for nested virtualization.