VESA in Long Mode using interpretation

mariuszp
Member
Posts: 587
Joined: Sat Oct 16, 2010 3:38 pm

VESA in Long Mode using interpretation

Post by mariuszp »

Have any of you attempted to call VESA interrupts from Long Mode using interpretation?

I'm writing a library for my kernel which interprets 16-bit x86 code while running in Long Mode, and there are 2 main problems I'm trying to figure out:

1) How should CLI/STI behave? Should they disable/enable the real CPU's interrupts too?
2) Apparently some BIOSes switch to Protected Mode to do mode switches. This seems to make no sense, since you can't even emulate that with virtual 8086 mode. Would my interpreter have to support Protected Mode too?

How have other people done it?
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: VESA in Long Mode using interpretation

Post by Brendan »

Hi,
mariuszp wrote:I'm writing a library for my kernel which interprets 16-bit x86 code while running in Long Mode, and there are 2 main problems I'm trying to figure out:

1) How should CLI/STI behave? Should they disable/enable the real CPU's interrupts too?
You're mostly implementing a virtual machine that runs "guest code" (the 16-bit real mode code). When the guest code does CLI you begin postponing any IRQ that would be handled by guest code (these have nothing at all to do with IRQs that are handled by the "host"/rest of your OS); and when the guest code does STI you stop postponing the delivery of IRQs to the guest and deliver any IRQs that were postponed while interrupts were disabled. Of course this would mean emulating the PIC chips too.
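
For illustration, a rough sketch of that CLI/STI handling (all names here are invented; a real implementation also needs proper 8259 emulation behind this):

    /* Guest CLI/STI only touch a *virtual* interrupt flag; the host's real
     * interrupts are never affected. IRQs raised while the guest has
     * interrupts disabled stay pending until the next STI. */
    #include <stdbool.h>
    #include <stdint.h>

    struct vm86 {
        bool     guest_if;      /* the guest's virtual IF */
        uint16_t irq_pending;   /* bitmask, bit n == IRQ n (a crude 8259 IRR) */
        /* ...registers, guest memory, full emulated 8259 state... */
    };

    /* Pushes FLAGS/CS/IP on the guest stack and vectors through the IVT;
     * omitted here, but it's the same mechanics as a real-mode interrupt
     * (and it clears guest_if, just like real mode does). */
    static void deliver_guest_irq(struct vm86 *vm, int irq);

    void guest_cli(struct vm86 *vm)       { vm->guest_if = false; }
    void guest_sti(struct vm86 *vm)       { vm->guest_if = true; }
    void guest_raise_irq(struct vm86 *vm, int irq) { vm->irq_pending |= 1u << irq; }

    /* Run once per interpreted instruction by the main loop; taking the
     * lowest set bit first mirrors the 8259's default priority order. */
    void vm86_check_irqs(struct vm86 *vm)
    {
        if (!vm->guest_if || vm->irq_pending == 0)
            return;
        for (int irq = 0; irq < 16; irq++) {
            if (vm->irq_pending & (1u << irq)) {
                vm->irq_pending &= ~(1u << irq);
                deliver_guest_irq(vm, irq);
                break;
            }
        }
    }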
mariuszp wrote:2) Apparently some BIOSes switch to Protected Mode to do mode switches. This seems to make no sense, since you can't even emulate that with virtual 8086 mode. Would my interpreter have to support Protected Mode too?
If a video card's ROM does switch to protected mode; then you don't have to support that video card's ROM.
mariuszp wrote:How have other people done it?
I set up a framebuffer during boot, when the firmware can still be used directly (using either VBE on BIOS, or GOP or UGA on UEFI, where VBE doesn't exist); and then use that framebuffer after boot (until/unless a native video driver is started). This means I don't need to bother implementing a virtual machine; it also means video modes can't be switched after boot without a native video driver.
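
For illustration, the boot-time hand-off can be a plain record like this (a sketch; the struct and its field names are invented, filled in from the VBE mode info block on BIOS or from GOP/UGA on UEFI):

    #include <stdint.h>

    /* Filled in by the boot loader while firmware services still work;
     * afterwards the kernel draws into this and never calls the firmware. */
    struct boot_framebuffer {
        uint64_t phys_base;   /* physical address of the linear framebuffer */
        uint32_t width;       /* visible pixels per scanline */
        uint32_t height;      /* visible scanlines */
        uint32_t pitch;       /* bytes per scanline (may exceed width*bpp/8) */
        uint8_t  bpp;         /* bits per pixel */
        uint8_t  red_pos;     /* bit position of each channel, so the */
        uint8_t  green_pos;   /* kernel can build pixels for any format */
        uint8_t  blue_pos;
    };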

Note that (assuming you provide a decent resolution-independent video driver API) there are very few reasons for a user to care about switching video modes after boot that VBE support would solve.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: VESA in Long Mode using interpretation

Post by Rusky »

Brendan wrote:Note that (assuming you provide a decent resolution-independent video driver API) there are very few reasons for a user to care about switching video modes after boot that VBE support would solve.
Adding and removing monitors?
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: VESA in Long Mode using interpretation

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:Note that (assuming you provide a decent resolution-independent video driver API) there are very few reasons for a user to care about switching video modes after boot that VBE support would solve.
Adding and removing monitors?
VBE doesn't provide any kind of notification if/when a monitor has been added or removed.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Octocontrabass
Member
Posts: 5587
Joined: Mon Mar 25, 2013 7:01 pm

Re: VESA in Long Mode using interpretation

Post by Octocontrabass »

VBE does provide DDC (EDID) access, which can be polled to detect monitor changes.
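
For reference, the call itself is cheap to drive from an interpreter. A sketch (the vm86 structure, vm86_int10 helper, and guest_mem mapping are hypothetical; the register values follow the VBE/DDC "Read EDID" interface, INT 10h AX=4F15h BL=01h):

    #include <stdint.h>
    #include <string.h>

    int read_edid(struct vm86 *vm, uint8_t edid[128])
    {
        vm->regs.eax = 0x4F15;      /* VBE/DDC extension */
        vm->regs.ebx = 0x0001;      /* BL=01h: read EDID block */
        vm->regs.ecx = 0;           /* controller unit number */
        vm->regs.edx = 0;           /* EDID block number */
        vm->regs.es  = 0x0800;      /* ES:DI -> guest buffer at 0x8000 */
        vm->regs.edi = 0x0000;
        vm86_int10(vm);             /* interpret guest code until IRET */
        if ((vm->regs.eax & 0xFFFF) != 0x004F)
            return -1;              /* function failed or unsupported */
        memcpy(edid, vm->guest_mem + 0x8000, 128);
        return 0;
    }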
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: VESA in Long Mode using interpretation

Post by Brendan »

Hi,
Octocontrabass wrote:VBE does provide DDC (EDID) access, which can be polled to detect monitor changes.
It's a "one bit at a time" transfer over a relatively slow serial channel, where you ask for a whole block (2 Kib) and not a byte. This means you can't poll frequently enough to avoid "wrong video mode when different monitor plugged in" without a large performance problem. Also, some (older) video cards blank the screen when you try to use DDC, and frequently polling won't work well for that.

While it can work in theory for some cases; most users don't change their monitor often enough to care, and it'd probably be easier to implement native video drivers (for mode switching only; without GPU, acceleration, movie decoding, etc.) instead.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Octocontrabass
Member
Posts: 5587
Joined: Mon Mar 25, 2013 7:01 pm

Re: VESA in Long Mode using interpretation

Post by Octocontrabass »

Brendan wrote:It's a "one bit at a time" transfer over a relatively slow serial channel, where you ask for a whole block (2 Kib) and not a byte.
For graphics cards that support it, you can use VBE/SCI to detect when a display is plugged in, without transferring the entire EDID.
Brendan wrote:This means you can't poll frequently enough to avoid "wrong video mode when a different monitor is plugged in" without a large performance problem.
VESA recommends once every 6 seconds, but you can do it even less frequently than that (especially if the only method available is transferring the entire EDID). You also don't need to avoid using the "wrong" video mode if the new monitor supports it; you can postpone the mode switch to a more convenient time.
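
As a sketch, such a poll only has to compare the freshly read block against a cached copy and defer any reaction (read_edid as in the earlier sketch; sleep_seconds and request_mode_reevaluation are hypothetical names):

    #include <stdint.h>
    #include <string.h>

    void monitor_poll_task(struct vm86 *vm)
    {
        uint8_t cur[128], last[128] = {0};
        for (;;) {
            sleep_seconds(6);                    /* VESA's suggested interval */
            if (read_edid(vm, cur) != 0)
                continue;                        /* no DDC this time; retry later */
            if (memcmp(cur, last, sizeof cur) != 0) {
                memcpy(last, cur, sizeof cur);
                request_mode_reevaluation(cur);  /* deferred, not an immediate switch */
            }
        }
    }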
Brendan wrote:Also, some (older) video cards blank the screen when you try to use DDC, and frequent polling won't work well for those.
Monitor hotplug is impossible on those video cards, but at least VBE/DDC will report the issue to you.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: VESA in Long Mode using interpretation

Post by Brendan »

Hi,
Octocontrabass wrote:
Brendan wrote:It's a "one bit at a time" transfer over a relatively slow serial channel, where you ask for a whole block (2 Kib) and not a byte.
For graphics cards that support it, you can use VBE/SCI to detect when a display is plugged in, without transferring the entire EDID.
Last time VBE/SCI was mentioned, I tested it on 3 computers and found that one didn't support it and that on another it was unusably buggy. The first question you need to ask is how much time you're going to have to spend building some sort of blacklist/whitelist (to avoid the buggy implementations) before you can claim it's stable enough to actually use.
Octocontrabass wrote:
Brendan wrote:This means you can't poll frequently enough to avoid "wrong video mode when a different monitor is plugged in" without a large performance problem.
VESA recommends once every 6 seconds, but you can do it even less frequently than that (especially if the only method available is transferring the entire EDID). You also don't need to avoid using the "wrong" video mode if the new monitor supports it; you can postpone the mode switch to a more convenient time.
6 seconds was probably fine for old CRT displays that take ages to warm up and show a picture. For modern monitors (e.g. LCD with LED backlight) it'll look unprofessional (e.g. going from "off" to "on" to "unsupported/old video mode" to "new video mode"), and to fix that you'd want to bring it down to maybe 1 second. Of course even 10 seconds would be too frequent for laptop power management (who wants their laptop taken out of a sleep state every 10 seconds when it's idle?).

You'll need EDID before you can determine if the new monitor does/doesn't support the old video mode.
Octocontrabass wrote:
Brendan wrote:Also, some (older) video cards blank the screen when you try to use DDC, and frequent polling won't work well for those.
Monitor hotplug is impossible on those video cards, but at least VBE/DDC will report the issue to you.
So now you've got 3 cases (polling with SCI and fetching the EDID if/when the monitor is changed, polling with the EDID, and "no hotplug").

Mostly you would've wasted several years implementing and testing a "real mode emulator" before you even begin wasting another 6 months getting this monitor hotplug junk working; and as soon as you've finished you're still going to have to write native video drivers (which could've been implemented instead), because none of the options work on UEFI (which means that by the time the OS is released it's probably only going to work on not much more than emulators, where none of the emulators support DDC or SCI in the first place), and because it fails for multiple other cases (e.g. multiple video cards, "video card supports the monitor's native resolution but VBE doesn't", video cards that blank the screen, etc.).

Most end users do understand that if there's no video driver then they're officially running in "reduced functionality limp mode" and don't expect things like hardware acceleration or video mode switching. Why not just skip the "not going to matter enough" work completely and start writing native video drivers sooner?


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Octocontrabass
Member
Posts: 5587
Joined: Mon Mar 25, 2013 7:01 pm

Re: VESA in Long Mode using interpretation

Post by Octocontrabass »

Brendan wrote:Last time VBE/SCI was mentioned, I tested it on 3 computers and found that one didn't support it and that on another it was unusably buggy. The first question you need to ask is how much time you're going to have to spend building some sort of blacklist/whitelist (to avoid the buggy implementations) before you can claim it's stable enough to actually use.
If you have decided that VBE/SCI is not worthwhile to implement, then don't implement it.
Brendan wrote:6 seconds was probably fine for old CRT displays that take ages to warm up and show a picture. For modern monitors (e.g. LCD with LED backlight) it'll look unprofessional (e.g. going from "off" to "on" to "unsupported/old video mode" to "new video mode"), and to fix that you'd want to bring it down to maybe 1 second. Of course even 10 seconds would be too frequent for laptop power management (who wants their laptop taken out of a sleep state every 10 seconds when it's idle?).
If you have decided that polling for monitor hotplug is not worthwhile to implement, then don't implement it.
Brendan wrote:So now you've got 3 cases (polling with SCI and fetching the EDID if/when the monitor is changed, polling with the EDID, and "no hotplug").
If you have decided that monitor hotplug is too complicated to be worthwhile to implement, then don't implement it.
Brendan wrote:Mostly you would've wasted several years implementing and testing a "real mode emulator"
I think you vastly overestimate the complexity of an emulator capable of running VBE code. (Besides, there are already-written emulators available.)
Brendan wrote:UEFI
You can read the PCI option ROM, regardless of the host machine's firmware (or CPU).
Brendan wrote:Most end users do understand
Most end users don't understand anything about computers.
Brendan wrote:Why not just skip the "not going to matter enough" work completely and start writing native video drivers sooner?
Because it's fun.
BrightLight
Member
Posts: 901
Joined: Sat Dec 27, 2014 9:11 am
Location: Maadi, Cairo, Egypt

Re: VESA in Long Mode using interpretation

Post by BrightLight »

Brendan wrote:If a video card's ROM does switch to protected mode; then you don't have to support that video card's ROM.
Why not? Maybe the graphics card has memory-mapped registers in high memory.
You know your OS is advanced when you stop using the Intel programming guide as a reference.
mariuszp
Member
Posts: 587
Joined: Sat Oct 16, 2010 3:38 pm

Re: VESA in Long Mode using interpretation

Post by mariuszp »

According to the wiki, "BIOSes can switch to protected mode to implement this, and might reset the GDT. This is observable on QEMU 2.2.x." (http://wiki.osdev.org/VESA_Video_Modes)

Also, which IRQs do I have to forward to the guest BIOS, if any?

@Brendan: why would I need to emulate the PIC chips? I will simply allow the BIOS to in/out any port it wants to.

And as for memory-mapped registers in high memory: isn't it the case that the motherboard chipset performs the memory mapping? The BIOS can put those registers wherever it wants to.

If I really need to run 32-bit BIOS code, I guess I could run it in Compatibility Mode, identity-map the bottom 4 GB of memory, and just get a monitor to emulate the privileged instructions.

Alternatively, is it possible to somehow switch to Real Mode from Long Mode? I know you can do it from Protected Mode, but when I once accidentally attempted to disable Long Mode, the WRMSR threw a #GP.
Octocontrabass
Member
Posts: 5587
Joined: Mon Mar 25, 2013 7:01 pm

Re: VESA in Long Mode using interpretation

Post by Octocontrabass »

mariuszp wrote:Also, which IRQs do I have to forward to the guest BIOS, if any?
Probably none, since you're only running the video BIOS. You can avoid using the system BIOS entirely by filling the IVT with dummy addresses that break out of the interpreter and call a BIOS simulation that determines the appropriate responses.
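
A sketch of that IVT trick (guest_mem and the trap decoding are hypothetical): every vector points into a stub segment, and the offset itself encodes which INT was issued, so when the interpreter hits the trap opcode it knows which BIOS service to simulate.

    #include <stdint.h>
    #include <string.h>

    #define TRAP_SEG 0xF000    /* arbitrary segment reserved for trap stubs */

    void install_dummy_ivt(struct vm86 *vm)
    {
        /* The real-mode IVT lives at 0000:0000, 4 bytes per vector:
         * a 16-bit offset followed by a 16-bit segment. */
        uint16_t *ivt = (uint16_t *)vm->guest_mem;
        for (int vec = 0; vec < 256; vec++) {
            ivt[vec * 2 + 0] = (uint16_t)vec;   /* offset = vector number */
            ivt[vec * 2 + 1] = TRAP_SEG;
        }
        /* Fill F000:0000..F000:00FF with HLT (0xF4); when the interpreter
         * fetches one with CS == TRAP_SEG, IP tells it which vector fired. */
        memset(vm->guest_mem + (TRAP_SEG << 4), 0xF4, 256);
    }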
mariuszp wrote:@Brendan: why would I need to emulate the PIC chips? I will simply allow the BIOS to in/out any port it wants to.
Should the BIOS decide to mess with your interrupt controllers, you will have a very hard time. You'll be much better off sending only video-card-related I/O accesses to hardware, and simulating (mostly ignoring) the rest.
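
A sketch of such a filter (inb and sim_inb are assumed host/emulation helpers; the io_bar list would come from the card's PCI configuration, and 0x3B0-0x3DF is the standard legacy VGA window):

    #include <stdbool.h>
    #include <stdint.h>

    static bool port_is_video(struct vm86 *vm, uint16_t port)
    {
        if (port >= 0x3B0 && port <= 0x3DF)        /* legacy VGA registers */
            return true;
        for (int i = 0; i < vm->nio_bars; i++)     /* the card's I/O BARs */
            if (port >= vm->io_bar[i].base &&
                port <  vm->io_bar[i].base + vm->io_bar[i].size)
                return true;
        return false;
    }

    uint8_t guest_inb(struct vm86 *vm, uint16_t port)
    {
        if (port_is_video(vm, port))
            return inb(port);      /* forward to real hardware */
        return sim_inb(vm, port);  /* simulate PIC/PIT/etc. (often just 0xFF) */
    }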
mariuszp wrote:And as for memory-mapped registers in high memory: isn't it the case that the motherboard chipset performs the memory mapping? The BIOS can put those registers wherever it wants to.
It can, and it does. That's how they ended up in high memory in the first place. Fortunately, you can read the PCI configuration to determine where in high memory they are, and map them appropriately in your interpreter.
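
For example, a sketch of pulling a BAR out of configuration space with the legacy 0xCF8/0xCFC mechanism (outl/inl are assumed host port-I/O helpers):

    #include <stdint.h>

    static uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev,
                                   uint8_t fn, uint8_t off)
    {
        uint32_t addr = 0x80000000u            /* enable bit */
                      | ((uint32_t)bus << 16)
                      | ((uint32_t)dev << 11)
                      | ((uint32_t)fn  << 8)
                      | (off & 0xFC);
        outl(0xCF8, addr);   /* select the config register... */
        return inl(0xCFC);   /* ...and read it */
    }

    /* BAR0 is at offset 0x10, BAR1 at 0x14, and so on; bit 0 of the value
     * distinguishes an I/O BAR (1) from a memory BAR (0). */
    uint32_t read_bar0(uint8_t bus, uint8_t dev, uint8_t fn)
    {
        return pci_cfg_read32(bus, dev, fn, 0x10);
    }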
mariuszp wrote:If I really need to run 32-bit BIOS code, I guess I could run it in Compatibility Mode, identity-map the bottom 4 GB of memory, and just get a monitor to emulate the privileged instructions.
That's certainly a possibility, but you only need to identity-map the video card's MMIO. (Plus, it requires an x86 CPU, whereas an interpreter will work on any CPU.)
mariuszp
Member
Posts: 587
Joined: Sat Oct 16, 2010 3:38 pm

Re: VESA in Long Mode using interpretation

Post by mariuszp »

How do I know which ports the graphics card uses? As far as I know, finding the graphics card on PCI will only tell me the base of the registers, not how many there are.
BrightLight
Member
Posts: 901
Joined: Sat Dec 27, 2014 9:11 am
Location: Maadi, Cairo, Egypt

Re: VESA in Long Mode using interpretation

Post by BrightLight »

mariuszp wrote:How do I know which ports the graphics card uses? As far as I know, finding the graphics card on PCI will only tell me the base of the registers, not how many there are.
Erm. :roll:
OSDev Wiki wrote:To determine the amount of address space needed by a PCI device, you must save the original value of the BAR, write a value of all 1's to the register, then read it back. The amount of memory can then be determined by masking the information bits, performing a bitwise NOT ('~' in C), and incrementing the value by 1. The original value of the BAR should then be restored. The BAR register is naturally aligned and as such you can only modify the bits that are set. For example, if a device utilizes 16 MB it will have BAR0 filled with 0xFF000000 (0x01000000 after decoding) and you can only modify the upper 8-bits.
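
In code, the procedure quoted above might look like this (a sketch for the memory-BAR case; pci_cfg_read32 as in the earlier sketch, and pci_cfg_write32 is the obvious write counterpart):

    #include <stdint.h>

    uint32_t pci_bar_size(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t bar_off)
    {
        uint32_t orig = pci_cfg_read32(bus, dev, fn, bar_off);
        pci_cfg_write32(bus, dev, fn, bar_off, 0xFFFFFFFF);  /* write all 1s */
        uint32_t probed = pci_cfg_read32(bus, dev, fn, bar_off);
        pci_cfg_write32(bus, dev, fn, bar_off, orig);        /* restore original */

        probed &= 0xFFFFFFF0u;   /* mask the low information bits (memory BAR) */
        return ~probed + 1;      /* bitwise NOT, then increment by 1 */
    }

For the wiki's 16 MB example: the probe reads back 0xFF000000, the NOT gives 0x00FFFFFF, and adding 1 yields 0x01000000 (16 MB).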
You know your OS is advanced when you stop using the Intel programming guide as a reference.
mariuszp
Member
Posts: 587
Joined: Sat Oct 16, 2010 3:38 pm

Re: VESA in Long Mode using interpretation

Post by mariuszp »

Does any OS at all use VESA from long mode anymore? If it's really as problematic as described here, I will go with the native driver solution.