
Kernel video?

Posted: Wed Jul 29, 2009 8:35 am
by GeniusCobyWalker
I was thinking about coding a video driver for my OS.
Not a full GUI just something where I can load pictures.

Will I need to use VESA? (I think that's what it is, from what I have gathered so far)

What are the requirements for starting to code a VESA driver? (gdt, idt, isr, etc)

Re: Kernel video?

Posted: Wed Jul 29, 2009 9:13 am
by xvedejas
Correct me if I'm wrong, but I believe you'll need to go back to a mode with 16-bit addressing (like real mode) and use a BIOS interrupt to switch to a VESA mode, then switch back. From there you write a VESA driver, I suppose.

And you'll want to use VESA unless you want to write an individual driver for each graphics card :)

Re: Kernel video?

Posted: Wed Jul 29, 2009 9:17 am
by Thor
Yeah... if you need a rundown of how to get VESA mode set up, there's this awesome thing called the internet! It has all kinds of info :D

Re: Kernel video?

Posted: Wed Jul 29, 2009 9:45 am
by f2

Re: Kernel video?

Posted: Wed Jul 29, 2009 10:10 am
by quanganht
I'm working with Coby. I know that we have to use VESA. But I don't want to trash the bootloader, and I still want the video mode to be changeable at runtime.
But is there any way to jump from protected mode to unreal mode, set things up, and go back to protected mode, without getting kicked in the @ss?

EDIT: Do I only need to change the Control Register, then make a far jump?

Re: Kernel video?

Posted: Wed Jul 29, 2009 10:38 am
by neon
GeniusCobyWalker wrote:Will I need to use VESA? (I think that's what it is, from what I have gathered so far)
No, VESA is not needed. You can also program the card directly via VGA, or SVGA-class hardware for higher resolutions. This is harder than VESA, of course, but it is an option.
What are the requirements for starting to code a VESA driver? (gdt, idt, isr, etc)
None of those. Just a VBE-compliant BIOS and real mode are all that is needed. There is also a protected mode interface, but it does not seem to be widely supported.
But is there any way to jump from Protected mode to Unreal mode, set up and go back to Protected mode, without getting kicked in the @ss ?
Of course. You can go straight from pmode to rmode if you want to - you just need to ensure that you save the state of everything so it can all be properly restored later.
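
For reference, the controller information block that the real-mode VBE call (INT 0x10, AX=0x4F00) fills in looks roughly like this. A minimal sketch: the field layout follows the VBE spec, but the struct/function names and the validity check are mine, not from any particular kernel.

```c
#include <stdint.h>
#include <string.h>

/* Controller information block returned by INT 0x10, AX=0x4F00.
 * The far pointers are real-mode segment:offset values packed as
 * offset in the low word, segment in the high word. */
#pragma pack(push, 1)
typedef struct {
    char     signature[4];   /* "VESA" on return ("VBE2" on entry for 2.0+ info) */
    uint16_t version;        /* e.g. 0x0300 for VBE 3.0 */
    uint32_t oem_string_ptr; /* far pointer to OEM name string */
    uint32_t capabilities;
    uint32_t video_mode_ptr; /* far pointer to 0xFFFF-terminated mode list */
    uint16_t total_memory;   /* video memory in 64 KiB blocks */
    uint8_t  reserved[492];  /* pads the block to 512 bytes */
} VbeInfoBlock;
#pragma pack(pop)

/* Convert a packed real-mode far pointer to a linear address. */
static uint32_t far_to_linear(uint32_t farptr)
{
    return ((farptr >> 16) << 4) + (farptr & 0xFFFF);
}

/* Basic sanity check before trusting the returned block. */
static int vbe_info_valid(const VbeInfoBlock *info)
{
    return memcmp(info->signature, "VESA", 4) == 0 && info->version >= 0x0200;
}
```

The far-pointer conversion matters because the mode list and OEM string the BIOS hands back are real-mode pointers that the protected-mode kernel has to translate before reading.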

Re: Kernel video?

Posted: Wed Jul 29, 2009 12:08 pm
by Brendan
Hi,
neon wrote:
But is there any way to jump from Protected mode to Unreal mode, set up and go back to Protected mode, without getting kicked in the @ss ?
Of course. You can go straight from pmode to rmode if you want to - you just need to ensure that you save the state of everything so it can all be properly restored later.
Switching to real mode is easy. Switching to real mode while a protected mode (or long mode?) OS and all its IRQ handlers (including PICs, I/O APICs and local APICs) are running; without risking losing IRQs (and locking up the device/s because you failed to send an EOI) and other problems is *extremely* difficult.

However, I *think* it is technically possible, with enough care (and with enough ugly thunking). Here's some steps as a rough guideline:
  • If paging is being used, make sure everything is identity mapped
  • Disable IRQs (CLI)
  • If paging was being used, disable paging
  • Find somewhere for the stack, where ESP = SP (e.g. the high 16-bits of ESP must be zero); and also make sure that the stack makes sense in both protected mode and real mode. For example, in protected mode SS:ESP might be "0x0010:0x00000F00" with the SS descriptor base equal to 0x00000100 so that the stack is at the address 0x00001000, and when you switch back to real mode then SS:SP = 0x0010:0x0F00 which still points to the exact same place (0x00001000).
  • Create a temporary IDT that only contains 2 entries for NMI. The first entry will be for a 32-bit NMI handler at offset "2 * 8" in the IDT, and the second entry will be for a real mode NMI handler at offset "2 * 4" in the same IDT. This allows you to switch back to real mode without messing with NMI handling. The real mode NMI handler would switch back to protected mode (with extreme care taken with segment register usage, as segment registers may not contain sane values) and allow the OS's normal (protected mode) NMI handler to be executed.
  • Disable protected mode.
  • Load "real mode" segment registers, etc. Note: I skipped the part about loading "16-bit protected mode" segments, as this is only necessary to reset the segment limits to 0xFFFF, and I'm going to assume that you either don't care about segment limits or that you actively want 4 GiB limits.
  • Load a new IDT with real mode interrupt handlers. Things like IRQ handlers are stubs that switch back to protected mode and run the OS's normal protected mode IRQ handlers. The same would be done for things like IPIs, and maybe exception handlers. Note: If the real mode code you want to run expects certain interrupts, then you'll need to provide them, and you'll need to make sure your real mode IDT is at 0x00000000. For example, for running VBE code you'd need a valid "int 0x10", but there's no guarantee that the VBE code won't also use other BIOS functions. This means that your OS can't use any of these interrupt vectors for its own IRQ/IPI purposes (even when it's running normally). Fortunately most standard BIOS functions use lower numbered interrupt vectors, so you might have to live without decent exception handling, but your IRQ handlers (and IPI handlers) should be OK.
  • DO NOT reprogram the PIC (and/or APICs). You risk losing IRQs if you do, because reprogramming PICs can reset any pending IRQs and I don't think it's possible to do it without race conditions. However, I would make sure some things are disabled; like debugging (e.g. DR6) and performance monitoring (in the relevant MSRs).
  • Enable IRQs (STI). Now you're running in real mode, where certain interrupts have handlers that switch back to protected mode (basically they do the reverse of everything above) to make sure the interrupt is handled properly.
  • Now, do what you like in real mode!
  • Finally, when you're finished do most of the above steps in reverse to return back to protected mode.
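The stack and IDT arithmetic in the steps above can be sanity-checked numerically. A small sketch (the helper names are mine, not from the post):

```c
#include <stdint.h>

/* Linear address of the stack top in protected mode:
 * SS descriptor base + ESP. */
static uint32_t pm_stack_linear(uint32_t ss_base, uint32_t esp)
{
    return ss_base + esp;
}

/* Linear address of the stack top in real mode: SS * 16 + SP. */
static uint32_t rm_stack_linear(uint16_t ss, uint16_t sp)
{
    return ((uint32_t)ss << 4) + sp;
}

/* Byte offset of vector n in a protected-mode IDT (8-byte gates)
 * versus the real-mode IVT (4-byte segment:offset pairs) - this is
 * the "2 * 8" versus "2 * 4" for the NMI vector in the steps above. */
static uint32_t idt_offset(int vec) { return (uint32_t)vec * 8; }
static uint32_t ivt_offset(int vec) { return (uint32_t)vec * 4; }
```

With the example values from the steps (SS base 0x00000100, ESP 0x00000F00 in protected mode; SS:SP = 0x0010:0x0F00 in real mode), both calculations land on linear address 0x00001000, which is exactly the property the stack setup step requires.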
WARNING: I have not tested any of this, and to be honest I don't even know how some parts of it can be tested (e.g. how can you trigger an NMI at the exact point in time that you disable protected mode, to make sure the NMI stuff works after protected mode is disabled but before you've had a chance to setup segment registers?); and I seriously suggest that you design your OS in such a way that such a horrid mess isn't necessary; but AFAIK, in theory, it should (might) work.

Also, someone might try to tell you that you can skip all of this, and that you could just do a "CLI", switch to real/unreal mode, do your thing, then switch back to protected mode and do a "STI" (and pretend that the BIOS/VBE code won't enable interrupts or do anything else). There's no way to guarantee that all video cards don't enable interrupts; and even if some/all video cards don't enable interrupts, things like setting video modes take far too long, which causes extremely bad interrupt latency and problems elsewhere (like high speed ethernet cards dropping packets, missed timer IRQs, etc).

Basically, any other method (e.g. setting the video mode during boot, using virtual80x86, writing native device drivers, etc) is better (and probably easier in most cases) than switching back to real mode while an OS is running... ;)


Cheers,

Brendan

Re: Kernel video?

Posted: Wed Jul 29, 2009 12:11 pm
by Thor
In other words, it will be a lot easier for you to do it before entering protected mode :D

Re: Kernel video?

Posted: Wed Jul 29, 2009 12:41 pm
by Creature
Thor wrote:In other words, it will be a lot easier for you to do it before entering protected mode :D
That's fine and all, but what will you do if you've set up the resolution, gone to protected mode, your OS is running and all, and the user wants to change the resolution? Wouldn't you have to reboot every time the user wants a resolution change because it requires a BIOS interrupt? Or does setting VBE inside real mode once allow you to use some parts of it in protected mode? Or have I simply got this all wrong?

Re: Kernel video?

Posted: Wed Jul 29, 2009 2:45 pm
by GeniusCobyWalker
I was thinking the same

Re: Kernel video?

Posted: Wed Jul 29, 2009 3:02 pm
by gzaloprgm
There is a VBE pmode interface but it is not supported by many cards.

I think the easiest way is to implement multitasking and virtual 8086 mode.

If that's impossible (i.e. a single-tasking kernel or a 64-bit kernel), you can implement a real mode emulator. thepowersgang was working on one -> http://forum.osdev.org/viewtopic.php?t=17573

Cheers,
Gzaloprgm

Re: Kernel video?

Posted: Wed Jul 29, 2009 3:30 pm
by TyrelHaveman
Creature wrote:
Thor wrote:In other words, it will be a lot easier for you to do it before entering protected mode :D
That's fine and all, but what will you do if you've set up the resolution, gone to protected mode, your OS is running and all, and the user wants to change the resolution? Wouldn't you have to reboot every time the user wants a resolution change because it requires a BIOS interrupt? Or does setting VBE inside real mode once allow you to use some parts of it in protected mode? Or have I simply got this all wrong?
From protected mode you can create a task that runs in Virtual 8086 mode (this is how 32-bit operating systems like Windows can run 16-bit programs). This is what I used for switching video modes and such with VESA. There's info on this in the forums here, the wiki, Google, and the IA-32 manuals from Intel.
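To sketch what "create a task that runs in Virtual 8086 mode" involves: the kernel builds an IRET frame whose saved EFLAGS has the VM bit (bit 17) set, so executing IRET drops the CPU into v86 mode at the given real-mode CS:IP. A minimal illustration; the struct and function names are mine, not from any particular kernel:

```c
#include <stdint.h>

#define EFLAGS_IF (1u << 9)   /* interrupts enabled */
#define EFLAGS_VM (1u << 17)  /* virtual-8086 mode */

/* Stack frame that IRET pops when returning to v86 mode; the extra
 * segment slots hold real-mode segment values, zero-extended. */
struct v86_iret_frame {
    uint32_t eip, cs, eflags, esp, ss;  /* normal inter-privilege frame */
    uint32_t es, ds, fs, gs;            /* extra slots popped for v86 */
};

/* Prepare a frame that would start v86 execution at seg:off with a
 * real-mode stack at stack_seg:stack_off. */
static void v86_frame_init(struct v86_iret_frame *f,
                           uint16_t seg, uint16_t off,
                           uint16_t stack_seg, uint16_t stack_off)
{
    f->eip    = off;
    f->cs     = seg;
    f->eflags = EFLAGS_IF | EFLAGS_VM;  /* VM=1 is what enters v86 mode */
    f->esp    = stack_off;
    f->ss     = stack_seg;
    f->es = f->ds = f->fs = f->gs = 0;
}
```

The actual IRET, the TSS setup for returning to the kernel, and the #GP monitor that emulates privileged instructions are separate pieces; the IA-32 manuals cover the full picture.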

Re: Kernel video?

Posted: Wed Jul 29, 2009 5:20 pm
by neon
Brendan wrote:Switching to real mode is easy. Switching to real mode while a protected mode (or long mode?) OS and all it's IRQ handlers (including PICs, I/O APICs and local APICs) is running; without risking losing IRQs (and locking up the device/s because you failed to send an EOI) and other problems is *extremely* difficult.
Very good points. It is much easier to do it before the entire system is up and running, without the risk of causing issues.

v86 mode might be possible - however, I do not know how well VESA works in it, so I can't say whether it will work or not.

Re: Kernel video?

Posted: Wed Jul 29, 2009 5:28 pm
by gzaloprgm
If you manage to emulate all the sensitive opcodes (int, cli, sti, pushf, popf, out, in, and their prefixes), it will work exactly as if you were in real mode.

I am using VM8086 for VESA and it works fine in Bochs, QEMU, VirtualBox, Virtual PC and VMware.

The only drawback may be speed: if your vm8086 monitor prints debug info for each opcode, it will take some seconds to execute VBE functions.
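The core of such a monitor is a #GP handler that looks at the faulting opcode byte and decides what to emulate. A minimal classification sketch (the enum and function names are mine; a real monitor also consumes the imm8 of INT n and handles multi-byte prefixes properly):

```c
#include <stdint.h>

/* Instructions that trap with #GP in v86 mode (when IOPL < 3) and
 * typically have to be emulated by the monitor. One-byte opcodes only. */
enum v86_op {
    V86_CLI, V86_STI, V86_PUSHF, V86_POPF,
    V86_INT, V86_IRET, V86_IN, V86_OUT,
    V86_PREFIX, V86_UNHANDLED
};

static enum v86_op v86_classify(uint8_t opcode)
{
    switch (opcode) {
    case 0xFA: return V86_CLI;
    case 0xFB: return V86_STI;
    case 0x9C: return V86_PUSHF;
    case 0x9D: return V86_POPF;
    case 0xCD: return V86_INT;        /* INT imm8 */
    case 0xCF: return V86_IRET;
    case 0xEC: return V86_IN;         /* IN AL, DX */
    case 0xEE: return V86_OUT;        /* OUT DX, AL */
    case 0x66:
    case 0x67: return V86_PREFIX;     /* operand/address size prefix */
    default:   return V86_UNHANDLED;
    }
}
```

For VBE calls specifically, the INT and IN/OUT cases do most of the work; CLI/STI and PUSHF/POPF are emulated against a virtual interrupt flag rather than the real one.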

Re: Kernel video?

Posted: Thu Jul 30, 2009 1:31 am
by Brendan
Hi,
Creature wrote:That's fine and all, but what will you do if you've set up the resolution, gone to protected mode, your OS is running and all, and the user wants to change the resolution? Wouldn't you have to reboot every time the user wants a resolution change because it requires a BIOS interrupt?
If there isn't a native video driver, then IMHO rebooting to change video modes is acceptable.

In my experience, there are really only three main reasons why the user changes video mode:
  • They're actually configuring things (e.g. just installed the OS, or bought a new video card or new monitor). In this case, rebooting to change video modes is acceptable because it's very rare (e.g. might happen a few times in several years).
  • The application/s they're using don't support the video mode the user wants to use. This is entirely avoidable. Basically if a user needs to switch video modes because of a normal application, then it's your fault for not making everything resolution independent to begin with (e.g. where the video driver translates graphics from some standardized representation into whatever the video mode happens to need on behalf of all other software, and all applications work in all possible video modes because of this).
  • They're playing 3D game/s, and want to reduce the resolution to improve frame rates or increase the resolution to improve video quality. In this case, (if there's no native video driver) the user won't be happy because there's no hardware acceleration, and being able to change video modes isn't going to make them "less unhappy".
IMHO this means that it's sensible for an OS developer to save time by setting a video mode during boot, and to spend the time they saved writing device drivers (e.g. device drivers for sound, ethernet, USB, etc, where the devices are entirely unusable when no driver is present).
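The "resolution independent" approach above boils down to two translations the video driver does on behalf of everything else: mapping normalized coordinates to whatever pixel grid the current mode has, and mapping a pixel to a byte offset in the framebuffer. A minimal sketch, with struct and function names of my own invention:

```c
#include <stdint.h>

/* Per-mode parameters, as a VBE mode info block would provide them. */
struct video_mode {
    uint32_t width, height;      /* pixels */
    uint32_t pitch;              /* bytes per scanline */
    uint32_t bytes_per_pixel;
};

/* Byte offset of pixel (x, y) in a linear framebuffer for this mode. */
static uint32_t fb_offset(const struct video_mode *m, uint32_t x, uint32_t y)
{
    return y * m->pitch + x * m->bytes_per_pixel;
}

/* Map a resolution-independent coordinate in [0, 65536) onto a pixel
 * extent, so the same drawing commands work in any video mode. */
static uint32_t to_pixel(uint32_t norm, uint32_t extent)
{
    return (norm * extent) >> 16;
}
```

With this split, applications only ever issue commands in the normalized space, and a mode change is invisible to them; the driver recomputes the mapping and redraws.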


Cheers,

Brendan