Page 2 of 2

Posted: Sat Jan 05, 2008 5:55 pm
by bewing
Hmmmm. It sounds to me like real mode is excellent for basic development ... but I suspect that Brendan is right that it is not sufficient when it comes to supporting multiple video boards/monitors.

I'm kinda curious, though -- how do you do the INT call into the BIOS, in real mode, when you have interrupts turned off? Fake up the stack so that the BIOS's IRET works properly, grab the entry point out of the real mode IVT, and call it?

Posted: Sat Jan 05, 2008 11:10 pm
by Brendan
Hi,
bewing wrote:I'm kinda curious, though -- how do you do the INT call into the BIOS, in real mode, when you have interrupts turned off? Fake up the stack so that the BIOS's IRET works properly, grab the entry point out of the real mode IVT, and call it?
CLI/STI only affects maskable IRQs; it has no effect on software interrupts, exceptions or NMI. If you leave the real mode IVT intact you can just use an INT instruction even though maskable interrupts are disabled.


Cheers,

Brendan

Posted: Sun Jan 06, 2008 2:02 am
by Brendan
Hi,
Dex wrote:I would say, going to and from realmode for mode changing is as safe as any other method.
With virtual 8086 mode there's no need to disable interrupts or suffer extremely high interrupt latency.

I did a quick test - I measured the time it takes to switch from 40*25 text mode to various graphics modes using the video card's ROM in real mode, with RDTSC to measure the time taken. The test computer has an AMD K6 CPU (300 MHz) with an S3 video card.

The results are:
  • 640 * 480 * 8 bpp = 48590681 cycles = 161.97 ms
  • 640 * 480 * 16 bpp = 48390218 cycles = 161.30 ms
  • 640 * 480 * 32 bpp = 48393175 cycles = 161.31 ms
  • 800 * 600 * 8 bpp = 48604811 cycles = 162.02 ms
  • 800 * 600 * 16 bpp = 48402809 cycles = 161.34 ms
  • 800 * 600 * 32 bpp = 48405055 cycles = 161.35 ms
  • 1024 * 768 * 8 bpp = 55475411 cycles = 184.92 ms
  • 1024 * 768 * 16 bpp = 55269998 cycles = 184.23 ms
  • 1024 * 768 * 32 bpp = 55271380 cycles = 184.24 ms
This is just one computer, but it gives a rough idea of the amount of interrupt latency we're talking about.
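As a sanity check, the cycle counts above convert to milliseconds like this (Python is used here just for the arithmetic; `cycles_to_ms` is a made-up helper name, not part of the test code):

```python
# Convert the RDTSC cycle counts above to milliseconds on the 300 MHz K6.
CPU_HZ = 300_000_000

def cycles_to_ms(cycles):
    """Milliseconds taken for a given number of cycles at CPU_HZ."""
    return cycles / CPU_HZ * 1000

print(round(cycles_to_ms(48_590_681), 2))   # 640*480*8bpp  -> 161.97
print(round(cycles_to_ms(55_475_411), 2))   # 1024*768*8bpp -> 184.92
```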

Now let's consider a simple device - a serial port. Worst case would be a baud rate of 115200 configured for 5 data bits, no parity and one stop bit, without any FIFO. Counting the start bit, each frame is 7 bits, so you'd get about 16457 interrupts per second, or about 61 us between IRQs. This means that while you're switching video modes you'd miss up to roughly 3000 IRQs.
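The worst-case figures work out as follows (a quick sketch in Python; note it counts the start bit, so a 5N1 frame is 7 bits on the wire):

```python
# Worst case from above: 115200 baud, 5 data bits, no parity, 1 stop bit,
# no FIFO, so the UART raises one IRQ per received frame.
BAUD = 115200
FRAME_BITS = 1 + 5 + 0 + 1              # start + data + parity + stop
SWITCH_TIME = 0.185                     # worst measured mode switch (s)

irqs_per_second = BAUD / FRAME_BITS     # ~16457 IRQs/sec
gap_us = 1_000_000 / irqs_per_second    # ~61 us between IRQs
missed = SWITCH_TIME * irqs_per_second  # ~3045 IRQs during one mode switch

print(round(irqs_per_second), round(gap_us, 1), round(missed))
```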

This isn't too likely though - typically at this baud rate you'd be doing data transfers with 8 data bits and using a 16 byte FIFO. Using "8N1" (10 bits per frame, including the start bit) we'd be looking at 11520 bytes received per second. If you set the FIFO interrupt threshold to 1, the serial port would generate an IRQ after one byte is received and could buffer 15 more bytes of data between triggering an IRQ and having that IRQ serviced. In this case we'd be able to tolerate interrupt latency as bad as about 1.3 ms before losing any data. The interrupt latency caused by your video mode switching would only be over a hundred times too much.

We could also look at this from the opposite perspective: if we know the worst case interrupt latency is severely crap, then we can calculate the maximum speed the serial port can operate at without losing data. With "8N1" and the 16 byte FIFO (with the FIFO interrupt threshold set to 1) we can receive 15 bytes of data in 185 ms without losing data, which works out to a maximum baud rate of roughly 800 bits per second. Unfortunately this is far too slow, even for a simple/slow device like a serial mouse (which typically operates at 1200 bps with 7 data bits and 1 stop bit).
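Both directions of the FIFO arithmetic can be sketched the same way (again counting the start bit, so an 8N1 frame is 10 bits; the variable names are illustrative):

```python
# Typical case from above: 115200 baud, 8N1, 16-byte FIFO with the
# receive interrupt threshold set to 1 (leaving 15 bytes of headroom).
BAUD = 115200
FRAME_BITS = 1 + 8 + 0 + 1                         # "8N1" incl. start bit
HEADROOM = 15                                      # FIFO bytes after the IRQ
SWITCH_TIME = 0.185                                # worst mode switch (s)

bytes_per_second = BAUD / FRAME_BITS               # 11520 bytes/sec
tolerance_ms = HEADROOM / bytes_per_second * 1000  # ~1.3 ms latency budget

# Opposite direction: the fastest baud rate that survives 185 ms latency.
max_baud = HEADROOM / SWITCH_TIME * FRAME_BITS     # ~811 bps

print(round(bytes_per_second), round(tolerance_ms, 2), round(max_baud))
```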
Dex wrote:I use this method and have never had any problems, or had any reports of problems.
Are you saying you've done tests where you've used this method to change video modes while one or more high speed network cards are generating around 10000 IRQs per second each, while the serial port/s are sending and/or receiving data, while other devices (SATA, USB controllers, etc) are also generating lots of IRQs, and while you're trying to handle IPIs from other CPUs; and that the insanely high interrupt latency didn't cause any problem?

Or, are you saying your video code has never been used under conditions involving a high number of interrupts? Or perhaps you're just saying your video code crashes often but nobody reports it?
Dex wrote:If you set the OS into the state it would be in if you were going to real mode or pmode to stay, then you should have no problems.
Agreed; but putting the OS into a state similar to the one it would be in if you were going to real mode to stay means you've got no reason to leave interrupts disabled while in real mode. Alternatively you could probably shut down or suspend device drivers, daemons, etc. to severely reduce the number of IRQs (and the chance of interrupt latency related problems). However, in both of these cases you can't arbitrarily switch video modes while the OS continues to operate normally, which (IMHO) is what other people here are hoping to do.

Let me be brutally honest here. Disabling interrupts and switching back to real mode to change video modes is something that a lazy novice would do. It isn't something any decent programmer should consider or recommend, unless it's done under strict conditions (e.g. during boot only, where you don't need to worry about other devices).


Cheers,

Brendan

Posted: Sun Jan 06, 2008 4:02 pm
by Dex
Have you tried it, and tested it under these conditions?
I have tested it under all normal operating conditions and it works fine.

It's a common tactic, when your argument has no foundation, to put down the people that disagree with you.
That's fine; if you want to waste your time, I will just post what works for me - it's up to the OS dev to decide which method to use.
But why should they listen to me, I am only a lazy novice :lol:

Posted: Sun Jan 06, 2008 4:18 pm
by Combuster
Dex wrote:Have you tried it, and tested it under these conditions?
I have tested it under all normal operating conditions and it works fine.

It's a common tactic, when your argument has no foundation, to put down the people that disagree with you.
That's fine; if you want to waste your time, I will just post what works for me - it's up to the OS dev to decide which method to use.
But why should they listen to me, I am only a lazy novice :lol:
I want to note here that this conclusion depends on what you consider 'normal'. In your case, Dex (the single-task OS), the circumstances are much less likely to occur than in Brendan's OS, which is very heavy in networking.

I.e. you both have a valid point.

Posted: Mon Jan 07, 2008 8:38 am
by Ready4Dis
MarkOS wrote:
Ready4Dis wrote:I dunno, I use my own bootloader, and my function is about 40 bytes and requires no multitasking support or anything else in the kernel. I load my kernel to a low memory location, then memory map it to 0xF0000000. What is GRUB saving the lower 1MB for? Also, it doesn't have to be the kernel; like I said, my VESA driver does the dropping to real mode, so as long as your kernel can allocate memory below 1MB, I don't see the problem :). I know it's not that huge, but then your video driver needs to spawn a thread and do special-case IPC to tell that thread what to do (set screen mode + resolution, or whatever you need). Mine doesn't require any special code in my kernel (besides having a 16-bit entry set up so I can drop back into 16-bit pmode before going to real mode). I just don't see the point in doing the work to get v86 working just to set a video mode, with all that extra code in your kernel. I know it's not that huge of a difference; I just don't see a point in doing it, personally.
Can you post your code? I'm very interested in it.
Well, here is my entire RealModeInt assembly file for your viewing...

Calling a real mode interrupt works as normal except for one thing: the top 16 bits of eax hold the interrupt number to call, and the low 16 bits (ax) hold whatever you want in ax :) On completion the result is returned in ax, so you can still check statuses.

Code:

[bits	32]
;Used to perform a real-mode interrupt
RealModeInt:
	mov		[InitialESP],	esp				;Store this for later
	mov		[RealModeAX],	ax				;Store ax for later	
	shr		eax,	16
	mov		[RealModeINT],	al				;Grab int number!

;Now we can hack eax :)
	call	disable_irqs

	lidt	[idt_real]						;Load the real mode IDT
;Now lets load real mode interrupt table and jump to 16-bit pmode!
	jmp		0x28:do_16bpmode-0xEFFF8000		;Physical address
[bits 16]
do_16bpmode:
	mov		ax,		0x30					;16-bit data selector
	mov		ds,		ax
	mov		es,		ax
	mov		fs,		ax
	mov		gs,		ax
	mov		ss,		ax

;Currently in 16-bit pmode
	mov		eax,	cr0
	and		eax,	0x7FFFFFFE				;Clear PE (bit 0) and PG (bit 31)
	mov		cr0,	eax

	jmp		0x0000:do_realmode+0x8000
do_realmode:

	xor		ax,		ax
	mov		ds,		ax
	mov		fs,		ax
	mov		gs,		ax
	mov		ss,		ax
	mov		esp,	0x1000

	mov		ax,		0x2F00						;0x2F000 pmode :)
	mov		es,		ax

	mov		al,		[RealModeINT+0x8000]		;Grab our interrupt #
	mov		[IntPlace+0x8001],	al					;Write in the interrupt #

	mov		ax,		[RealModeAX+0x8000]			;Restore AX...

IntPlace:
	int		0x00

	mov		[RealModeAX+0x8000],	ax			;Store ax...

	mov		eax,	cr0
	or		eax,	1 + 0x80000000				;Enable pmode AND paging :)
	mov		cr0,	eax

	jmp		0x08:do_32bpmode+0x8000				;Back to 32-bit @ 0x8000
[bits 32]
do_32bpmode:
	jmp		0x08:do_kern
do_kern:

	mov		ax,		0x10
	mov		ds,		ax
	mov		es,		ax
	mov		fs,		ax
	mov		gs,		ax
	mov		ss,		ax						;Restore SS

	mov		ax,		[RealModeAX]			;Restore contents of AX
	lidt	[idt_desc]						;Load standard IDT
	mov		esp,	[InitialESP]			;Restore the original stack pointer


	call	enable_irqs
	ret

[bits 32]

InitialESP:
	dd		0
RealModeAX:
	dw		0
RealModeINT:
	db		0
	
idt_real:
	dw	0x3ff
	dd	0x0000
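The register packing convention described above can be sketched in Python (the function names here are illustrative, not part of the original code; note that RealModeInt only keeps the low byte of the shifted value as the interrupt number):

```python
# Sketch of the RealModeInt calling convention: the caller packs the
# interrupt number into the top 16 bits of EAX and the desired AX value
# into the low 16 bits before calling the routine.
def pack_eax(int_no, ax):
    """Build the 32-bit EAX value the routine expects on entry."""
    return ((int_no & 0xFFFF) << 16) | (ax & 0xFFFF)

def unpack_eax(eax):
    """What RealModeInt does on entry: split EAX into (int_no, ax)."""
    return (eax >> 16) & 0xFF, eax & 0xFFFF

eax = pack_eax(0x10, 0x4F02)            # e.g. a VBE "set mode" INT 0x10 call
assert unpack_eax(eax) == (0x10, 0x4F02)
```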
bewing wrote:Hmmmm. It sounds to me like real mode is excellent for basic development ... but I suspect that Brendan is right that it is not sufficient when it comes to supporting multiple video boards/monitors.
It works fine for me, but I don't change my video resolution too often while logging my serial port, so I haven't witnessed any issues thus far.
Brendan wrote:Of course there are other problems, like being unable to use this method to support multiple video cards.
I dunno, I'll let you know when it fails though. So far it's worked fine on my desktop with an nVidia 6800GT, an old laptop with Intel onboard graphics, Bochs, QEMU, and a few other desktop PCs with various BIOS/video card combos. If I have to change later because of some incompatibility, it's not that much of my code that I have to remove/change, and I'm still going to try to put as much of it in my driver as possible, since it won't be used unless a valid driver for your video card is not found or supported.

I just don't see my resolution changing often enough to cause a huge interruption in whatever is running in the background, but maybe in the future I will run into a problem and have to change it. Who knows - that's what development is all about: trying things and fixing them when you run into problems. As newer video cards support VBE3 (the 16-bit pmode interface), that will probably be my primary fall-back VESA driver, with the drop to real mode and 16-bit rmode INTs as the fall-back's fall-back; and if that fails, you might want to look at a new PC.

My OS isn't for the masses yet - it's for me, and for fun, and it works on all my hardware so far. I'm not going to start fixing driver 'bugs' that aren't even a problem until I have my OS much further along and start implementing more finalized drivers. I do appreciate the info about latencies, and if/when I start running into issues I may very well scrap this and implement v86 mode. You would think that, since there haven't been any real mode OSs since DOS, there would be a standard pmode interface by now for something that is required on all desktop PCs (a video card), but hardware and software don't always agree, unfortunately!

Posted: Tue Jan 08, 2008 7:27 am
by Brendan
Hi,
Ready4Dis wrote:
Brendan wrote:Of course there are other problems, like being unable to use this method to support multiple video cards.
I dunno, I'll let you know when it fails though. So far it's worked fine on my desktop with an nVidia 6800GT, an old laptop with Intel onboard graphics, Bochs, QEMU, and a few other desktop PCs with various BIOS/video card combos.
By "multiple video cards" I meant multiple video cards in the same computer at the same time. For example, my Windows machine has onboard Intel graphics and a GeForce video card and 2 monitors, where the BIOS completely ignores the GeForce video card (but Windows doesn't).
Ready4Dis wrote:It works fine for me, but I don't change my video resolution to often while logging my serial port, so i haven't whitnessed any issues thus far.
In this case you won't have issues caused by interrupt latency - if you're sending data to the serial port (or parallel port, or network card) in the background you'd only get reduced bandwidth, not lost data.

Receiving data (with a serial, parallel or network card) is where the problem is (data loss because the IRQ handler isn't executed fast enough to avoid buffer overflows). I'd also expect problems with sound cards (potentially an audible glitch as the buffer for the digitized sound being played isn't filled quickly enough, and problems if you're receiving data from the microphone). For hard disks, floppy drives, CDs, flash memory, etc. you'd only get reduced bandwidth (no data loss or other problems). Depending on your OS you may also get system timer clock drift due to missed PIT IRQs. I'm not sure about PS/2 keyboard and mouse (AFAIK the PS/2 controller does flow control while the device buffers data until it can be sent to the controller, so reliability probably depends on the quality of the device and how much data fits in its buffer), or USB devices.
Ready4Dis wrote:If I have to change later because of some incompatibilities, then it's not that much of my code that I have to remove/change, and I am still going to try to put as much of it in my driver as possible, since it won't be used unless a valid driver for your video card is not found or supported. I just don't see my resolution changing often enough to cause a huge interrupt in whatever it is you're doing in the background, but maybe in the future I will run into a problem and require changing it, who knows, that's what development is all about, trying things and fixing them if you run into problems.
I'm fussy - in general, for me "minor" problems like system clock drift and reduced bandwidth are unacceptable, and potential data loss (not just proven data loss) is intolerable. If I have to change my implementation later then I've stuffed up and wasted my time. If I also have to change my design because of bad decisions based on a bad implementation, then I've stuffed up in the worst possible way. For me, "sort of working for now" is the same as "it never worked at all", and "write once, use forever" is my idea of perfect. Disabling interrupts and switching back to real mode isn't acceptable (and neither are the protected mode VBE interfaces and virtual 8086 mode, but for performance reasons that don't apply to most people).


Cheers,

Brendan

Posted: Tue Jan 08, 2008 1:52 pm
by Dex
Brendan wrote:If I have to change my implementation later then I've stuffed up and wasted my time. If I also have to change my design because of bad decisions based on a bad implementation, then I've stuffed up in the worst possible way. For me, "sort of working for now" is the same as "it never worked at all", and "write once, use forever" is my idea of perfect.
From my understanding of your OS, you have done a number of rewrites.

Posted: Wed Jan 09, 2008 8:48 am
by Ready4Dis
Brendan wrote:Hi,
Ready4Dis wrote:
Brendan wrote:Of course there are other problems, like being unable to use this method to support multiple video cards.
I dunno, I'll let you know when it fails though. So far it's worked fine on my desktop with an nVidia 6800GT, an old laptop with Intel onboard graphics, Bochs, QEMU, and a few other desktop PCs with various BIOS/video card combos.
By "multiple video cards" I meant multiple video cards in the same computer at the same time. For example, my Windows machine has onboard Intel graphics and a GeForce video card and 2 monitors, where the BIOS completely ignores the GeForce video card (but Windows doesn't).
I don't understand how that affects the discussion. Whether you use the BIOS in real mode or v86, you are still using the BIOS, so it won't support multiple video cards in one system; a proper driver is required, in which case you won't be using either of the above-mentioned implementations.
Ready4Dis wrote:It works fine for me, but I don't change my video resolution too often while logging my serial port, so I haven't witnessed any issues thus far.
Brendan wrote: In this case you won't have issues caused by interrupt latency - if you're sending data to the serial port (or parallel port, or network card) in the background you'd only get reduced bandwidth, not lost data.

Receiving data (with a serial, parallel or network card) is where the problem is (data loss because the IRQ handler isn't executed fast enough to avoid buffer overflows). I'd also expect problems with sound cards (potentially an audible glitch as the buffer for the digitized sound being played isn't filled quickly enough, and problems if you're receiving data from the microphone). For hard disks, floppy drives, CDs, flash memory, etc. you'd only get reduced bandwidth (no data loss or other problems). Depending on your OS you may also get system timer clock drift due to missed PIT IRQs. I'm not sure about PS/2 keyboard and mouse (AFAIK the PS/2 controller does flow control while the device buffers data until it can be sent to the controller, so reliability probably depends on the quality of the device and how much data fits in its buffer), or USB devices.
I meant that I don't log data coming to my serial port while changing resolutions, so it's not an issue for me. I am aware that something *may* get interrupted, but if someone doesn't have a video card with VBE3 or a supported video driver, they should be glad it works at all instead of reverting to VGA ;). OK, that's not really true, but I may implement v86 if it really does become a problem; it's not on my todo list right now since what I have works.
Ready4Dis wrote:If I have to change later because of some incompatibilities, then it's not that much of my code that I have to remove/change, and I am still going to try to put as much of it in my driver as possible, since it won't be used unless a valid driver for your video card is not found or supported. I just don't see my resolution changing often enough to cause a huge interrupt in whatever it is you're doing in the background, but maybe in the future I will run into a problem and require changing it, who knows, that's what development is all about, trying things and fixing them if you run into problems.
Brendan wrote: I'm fussy - in general, for me "minor" problems like system clock drift and reduced bandwidth are unacceptable, and potential data loss (not just proven data loss) is intolerable. If I have to change my implementation later then I've stuffed up and wasted my time. If I also have to change my design because of bad decisions based on a bad implementation, then I've stuffed up in the worst possible way. For me, "sort of working for now" is the same as "it never worked at all", and "write once, use forever" is my idea of perfect. Disabling interrupts and switching back to real mode isn't acceptable (and neither are the protected mode VBE interfaces and virtual 8086 mode, but for performance reasons that don't apply to most people).


Cheers,

Brendan
Well, I don't have system clock drift because that's not how my OS handles time, but that is a good point for others to watch. If I get reduced bandwidth for 100 ms while changing my video resolution, to me that is acceptable, especially if you are running without proper drivers.

My OS is a work in progress. I don't like to think about every single possible problem that may arise in the future and never implement anything until I'm 100% sure it will be perfect, because then I would never get anywhere. I think about the problem, come up with a solution, think of a few potential problems and the cost of a rewrite, and implement. My video driver took me about 30 minutes to write, while you spent much longer just thinking about it and benchmarking. I can re-implement it using v86 very easily if needed. While I may have wasted 30 minutes on a working driver that may have potential issues in a very few select cases, it is still better (to me anyway - you don't have to agree with me) than wasting 4 days trying to come up with the 'perfect' fall-back driver. Most things in OS design aren't perfect and never will be; there are always compromises and things that come up unplanned no matter how long you think about it and try to make it perfect. A new hardware device might come out that breaks your mold, who knows. I've run into many implementation problems; that's mostly how I learn. I am currently rewriting my OS anyway, and plan to re-implement my VESA driver using the same real mode fall-back.

And what's wrong with the VBE3 pmode interface? There is no reason to even leave pmode and go to anything else, no extra exceptions to handle, etc. The only problem I know of is its lack of support (not sure if that's gotten better since I last checked). VBE2's pmode interface is ridiculous; unless you are double-buffering in video memory - about the only time it'd be somewhat useful - it's probably not worth using in most cases.

Anyway, like I said, I don't mind writing code more than once if I run into a problem; sometimes it's easier to find limitations by implementing than it is to come up with them by dreaming up situations. You say it never worked - that's fine; I say it worked for what I needed it for at the time. That's why I have all my modules loaded dynamically: I can replace any one of them without breaking other things. Want to replace my video driver? Sure - point the kernel to a different file on disk, or replace the current file; it's really not that huge of a deal for me. Something a bit more complex, like my IPC, that would require a major rewrite, I would be more concerned about. Pretty much anything that changes an interface for user apps/drivers I would consider a bit more major to rewrite, because I then have to fix all the drivers and recompile everything - but I only have a handful, and once I get further along all that will be worked out - while 'minor' things like a specific driver's implementation that can be replaced very easily aren't a biggy from my view.