
Re: Video BIOS reprograms PIT chip

Posted: Thu May 23, 2013 9:52 pm
by Brendan
Hi,
rdos wrote:
Brendan wrote:I'm not sure why you think anything I've said has anything to do with any windowing API. Typically the monitor says what its preferred resolution is (to avoid scaling) and the video mode should be set to that exact same resolution. No other software (applications, GUI, whatever) should have any reason to care what that video mode might be - software just tells the video driver to draw lines, rectangles, textures, etc. using a (virtual) coordinate system (and hopes that one day it will all be hardware accelerated).
That implies the OS can get the preferred resolution of the monitor, which might not be supported at all (my case), or might not be available for the particular monitor.
That's easy to work around - if the BIOS and/or UEFI doesn't support the corresponding "get EDID" service, you either assume the monitor is too old (and that only conservative video modes like VGA 640*480 are safe) or allow the user to provide the EDID as a file.

In my experience the BIOS and/or UEFI "get EDID" service does work in almost all cases; the exceptions are a few cases where dodgy equipment (e.g. a KVM switch) sits between the computer and the monitor, and most emulators (where there isn't an actual monitor).
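
For illustration, here's a rough sketch of pulling the preferred (native) resolution out of a 128-byte EDID block once you have it (whether it came from VBE/DDC function 4F15h, the UEFI EDID protocols, or a user-supplied file). The function name is invented; only the offsets come from the EDID layout:

Code:
#include <stdint.h>

/* Parse the first detailed timing descriptor (offset 54) of a 128-byte
 * EDID block; on almost all monitors this is the preferred timing. */
int edid_preferred_mode(const uint8_t edid[128], unsigned *width, unsigned *height)
{
    static const uint8_t magic[8] = { 0x00, 0xFF, 0xFF, 0xFF,
                                      0xFF, 0xFF, 0xFF, 0x00 };
    for (int i = 0; i < 8; i++)
        if (edid[i] != magic[i])
            return -1;                          /* not an EDID block */

    const uint8_t *dtd = edid + 54;             /* first detailed timing descriptor */
    *width  = dtd[2] | ((dtd[4] & 0xF0) << 4);  /* horizontal active pixels */
    *height = dtd[5] | ((dtd[7] & 0xF0) << 4);  /* vertical active lines    */
    return (*width && *height) ? 0 : -1;
}
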
rdos wrote:The OS can decide to always set the same resolution when it knows about the preferred resolution. The resolution and bit-organisation that the application asks for are only a guide.

But even disregarding that, text mode still requires switching between video modes when there are applications running in both text mode and graphics mode.
Text mode should've been banned 20 years ago (it doesn't support Unicode/internationalisation, italics, bold, different font styles and sizes, anti-aliasing, etc). The right way is for the application to tell the video driver what to draw. For example, an application could tell the video driver to draw the string "Hello world" at virtual coords (x, y) using a mono-space font, and the video driver can ask the font engine to convert it into an alpha mask for the video driver to display (and maybe cache for next time). No video mode switches are needed (and if anyone writes a native driver the font data could be cached in the video card's RAM and drawn with a fast "vram to vram bit blit" without requiring changes to the application).
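
To be clear, I'm not describing any existing API here - just the general shape of a "tell the driver what to draw" interface. A hypothetical sketch, with all names invented:

Code:
#include <stdint.h>

/* The driver owns the video mode; the application only works in
 * virtual coordinates. */
typedef struct {
    int32_t     x, y;          /* virtual coordinates, not pixels */
    uint32_t    colour;        /* e.g. ARGB */
    uint16_t    font_id;       /* a registered mono-space font */
    uint16_t    point_size;
    const char *utf8_text;     /* "Hello world" */
} draw_text_cmd;

/* The driver asks the font engine for an alpha mask, caches it (in RAM,
 * or in VRAM for a native driver), then blits it at whatever resolution
 * the current video mode happens to be. */
int video_draw_text(const draw_text_cmd *cmd);
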
rdos wrote:
Brendan wrote:Only for less popular OS's that are *designed* to be less popular forever. I'd be willing to bet that if someone wanted to write a native video driver for your OS (complete with all the acceleration, MPEG decoder, multi-monitor, GPGPU, etc) and asked for documentation for any/all APIs they need, you'd have no choice but to tell them those APIs don't exist and that it's a waste of time because your OS and applications can't support any of it anyway.
Part of it, but not all. The graphics API was not built for games or visual design. It was built for simple embedded uses, and is modeled around PicoGUI.
From PicoGUI's web site:
PicoGUI wrote:
  • PicoGUI's theme engine isn't there just to look pretty, it's there to enforce a separation of content and presentation. This way the application only specifies high-level commands, and it's completely up to the theme to specify which pixel goes where. This allows the same application binary to run on a true color display, 160x16 grayscale LCD, or an ASCII device
With PicoGUI (or something like it) built into the video driver the application wouldn't need to know or care which video mode the video driver felt like using. Application just sends "high-level commands" (and I'd expect that a native video driver could use hardware acceleration to do the rendering of those high level commands).

If the video driver isn't responsible for drawing (e.g. rendering graphics based on high-level commands) then the abstraction provided by the video driver is not adequate - the rendering can't use hardware acceleration and the application has to care about things like which video mode is being used.

If the abstraction provided by the video driver is adequate (hardware acceleration is possible and applications don't care about things like which video mode is being used), then there is no need to switch video modes after boot (and no need for virtual 8086 mode and/or emulation and/or code that you can't guarantee won't break on different systems and/or code that will break on future UEFI systems).


Cheers,

Brendan

Re: Video BIOS reprograms PIT chip

Posted: Fri May 24, 2013 1:01 am
by rdos
Brendan wrote: That's easy to work around - if the BIOS and/or UEFI doesn't support the corresponding "get EDID" service, you either assume the monitor is too old (and that only conservative video modes like VGA 640*480 are safe) or allow the user to provide the EDID as a file.

In my experience the BIOS and/or UEFI "get EDID" service does work in almost all cases; the exceptions are a few cases where dodgy equipment (e.g. a KVM switch) sits between the computer and the monitor, and most emulators (where there isn't an actual monitor).
I had no idea about this service, but I'll look into it.
Brendan wrote: Text mode should've been banned 20 years ago (it doesn't support Unicode/internationalisation, italics, bold, different font styles and sizes, anti-aliasing, etc). The right way is for the application to tell the video driver what to draw. For example, an application could tell the video driver to draw the string "Hello world" at virtual coords (x, y) using a mono-space font, and the video driver can ask the font engine to convert it into an alpha mask for the video driver to display (and maybe cache for next time). No video mode switches are needed (and if anyone writes a native driver the font data could be cached in the video card's RAM and drawn with a fast "vram to vram bit blit" without requiring changes to the application).
Not so. Text mode is much faster than graphics mode, no matter the level of hardware acceleration. Also, editors for code (or configuration files) should not care about or support Unicode, fonts, italics, bold and all the rest. Shells will not need Unicode, and I don't intend to support shells in different languages (always using English is adequate). It is not end users who use the shell, but people configuring the system.
Brendan wrote: From PicoGUI's web site:
PicoGUI wrote:
  • PicoGUI's theme engine isn't there just to look pretty, it's there to enforce a separation of content and presentation. This way the application only specifies high-level commands, and it's completely up to the theme to specify which pixel goes where. This allows the same application binary to run on a true color display, 160x16 grayscale LCD, or an ASCII device
With PicoGUI (or something like it) built into the video driver the application wouldn't need to know or care which video mode the video driver felt like using. Application just sends "high-level commands" (and I'd expect that a native video driver could use hardware acceleration to do the rendering of those high level commands).

If the video driver isn't responsible for drawing (e.g. rendering graphics based on high-level commands) then the abstraction provided by the video driver is not adequate - the rendering can't use hardware acceleration and the application has to care about things like which video mode is being used.
I only ported the engine (or rather used the ideas, as my engine is written in assembler), not the commands and rendering part. But since the basic API is the PicoGUI basic API, it also means you could build a PicoGUI-like system on top of RDOS today if you want.

However, I've built a widget library instead, which I use to build typical GUI applications that run outside of a windowing environment. That's what I need, not complex rendering libraries.

Re: Video BIOS reprograms PIT chip

Posted: Fri May 24, 2013 10:18 am
by Kazinsal
rdos wrote:Not so. Text mode is much faster than graphics mode, no matter the level of hardware acceleration. Also, editors for code (or configuration files) should not care about or support Unicode, fonts, italics, bold and all the rest. Shells will not need Unicode, and I don't intend to support shells in different languages (always using English is adequate). It is not end users who use the shell, but people configuring the system.
A Bugatti Veyron will always be faster than a Dodge Caravan, no matter how much you tweak the Caravan. That doesn't make the Veyron a better general-purpose car for your average person. In fact, most people would find the Veyron useless and would trade it in for a more common vehicle.

Don't be a Bugatti. Be a Dodge.

Re: Video BIOS reprograms PIT chip

Posted: Fri May 24, 2013 11:13 am
by Brendan
Hi,
rdos wrote:
Brendan wrote:Text mode should've been banned 20 years ago (it doesn't support Unicode/internationalisation, italics, bold, different font styles and sizes, anti-aliasing, etc). The right way is for the application to tell the video driver what to draw. For example, an application could tell the video driver to draw the string "Hello world" at virtual coords (x, y) using a mono-space font, and the video driver can ask the font engine to convert it into an alpha mask for the video driver to display (and maybe cache for next time). No video mode switches are needed (and if anyone writes a native driver the font data could be cached in the video card's RAM and drawn with a fast "vram to vram bit blit" without requiring changes to the application).
Not so. Text mode is much faster than graphics mode, no matter the level of hardware acceleration.
That's not true. Most things take an infinite amount of time in text mode (displaying pictures, drawing icons, smooth shading, etc). The only case where text mode is faster is if someone actually wants to see really crappy low resolution fonts without anti-aliasing or kerning (and nobody actually wants to see that).
rdos wrote:Also, editors for code (or configuration files) should not care about or support Unicode, fonts, italics, bold and all the rest.
...or syntax highlighting (where keywords are in bold), or spell checking (with a little red squiggly underline), or special little marks so you can tell the difference between space characters and a tab (or see when a line has wrapped around), or a menu system in a smaller font so opening it doesn't consume 80% of the screen.
rdos wrote:Shells will not need Unicode, and I don't intend to support shells in different languages (always using English is adequate). It is not end users who use the shell, but people configuring the system.
Once upon a time (a long, long time ago) someone invented GUIs. Since then some OSs have used dialog boxes for configuration to make things easier for users. These operating systems currently dominate market share. There were also some stupid OSs that suck, which continued to use text for configuration, but only an insignificant percentage of people remember these OSs (system administrators, programmers and OS developers, but not billions of actual users).
rdos wrote:
Brendan wrote:If the video driver isn't responsible for drawing (e.g. rendering graphics based on high-level commands) then the abstraction provided by the video driver is not adequate - the rendering can't use hardware acceleration and the application has to care about things like which video mode is being used.
I only ported the engine (or rather used the ideas, as my engine is written in assembler), not the commands and rendering part. But since the basic API is the PicoGUI basic API, it also means you could build a PicoGUI-like system on top of RDOS today if you want.

However, I've built a widget library instead, which I use to build typical GUI applications that run outside of a windowing environment. That's what I need, not complex rendering libraries.
So you're saying that it's designed to make it impossible for anything to use any hardware acceleration for anything ever (even though hardware acceleration is as ancient as VGA)?


Cheers,

Brendan

Re: Video BIOS reprograms PIT chip

Posted: Sat May 25, 2013 8:03 pm
by ~
I see UEFI and graphics techniques/hardware characteristics as a burden to developing minor operating systems. Of course, the industry doesn't care about that.

We must be realistic and see that virtually nobody will be able to come up with a system that supports "modern" features effortlessly.

I don't know how much people here know about graphics specifically, but I have been writing a GIF viewer. It might seem simple, and by now it is simpler for me, but the amount of work it requires, including the error cases needed to render malformed GIFs correctly, is just a shadow of what anyone (individual or group) would need to do to render more complex graphics elements (like antialiasing and vector graphics/fonts).
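
Just as an example of the very first step (checking the signature and reading the Logical Screen Descriptor, straight from the GIF format layout - everything after this, colour tables, image descriptors, LZW data and GIF89a extension blocks, is where the real work starts):

Code:
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct gif_screen {
    uint16_t width, height;
    int      has_gct;        /* global colour table present?  */
    int      gct_entries;    /* 2..256 entries, 3 bytes each  */
    uint8_t  bg_index;       /* background colour index       */
};

/* Check the signature and read the 7-byte Logical Screen Descriptor. */
int gif_read_header(const uint8_t *buf, size_t len, struct gif_screen *out)
{
    if (len < 13)
        return -1;
    if (memcmp(buf, "GIF87a", 6) != 0 && memcmp(buf, "GIF89a", 6) != 0)
        return -1;

    out->width       = buf[6] | (buf[7] << 8);   /* little-endian */
    out->height      = buf[8] | (buf[9] << 8);
    uint8_t packed   = buf[10];
    out->has_gct     = (packed & 0x80) != 0;
    out->gct_entries = out->has_gct ? 2 << (packed & 0x07) : 0;
    out->bg_index    = buf[11];
    /* buf[12] is the pixel aspect ratio; the global colour table, if
     * present, follows immediately (3 * gct_entries bytes). */
    return 0;
}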

Of course, those things aren't really the problem. The real problem is that there is limited information from which to learn them properly, and what is available is of very limited quality.

Just to say that, to learn about GIF, I had to look up a bunch of references and graphics books like the "Encyclopedia of Graphics File Formats" and "Compressed Image File Formats". In other words, relevant knowledge of that caliber has been relegated almost exclusively to older content and is no longer found in modern references. The same goes for graphics algorithms (Xiaolin Wu's antialiasing, the actual accelerated/optimized algorithms used in graphics cards, etc.).
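
For instance, the core idea of Wu's antialiased lines fits in a few lines once you finally find it explained. This is a simplified sketch restricted to gentle slopes (x0 < x1, |dy| <= |dx|), with the endpoint handling omitted and plot() left as a hypothetical callback that blends a pixel at a given coverage:

Code:
#include <math.h>

typedef void (*plot_fn)(int x, int y, float alpha);  /* alpha = 0.0 .. 1.0 */

void wu_line_gentle(int x0, int y0, int x1, int y1, plot_fn plot)
{
    float gradient = (x1 == x0) ? 0.0f : (float)(y1 - y0) / (float)(x1 - x0);
    float intery   = (float)y0;          /* exact y for the current x */

    for (int x = x0; x <= x1; x++) {
        int   y    = (int)floorf(intery);
        float frac = intery - (float)y;  /* how far into the pixel row we are */

        plot(x, y,     1.0f - frac);     /* pixel above the ideal line */
        plot(x, y + 1, frac);            /* pixel below it             */

        intery += gradient;              /* step along the major axis  */
    }
}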


__________________________________________
Frankly, so far I have never used anything but pure 80x25 text mode (or at the very least an accurately emulated text-mode window) to look at low-level stuff like binary code, disassembly, and the contents of binary files.

Frankly, using square ASCII fonts is much clearer than doing the same under Windows or Linux/X, where special characters are represented by just an ugly empty square or a dot, making it difficult to distinguish bytes right away.

At least text modes are much more memory-efficient, do not require a stand-alone OS or graphics algorithms to be rendered, and are adequate for tests at an "embedded" level, even though we are talking about the PC.
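
As an illustration of how cheap text mode is to drive, this is essentially all the "rendering" it needs - assuming a standard colour text mode with its buffer identity-mapped at 0xB8000:

Code:
#include <stdint.h>

#define TEXT_COLS 80
#define TEXT_ROWS 25

/* Each cell is two bytes: the character and an attribute byte
 * (foreground/background colour). */
static volatile uint16_t *const text_buf = (volatile uint16_t *)0xB8000;

static void put_char_at(int row, int col, char c, uint8_t attr)
{
    text_buf[row * TEXT_COLS + col] = ((uint16_t)attr << 8) | (uint8_t)c;
}

/* e.g. put_char_at(0, 0, 'A', 0x1F);  white 'A' on blue */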

Of course, as I said, I am not against complex graphics stuff; it is just that these things aren't taught well enough to add them to a program (and if an OS plus the GPU already provide them all, why would you do anything other than just use them, given that they run better than a program could, and who really understands all of the best graphics algorithms nowadays other than the very people who work closely on making GPUs and the like?).

From the very few things I have done with graphics, I see that it would really take a lifetime to achieve it all, all the eye candy people want. It would also make it impossible to fully learn about both OS development and graphics (there is also sound, compression, network protocols and encryption, and some artificial intelligence, needed to make something really interesting and useful).

The point is that, the way things are right now in the industry, they are sad for most of us smaller players. There are so many things that they couldn't all be achieved even by the smartest developer in a lifetime; they require a corporate-sized, industry-grade team.

At least I have found that I can learn a lot by experimenting with graphics algorithms and file formats, and I see that I can also escape the burden of things like UEFI if I dedicate some time to writing a PC emulator. Then I can add anything I like and actually make it 100% standard in all of its components.

Anyway, I can always use a finished OS to do my job, like everyone else, so the objective of having fun while making a little bit of technology clearer is always achieved.

By the way, I think that the problem of the industry is very severe, but it has so many resources that it doesn't care about wasting them.

For instance, why are we still using so many different file formats, compression algorithms and protocols? This is NOT necessary, and the right thing to do would be to create a single file format for graphics, audio and documents, as well as a single algorithm for things like compression, and have them be the best. Once they become obsolete, the whole world would have to get rid of the older algorithms and use only the newer one.

Why are there so many brands and models of GPUs, sound cards, network cards, hard disks, chipsets, and similar peripherals? If they all basically do the same things, with more or less capability, the right thing would be to come up with one single global type of peripheral, standardize it and have all manufacturers work exclusively on that.


The point here is that it isn't really the developer's fault for achieving less. The problem is that there are so many different things that it becomes pointless to worry too much about them, and it is better to learn as much as possible about how things work.

In the end, it is the hardware that is bringing the real progress (more speed, more memory, graphics and compression algorithms, all implemented in the chips). It is the hardware that uses electronics, math and a lot of different algorithms to achieve what it does, and software didn't contribute anything to that. So for smaller players, the ones who can only use existing hardware to run their software (which uses more hardware capability than it really contributes in functionality and technology), the only thing that makes sense is to learn the high-level and low-level material that is important and that does NOT depend at all on the machine it runs on, while still learning how a 100% standard machine would look (making an emulator, and probably later building the machine, if somebody has the resources?).

Re: Video BIOS reprograms PIT chip

Posted: Sun May 26, 2013 4:09 am
by Griwes
~ wrote:By the way, I think that the problem of the industry is very severe, but it has so many resources that it doesn't care about wasting them.

For instance, why are we still using so many different file formats, compression algorithms and protocols? This is NOT necessary, and the right thing to do would be to create a single file format for graphics, audio and documents, as well as a single algorithm for things like compression, and have them be the best. Once they become obsolete, the whole world would have to get rid of the older algorithms and use only the newer one.

Why are there so many brands and models of GPUs, sound cards, network cards, hard disks, chipsets, and similar peripherals? If they all basically do the same things, with more or less capability, the right thing would be to come up with one single global type of peripheral, standardize it and have all manufacturers work exclusively on that.
There are two things wrong in this part. The first is that only in a utopia can you have a single variant of everything, and, as you probably know, utopias cannot exist. Plus it limits the user's choice, which is a Bad Thing. The second is quite well summarized by this well-known XKCD:

[xkcd comic image]

Re: Video BIOS reprograms PIT chip

Posted: Sun May 26, 2013 4:33 am
by rdos
There are already standards in many areas (USB, HD Audio, IDE/AHCI). This is because not even major players like Microsoft are able to provide 15 high-quality implementations of complex standards.

In fact, the VBE standard is one such standard, which has been trashed by UEFI into something unacceptable and badly thought out.

Re: Video BIOS reprograms PIT chip

Posted: Sun May 26, 2013 4:50 am
by Griwes
USB itself comes in 3 versions and defines four different host controller standards. Plus, those three are very different from GPUs or chipsets. VBE, being a GPU interface, is totally different and not really fully supported, and a quick look at EFI_GRAPHICS_OUTPUT_PROTOCOL doesn't really tell me how the UEFI equivalent of VBE is any less acceptable than VBE itself.
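
For reference, that quick look amounts to something like this (gnu-efi style, error handling trimmed, helper name invented): enumerate the modes the firmware reports, set the one you want, and note the framebuffer details - all of it before ExitBootServices, which is exactly the point of contention:

Code:
#include <efi.h>
#include <efilib.h>

EFI_STATUS pick_gop_mode(EFI_GRAPHICS_OUTPUT_PROTOCOL *gop,
                         UINT32 want_w, UINT32 want_h)
{
    for (UINT32 i = 0; i < gop->Mode->MaxMode; i++) {
        UINTN size;
        EFI_GRAPHICS_OUTPUT_MODE_INFORMATION *info;

        if (EFI_ERROR(uefi_call_wrapper(gop->QueryMode, 4, gop, i, &size, &info)))
            continue;
        if (info->HorizontalResolution == want_w &&
            info->VerticalResolution   == want_h)
            return uefi_call_wrapper(gop->SetMode, 2, gop, i);
    }
    return EFI_NOT_FOUND;   /* keep whatever gop->Mode->Mode already is */
}

/* Afterwards gop->Mode->FrameBufferBase, gop->Mode->FrameBufferSize and
 * gop->Mode->Info->PixelsPerScanLine describe the LFB you keep using
 * once boot services are gone. */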

Re: Video BIOS reprograms PIT chip

Posted: Sun May 26, 2013 5:06 am
by rdos
Griwes wrote:USB itself comes in 3 versions and defines four different host controller standards. Plus, those three are very different from GPUs or chipsets. VBE, being a GPU interface, is totally different and not really fully supported, and a quick look at EFI_GRAPHICS_OUTPUT_PROTOCOL doesn't really tell me how the UEFI equivalent of VBE is any less acceptable than VBE itself.
I think I already explained this. The UEFI mode set cannot be done at any time; it requires the OS to decide the video mode at boot time. This limitation is not part of VBE, which allows mode changes at any time. Therefore, UEFI is an unacceptable downgrade in functionality compared to VBE.

Re: Video BIOS reprograms PIT chip

Posted: Sun May 26, 2013 5:38 am
by Combuster
rdos wrote:This limitation is not part of VBE
...that has known hacks to change video modes outside its defined operational range that might work - see your own other thread.

Re: Video BIOS reprograms PIT chip

Posted: Sun May 26, 2013 5:39 am
by Griwes
Not relevant in sane designs; sane designs don't touch the BIOS after boot.

Plus, what Brendan tried to explain to (IIRC) you a while ago: either you take total responsibility and write an OS with drivers, or you don't and you just write a UEFI application.

Re: Video BIOS reprograms PIT chip

Posted: Sun May 26, 2013 8:37 am
by Antti
It would be nice to have a run-time service (UEFI) for video configuration. However, I am quite happy that I can get the LFB at all without writing vendor-specific video drivers. It is more than enough and serves well in generic use. All the real features of video hardware are called into play with native drivers and this makes the difference between "generic" and "native" clear enough. Without any poor man's intermediate form.

Besides, I still do not get why anyone would like to change video modes all the time. It is required only when you change monitors on the fly or configure an external monitor or video projector, etc. However, these cases aside, my understanding is that this "anyone" wants to keep switching modes on the same single monitor. Applications should live with the resources they have.
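
For what it's worth, the generic LFB case really needs very little: the base address, the pitch and the pixel format, whichever way the firmware handed them over. A minimal sketch, assuming 32 bits per pixel (a real driver would check the reported format):

Code:
#include <stdint.h>

struct lfb {
    volatile uint8_t *base;  /* from GOP FrameBufferBase or VBE PhysBasePtr */
    uint32_t pitch;          /* bytes per scanline */
    uint32_t width, height;
};

static void put_pixel(const struct lfb *fb, uint32_t x, uint32_t y, uint32_t argb)
{
    if (x >= fb->width || y >= fb->height)
        return;
    *(volatile uint32_t *)(fb->base + y * fb->pitch + x * 4) = argb;
}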

Re: Video BIOS reprograms PIT chip

Posted: Sun May 26, 2013 9:28 am
by Owen
rdos wrote:The UEFI really doesn't solve this problem; rather it says that if you try to do anything after calling ExitBootServices, we are not responsible for the results. Which basically is the same thing as calling the real mode BIOS, since that also is not guaranteed to work after you have reprogrammed the hardware. Personally, I would never call ExitBootServices at all, just in case the UEFI BIOS trashes something so that later calls to it don't work, or provides a mechanism to deny these calls once ExitBootServices has been called. Then I would call everything from protected-mode user mode, with an IO permission bitmap that disallowed common hardware usage, much like in the BIOS case. If the UEFI really insists, as a last resort, I could take a memory image of UEFI prior to calling ExitBootServices, make a copy of it, call ExitBootServices in the copy, discard the copy, and later use the pre-call memory image. An OS always has some ways to circumvent such broken behavior if it can execute the unsafe code in user mode.
Good luck with that. Things EFI assumes before ExitBootServices is called:
  • It is in control of memory management
  • It is in control of paging
  • It is in control of interrupts
  • It is in control of all devices
  • It is in control of the ACPI hardware
Things that will happen if you don't call ExitBootServices:
  • Random interrupts from the machine's devices
  • The machine's devices will continue to DMA to the locations where UEFI allocated their buffers*
  • Because UEFI expects it is in control of the interrupt table, you cannot switch to your own interrupt table and drivers without breaking it
  • Because you haven't called ExitBootServices, you are still stuck with the 1:1 Physical:Virtual mapping UEFI requires
Things that will happen if you try to use the GRAPHICS_OUTPUT_PROTOCOL after ignoring the above and trying anyway:
  • UEFI will try to wait for an interrupt which is never coming and hang
  • UEFI will try to talk to some device you don't understand (e.g. the ACPI Embedded Controller) and you won't be able to emulate it (or safely pass it through without breaking your own ACPI environment and/or suffering horrendous race conditions)
  • UEFI will try to allocate a framebuffer from its own allocator and choose memory which you have used for something else
  • UEFI will otherwise hang, crash, or kill your kitten
The solution to all these problems:
  • Pick a graphics mode at boot time (see the hand-over sketch at the end of this post)
If that isn't a viable option,
  • Write modesetting code for your graphics chip. Modesetting really isn't that hard (It's not acceleration...).
* In at least one case (some MacBooks) the firmware has (had, I think; IIRC this is now patched) a bug which means it fails to turn off the WLAN chip before handing over control. Work around: enumerate the PCI bus and disable every device before reclaiming any UEFI memory
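
For completeness, a sketch of that hand-over (gnu-efi style, with an invented helper name; retries and error paths omitted): pick the mode and record the framebuffer first, then fetch the memory map and call ExitBootServices with the matching key - after which boot services, including GOP calls, are off limits:

Code:
#include <efi.h>
#include <efilib.h>

/* Assumes InitializeLib() has run so the gnu-efi BS global is valid.
 * Retry-on-stale-key logic is left out for brevity. */
EFI_STATUS leave_boot_services(EFI_HANDLE image)
{
    UINTN map_size = 0, map_key, desc_size;
    UINT32 desc_ver;
    EFI_MEMORY_DESCRIPTOR *map = NULL;

    /* First call just tells us how big the map is. */
    uefi_call_wrapper(BS->GetMemoryMap, 5, &map_size, map, &map_key,
                      &desc_size, &desc_ver);
    map_size += 2 * desc_size;   /* slack: the allocation below may grow the map */
    uefi_call_wrapper(BS->AllocatePool, 3, EfiLoaderData, map_size, (void **)&map);
    uefi_call_wrapper(BS->GetMemoryMap, 5, &map_size, map, &map_key,
                      &desc_size, &desc_ver);

    /* The key must match the map we just fetched, or this call fails. */
    return uefi_call_wrapper(BS->ExitBootServices, 2, image, map_key);
}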

Re: Video BIOS reprograms PIT chip

Posted: Sun May 26, 2013 2:39 pm
by rdos
For the moment there is a perfectly useful solution to the UEFI problem: don't bother with UEFI and use legacy booting and VBE even when the computer has a UEFI BIOS. This works on all UEFI BIOSes tested so far.
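
The legacy path looks roughly like this - rm_int10() and low_buffer are hypothetical helpers for issuing a real-mode INT 10h (v86 task, trampoline or emulator) and a paragraph-aligned scratch buffer below 1 MiB; the function numbers and ModeInfoBlock offsets come from the VBE 2.0/3.0 spec:

Code:
#include <stdint.h>

struct rm_regs { uint16_t ax, bx, cx, dx, es, di; };
extern int rm_int10(struct rm_regs *r);   /* hypothetical real-mode INT 10h helper */
extern void *low_buffer;                  /* hypothetical; at least 256 bytes, <1 MiB */

int vbe_set_mode_lfb(uint16_t mode)
{
    /* VBE function 4F01h: get mode info into ES:DI. */
    struct rm_regs r = { .ax = 0x4F01, .cx = mode,
                         .es = (uint16_t)((uintptr_t)low_buffer >> 4), .di = 0 };
    if (rm_int10(&r) != 0 || r.ax != 0x004F)          /* AX = 004Fh means success */
        return -1;

    const uint8_t *info = low_buffer;
    uint16_t pitch  = *(const uint16_t *)(info + 0x10);  /* BytesPerScanLine */
    uint16_t width  = *(const uint16_t *)(info + 0x12);  /* XResolution      */
    uint16_t height = *(const uint16_t *)(info + 0x14);  /* YResolution      */
    uint32_t lfb    = *(const uint32_t *)(info + 0x28);  /* PhysBasePtr      */
    (void)pitch; (void)width; (void)height; (void)lfb;   /* hand these to the LFB code */

    /* VBE function 4F02h: set the mode, bit 14 = use the linear framebuffer. */
    r = (struct rm_regs){ .ax = 0x4F02, .bx = (uint16_t)(mode | 0x4000) };
    return (rm_int10(&r) == 0 && r.ax == 0x004F) ? 0 : -1;
}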

Re: Video BIOS reprograms PIT chip

Posted: Sun May 26, 2013 5:15 pm
by ~
Griwes wrote:There are two things wrong in this part. The first is that only in a utopia can you have a single variant of everything, and, as you probably know, utopias cannot exist. Plus it limits the user's choice, which is a Bad Thing. The second is quite well summarized by this well-known XKCD:

[xkcd comic image]
I don't see how having, for example, a single kind of network card that has all of the capabilities of the existing ones would limit the user's choice. It would actually increase it if we assume that the same driver (a single driver) could be used for decades, and that it would be fully documented how to program it (which means doing so would range from possible to easy, unlike the current situation).

But I know that manufacturers won't do this unless there is a very special reason to do so (such as with ATA/SATA hard disks, which are similar enough to be considered a truly successful standard, with differences that at least can be listed and handled).

In any case, one realizes that nobody can master, in a lifetime, all the standards and everything there is to commercial/major OSes plus their application programs, games and state-of-the-art entertainment content.

So one might as well start writing an emulator, and do what people like Solar said (write application code before or alongside OS code). Then one can add the things one understands, learning in a clear way and being able to see and decide what to do and learn next.

In this way, one can indeed get to understand the current standards as much as possible (BIOS, CPU, legacy devices, etc.), see whether it's possible to use emerging concepts (UEFI, graphics algorithms), and at the same time do something usable that puts the aforementioned things to real, useful work (like understanding compression algorithms and formats, image formats, sound formats), and actually use it in things like server applications or browser extensions (obviously, with knowledge that few have, and the power that comes from that, along with the possibility of, for example, writing books about those different interesting topics and making a living to keep the hobby as something more valuable).

There is not much use in setting modes (manually or not), being able to use two to four graphics algorithms, reading and writing a filesystem, and driving a sound card, if one cannot actually interpret file formats to render graphics like GIF/PNG/BMP/JPG and documents like PDF, and create/play videos, at least in a container like AVI (audio and graphics).

So one might as well learn here and there, a little bit of low-level and a little bit of high-level stuff, to have something to test the OS code with; in the end there is always something worthwhile to get out of all that effort, whether or not the hardware in our way has only partial or no programming information. And with emulation it will matter much less, while one learns the very same things (it would be the OS developer who is in control of defining emulated peripherals that can stay programmable the same way for decades, and, if somebody has enough resources, those devices could even be implemented in real circuitry).

Remember that a good PC emulator would always be valuable for things like non-x86 PCs and tablets; and if somebody wrote such an emulator, at the very least in JavaScript/HTML5, that would be interesting as well as extremely portable. The x86 code, starting with 16-bit code, would also become immensely portable and would have huge renewed value (and this would be useful for learning mundane things like client-side web applications, which have immediate application in the real world, while obviously doing considerably more complicated things than any regular client-side web page/web application).