tom9876543 wrote:VESA is another example of short term thinking, badly planned PC infrastructure.
I quite agree, which is why I stand by Brendan in recommending that OS devs avoid it wherever feasible (boot loaders are an edge case, as it's hard to select a driver before you know what hardware you need to support). VESA came out at a time when things were developing too fast for a meaningful standard to be possible at all. Each of its three iterations was more than a year out of date by the time it was proposed, and even more so by the time it was published. By the time VESA 3.0 was out, the whole concept of user software interfacing directly with the BIOS was irrelevant, and there was simply no way to devise a common BIOS interface that would be useful for a 32-bit protected mode OS.
For most OS devs, trying to support VESA in their OS (as opposed to the boot loader, as the OP seems to be doing) is likely to be a pointless diversion from developing more robust video drivers, at least for those GPUs whose interface details are published. And honestly, I am pretty sure you'll find that implementing drivers covering the basic functions of the Intel and AMD GPUs is no harder than implementing VESA support would be (more advanced features are another matter, but you'd at least be in a position to approach them).
Especially since most GPU vendors have been quietly dropping support for VBE over time, providing only the bare minimum set of routines needed to keep some older software running. Most of VESA 3.0 was never widely implemented in the first place.
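For anyone who does take the boot-loader route, the useful part of VBE is small anyway: function 4F00h of INT 10h fetches the controller info block, 4F01h describes each mode, and 4F02h sets one. Below is a minimal C sketch of the info block layout as given in the VBE 2.0 spec; the vbe_info_valid() helper is purely illustrative, not taken from any real code base.

Code:
/* Layout of the VBE controller info block per the VBE 2.0 spec. A
 * real-mode boot loader sets the signature to "VBE2", points ES:DI at
 * a 512-byte buffer, and executes INT 10h with AX=4F00h; on success,
 * AX comes back as 0x004F. */
#include <stdint.h>
#include <string.h>

#pragma pack(push, 1)
typedef struct {
    char     VbeSignature[4];   /* "VESA" on return */
    uint16_t VbeVersion;        /* BCD: 0x0200 = VBE 2.0 */
    uint32_t OemStringPtr;      /* real-mode far pointer (seg:off) */
    uint32_t Capabilities;
    uint32_t VideoModePtr;      /* far pointer to 0xFFFF-terminated mode list */
    uint16_t TotalMemory;       /* video memory, in 64 KiB units */
    uint16_t OemSoftwareRev;    /* fields from here down are VBE 2.0+ only */
    uint32_t OemVendorNamePtr;
    uint32_t OemProductNamePtr;
    uint32_t OemProductRevPtr;
    uint8_t  Reserved[222];
    uint8_t  OemData[256];
} VbeInfoBlock;                 /* 512 bytes total */
#pragma pack(pop)

/* Illustrative sanity check on the block the BIOS filled in. */
static int vbe_info_valid(const VbeInfoBlock *info)
{
    return memcmp(info->VbeSignature, "VESA", 4) == 0
        && info->VbeVersion >= 0x0200;
}

From there the loader walks the 0xFFFF-terminated mode list at VideoModePtr, queries each candidate with 4F01h, and sets its pick via 4F02h with bit 14 set to request the linear framebuffer. That's about all of VBE an OS ever needs to touch.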
tom9876543 wrote:VBE 2.0 should have been 16bit dual mode code.
That is code that works in both x86 16bit protected mode and 16bit real mode.
Probably not, as 16-bit p-mode was pretty much a dead topic already by 1994. Most people then agreed with Bill Gates' notorious initial assessment that the 80286 was a 'loser' of a chip design, and jumped on the 80386 with both feet as soon as possible.
Getting the software moved to 32-bit p-mode was another story, but no one wanted to write code for 16-bit p-mode if they could avoid it, modulo a certain OS dev who still posts here about it from time to time (and even he has been using 32-bit p-mode since around 1992, albeit with heavier use of segments than most others care for).
While Windows 3.1 (1992) still focused on 16-bit p-mode (having only just ditched real mode), it ran in 386 Enhanced Mode as well (with Win32s on the way but not quite ready, meaning actual 32-bit user applications for mainstream Windows were delayed by several months), and Microsoft had already announced that the next major version would support only 32-bit p-mode. Also, there were relatively few AT (286) class systems in use compared to either XT (8088) or AT/386 systems, and while new XTs were still being sold in fair numbers as late as 1991, 80286 systems had fallen off the map by then; by 1992 most new systems were 386s or 486s, and by 1994, Pentium systems were picking up momentum.
To repeat a point I have made many times: through most of the history of small computers, sales figures by microarchitecture roughly paralleled Moore's Law (XTs were something of an anomaly, as were the Apple II and Commodore 64 lines, but not as much as you'd think). By 1994, the 80286-based AT style of systems was four system generations old. There were far more systems capable of running both 16-bit and 32-bit p-mode in use in 1994 than ones which could run 16-bit p-mode but not 32-bit.
Since Windows had its own drivers and didn't use VESA (it never really did; by the time Windows was regularly using screen modes above 640x480 at 4 bpp, starting with Win95, card makers were well past the capabilities of even VESA 3.0, which was still being developed at the time), and the rising popularity of 32-bit DOS extenders meant no DOS games were being written for the now-defunct 286 market (there were still plenty of real mode games being made, right up until the end of MS-DOS as a whole, but anything that needed more leapfrogged directly to 32-bit), a 16-bit p-mode VESA extension would have been pointless.
In any case, by late 1995, 2D- and 3D-accelerated cards were starting to appear, and card designers were already looking towards what would eventually become GPUs (though it would be about five years before those hit the market). While VESA 3.0 did include some basic standards for 2D acceleration, few card manufacturers implemented them, because a) there was no real common subset of features supported by all cards, b) all the cards already out had shipped with drivers for 32-bit DPMI and Windows 95 (though for the most part not Windows 3.1), and most of all, c) programs would need card-specific drivers for the 3D acceleration features anyway. And since WinG (the original accelerated graphics API for Windows 3.1, initially retained by Windows 95) had been a flop, and it looked at the time as if its replacement, DirectX, would be too (I guess they'd heard that Alex St. Jackass was running the project...), no one was thinking in terms of Windows 3D programming yet.
Note that as far as I know, no 3D acceleration cards had 16-bit BIOS hooks, real or p-mode. For 3D, it was "32-bit or GTFO".
So basically, even when VESA 2.0 came out, VESA itself was already irrelevant, and a 16-bit p-mode interface for it would have been doubly so. The former wasn't clear until much later, but the latter was abundantly clear by then; no one was putting money into 16-bit p-mode software even in 1992, when VESA 2.0 was first being debated.
Of course, no one, then or now, was thinking of what would be needed for supporting independent OS development.
It is hard to explain to those who didn't see it how much Linux blindsided people, even with things like Jolix and Minix already around. Even though it never really had much real impact outside of server farms, the fact that someone did it at all was like lighting a fire under the community, for those interested in OS development.
But no one making hardware had any reason to be interested in accommodating small OS projects, as it would be an expensive waste for the overwhelming majority of users, so they didn't. That is just as true today, perhaps more so if anything; it is easier to special-case a set of blob drivers for Linux, or to publish the documentation the community needs to write a driver (or both, which often means giving just enough information to support most functions while withholding some of the more proprietary details), and many companies can't even be bothered to do that much. Linux is a diversion, not a market, and that's even truer for what we are doing.