OS Graphics


Re: OS Graphics

Post by Owen »

Brendan wrote:
Owen wrote:
Brendan wrote:If the application says there's only ambient light and no light sources, then there can't be any shadows. For thermal vision you'd use textures with different colours (e.g. red/yellow/white). For night vision you could do the same (different textures) but the video driver could provide a standard shader for that (e.g. find the sum of the 3 primary colours and convert to green/white) to avoid different textures.
Nobody implements thermal vision like that nor wants to do so. Thermal vision is normally implemented by using a single channel texture with the "heat" of the object embedded within it. The shader looks that up on a colour ramp to determine the actual display colour - after doing things like adjusting for distance which can't be efficiently done under your model.
To me, the only important thing is whether or not I care. I do care a little, but not enough to turn a clean/easy API into a painful mess just for the sake of a few special cases that are irrelevant 99% of the time for 99% of people.
"Non-traditional" rendering is hardly a rare case.

What you're saying is that many common features of games are going to be impossible to implement in realtime on your OS. That isn't an improvement.
Brendan wrote:
Owen wrote:NOTE: I don't happen to be a fan of the current shader distribution model. I think the source based system is a bit crap, and we would be better off with a standard bytecode and bytecode distribution. 3D engines like Unreal 3 and Ogre3D (which I'm presently working on) implement systems which build shaders based upon defined material properties. In particular, Unreal lets artists create reasonably well optimized shaders by snapping together premade blocks in a graph (but still allows the developer to add their own blocks for their needs, or write the shader from scratch). I'd much rather we got a bytecode to work with - it would make implementing our runtime generated shader system so much easier, and would mean we weren't generating text for the compiler to immediately parse. However, bytecode shaders > text shaders > no shaders.
Portable byte-code shaders seem to make sense; but they force a specific method onto the rendering pipeline. For example, I can't imagine the same portable byte-code shader working correctly for rasterisation and ray tracing, or working correctly for both 2D and 3D displays. It's only a partial solution to the "future proofing" problem, which isn't enough.
I don't even know how one would express, say, a first person shooter on a volumetric display.

N.B. note the distinction between a 3D display, which is a flat thing which displays two different images to our eyes, and a volumetric display, in which the "display canvas" is actually a volume in space and you can walk around the projected object

A first person shooter cares about 2D and 3D displays (with head-tracked VR goggle type technology being the "perfect" display type for this use case). Games on volumetric displays... I don't think many of the games we currently develop are viable for them. Things like board games come to mind; but most other game types expect the ability to pan around the world somewhat, not the ability to look arbitrarily around some subsection of it.

I therefore think designing for volumetric displays - something which remains science fiction at this point in time - is a pointless endeavor. For all we know volumetric displays will be voxel based, and current polygon mesh geometries therefore completely useless for them.

As for portable shaders between rasterization and ray tracing? Well, it is trivial to see how it can be done: by breaking up, say, the "fragment shader" into a "material shader" (which can modify the pre-lighting appearance of a fragment) and a "postprocessing shader" (which can modify the post-lighting appearance of a fragment).

You'd then confine yourself to a fixed function lighting model, though, so I imagine that your rendering system would just fall behind everyone else's over time as new features were introduced elsewhere and your design prevented people from taking advantage of them.
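
For concreteness, here's a rough C sketch of what such a split could look like in a software pipeline; the Fragment, MaterialShader and PostShader names are made up for illustration and not part of any real API, and the renderer-owned lighting step in the middle is exactly the "fixed function" part mentioned above:

Code:

/* Sketch only: a made-up, renderer-agnostic shading split, not any real API.
 * The "material" stage runs before lighting and the "post" stage after it,
 * so the same pair of callbacks could in principle be driven by either a
 * rasteriser or a ray tracer. */
typedef struct { float r, g, b; } Colour;

typedef struct {
    Colour albedo;      /* pre-lighting surface colour */
    float  nx, ny, nz;  /* surface normal */
    float  depth;       /* distance from the camera */
} Fragment;

/* Material stage: modifies the pre-lighting appearance of a fragment. */
typedef void (*MaterialShader)(Fragment *frag, const void *material_params);

/* Post stage: modifies the post-lighting appearance of a fragment. */
typedef Colour (*PostShader)(Colour lit, const Fragment *frag, const void *post_params);

/* The renderer owns the (fixed-function) lighting step in the middle. */
static Colour shade_fragment(Fragment frag,
                             MaterialShader material, const void *mp,
                             PostShader post, const void *pp,
                             Colour (*fixed_lighting)(const Fragment *))
{
    material(&frag, mp);                  /* application-supplied */
    Colour lit = fixed_lighting(&frag);   /* renderer-owned lighting */
    return post(lit, &frag, pp);          /* application-supplied */
}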

Brendan wrote:
Owen wrote:Then there's culling. Older game engines (mostly Quake derived) use binary space partitioning. It's conceptually a very simple system, and back when overdraw was expensive it was perfect (BSP has no overdraw); but today it's inappropriate (overdraw is less expensive than walking the BSP tree) and it requires heavy pre-processing (so isn't usable for dynamic objects). Some engines use octrees. They have the best scaling of all the methods usable for 3D graphics in terms of the number of objects in the scene, yet in practice they, and all tree-based systems, suck for realistic scene sizes: yes, they scale up, but they don't scale down.

So systems have been moving from hierarchical culling to the point where many games with massive open worlds today just use a simple array of every object in the scene (e.g. Just Cause 2).

You're essentially implying your OS/graphics driver becomes the 3D engine without it having any understanding of the actual needs of the scene type underneath.
No; what I'm saying is that the applications describe what they want, and renderers draw what has been described; such that the renderer (e.g. a video driver) can do software rendering or binary space partitioning or rasterisation with overdraw or ray tracing or use "painters algorithm" or use z-buffers or do whatever else the renderer's creator thinks is the best method for the specific piece of hardware involved, without any other piece of software needing to care how rendering happened to be implemented.

Basically; if application/game developers need to care how the renderer works, then the graphics API is a failure.
If the software managing culling of the actual geometry (in your case the graphics driver) has no idea of the approach to use to cull it (and it won't, because you don't want the developer to care about how the renderer works), its performance will be a failure.

You can't e.g. use a BSP over an arbitrary polygon soup geometry. BSP requires things like closed hulls, and you'd need to know which bits of geometry comprised the hull. You would then need to compute the potentially visible set (in order to dice up the geometry into a BSP tree), a process which can take around a minute of preprocessing time for realistic maps (this is getting lower over time... but map sizes are growing).
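
To make that structural requirement concrete, here's a minimal C sketch of a BSP node and the back-to-front walk a renderer would do with it (all names are illustrative); the point is that building the tree in the first place assumes offline preprocessing that an arbitrary polygon soup hasn't had:

Code:

#include <stddef.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 normal; float d; } Plane;   /* plane: dot(normal, p) + d = 0 */

typedef struct BspNode {
    Plane           split;          /* splitting plane chosen during preprocessing */
    struct BspNode *front, *back;   /* subtrees; NULL at leaves */
    int             first_poly;     /* polygons stored at this node */
    int             poly_count;
} BspNode;

static float plane_side(const Plane *pl, Vec3 p)
{
    return pl->normal.x * p.x + pl->normal.y * p.y + pl->normal.z * p.z + pl->d;
}

/* Back-to-front ("painter's order") traversal relative to the camera. */
static void walk_back_to_front(const BspNode *node, Vec3 camera,
                               void (*draw)(int first_poly, int count))
{
    if (node == NULL)
        return;
    if (plane_side(&node->split, camera) >= 0.0f) {
        walk_back_to_front(node->back, camera, draw);
        draw(node->first_poly, node->poly_count);
        walk_back_to_front(node->front, camera, draw);
    } else {
        walk_back_to_front(node->front, camera, draw);
        draw(node->first_poly, node->poly_count);
        walk_back_to_front(node->back, camera, draw);
    }
}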

Then what about lighting? Do you support static lighting? If so, how do the developers specify the light maps? How do they render the light maps?

Note that rendering a global illumination light map can take significant quantities of CPU (and, if hardware accelerated, GPU) time.

Re: OS Graphics

Post by Brendan »

Hi,
Owen wrote:
Brendan wrote:
Owen wrote:Nobody implements thermal vision like that nor wants to do so. Thermal vision is normally implemented by using a single channel texture with the "heat" of the object embedded within it. The shader looks that up on a colour ramp to determine the actual display colour - after doing things like adjusting for distance which can't be efficiently done under your model.
To me, the only important thing is whether or not I care. I do care a little, but not enough to turn a clean/easy API into a painful mess just for the sake of a few special cases that are irrelevant 99% of the time for 99% of people.
"Non-traditional" rendering is hardly a rare case.

What you're saying is that many common features of games are going to be impossible to implement in realtime on your OS. That isn't an improvement.
Citation needed.

Common features needed by applications and common features needed by 98% of games will be fine. For half of the remainder you'd just find some other way to do it that's "close enough" (like using normal textures for thermal vision). That leaves 1% of games (and 0% of applications).

So the question is, do I want to make things 10 times worse (more complicated) for 99% of software developers (100% of application developers, 98% of game developers and 100% of video driver programmers) just because of that remaining 1%? Definitely not.

Of course there is a completely different issue that doesn't really have anything to do with graphics at all. This second issue is developers that won't like my OS simply because it's different to what they're used to. Fortunately this problem will solve itself (these people will stick with Windows/Linux, and I won't have to worry too much about making sure they don't go anywhere near my OS or worry about them turning my OS into a festering pile of puke).
Owen wrote:
Brendan wrote:
Owen wrote:NOTE: I don't happen to be a fan of the current shader distribution model. I think the source based system is a bit crap, and we would be better off with a standard bytecode and bytecode distribution. 3D engines like Unreal 3 and Ogre3D (which I'm presently working on) implement systems which build shaders based upon defined material properties. In particular, Unreal lets artists create reasonably well optimized shaders by snapping together premade blocks in a graph (but still allows the developer to add their own blocks for their needs, or write the shader from scratch). I'd much rather we got a bytecode to work with - it would make implementing our runtime generated shader system so much easier, and would mean we weren't generating text for the compiler to immediately parse. However, bytecode shaders > text shaders > no shaders.
Portable byte-code shaders seem to make sense; but they force a specific method onto the rendering pipeline. For example, I can't imagine the same portable byte-code shader working correctly for rasterisation and ray tracing, or working correctly for both 2D and 3D displays. It's only a partial solution to the "future proofing" problem, which isn't enough.
I don't even know how one would express, say, a first person shooter on a volumetric display.

N.B. note the distinction between a 3D display, which is a flat thing which displays two different images to our eyes, and a volumetric display, in which the "display canvas" is actually a volume in space and you can walk around the projected object

A first person shooter cares about 2D and 3D displays (with head-tracked VR goggle type technology being the "perfect" display type for this use case). Games on volumetric displays... I don't think many of the games we currently develop are viable for them. Things like board games come to mind; but most other game types expect the ability to pan around the world somewhat, not the ability to look arbitrarily around some subsection of it.

I therefore think designing for volumetric displays - something which remains science fiction at this point in time - is a pointless endeavor. For all we know volumetric displays will be voxel based, and current polygon mesh geometries therefore completely useless for them.
They aren't science fiction (they've existed for decades); they just aren't commercially viable yet.

If your application/game describes a cube (or many complex polygon meshes or whatever) why would that "description of what you want" be useless for a volumetric display? You'd only need a suitable driver to convert the description into whatever format the volumetric display needs; and (for all the possibilities of "whatever the volumetric display needs" that I can think of) this conversion seems relatively easy to me.
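
As a toy illustration of that kind of conversion (grid size, the Cube "description" type and the function name are all made up), here's a sketch that turns a described axis-aligned cube into the sort of voxel grid a volumetric display might want:

Code:

#include <string.h>

#define GRID 64                       /* made-up volumetric display resolution */
static unsigned char voxels[GRID][GRID][GRID];

typedef struct { float cx, cy, cz, half; } Cube;  /* hypothetical "description" */

/* Mark every voxel whose centre lies inside the described cube.  A real
 * driver would handle arbitrary meshes, colour, etc.; the point is only that
 * the high-level description survives the change of output device. */
static void voxelise_cube(const Cube *c)
{
    memset(voxels, 0, sizeof voxels);
    for (int z = 0; z < GRID; z++)
        for (int y = 0; y < GRID; y++)
            for (int x = 0; x < GRID; x++) {
                float px = (x + 0.5f) / GRID;
                float py = (y + 0.5f) / GRID;
                float pz = (z + 0.5f) / GRID;
                if (px >= c->cx - c->half && px <= c->cx + c->half &&
                    py >= c->cy - c->half && py <= c->cy + c->half &&
                    pz >= c->cz - c->half && pz <= c->cz + c->half)
                    voxels[z][y][x] = 1;
            }
}
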
Owen wrote:
Brendan wrote:Basically; if application/game developers need to care how the renderer works, then the graphics API is a failure.
If the software managing culling of the actual geometry (in your case the graphics driver) has no idea of the approach to use to cull it (and it won't, because you don't want the developer to care about how the renderer works), its performance will be a failure.
Um, "if the video driver has no idea how the renderer/video driver works, then..."? How about we just assume that the video driver does know how the video driver works.
Owen wrote:Then what about lighting? Do you support static lighting? If so, how do the developers specify the light maps? How do they render the light maps?
Static lighting is supported. The developers take an object's light map, the original textures and the object's mesh, and combine them to produce a new mesh and new textures with the static lighting built into them. Then these pre-baked textures and meshes (without any separate light map) are sent to the renderer for rendering. Of course the renderer doesn't need to know or care that the textures/meshes have static lighting applied.
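
A rough sketch of that baking step, assuming the diffuse texture and the light map are simple RGB arrays in the same UV space (the Texel type and the function name are invented for illustration):

Code:

#include <stddef.h>
#include <stdint.h>

typedef struct { uint8_t r, g, b; } Texel;

/* Multiply each diffuse texel by the corresponding light-map texel, producing
 * a new texture with the static lighting "built in".  The renderer that later
 * receives the baked texture never sees a separate light map. */
static void bake_lightmap(const Texel *diffuse, const Texel *lightmap,
                          Texel *baked, size_t texel_count)
{
    for (size_t i = 0; i < texel_count; i++) {
        baked[i].r = (uint8_t)((diffuse[i].r * lightmap[i].r) / 255);
        baked[i].g = (uint8_t)((diffuse[i].g * lightmap[i].g) / 255);
        baked[i].b = (uint8_t)((diffuse[i].b * lightmap[i].b) / 255);
    }
}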


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

Re: OS Graphics

Post by Gigasoft »

To me, the only important thing is whether or not I care. I do care a little, but not enough to turn a clean/easy API into a painful mess just for the sake of a few special cases that are irrelevant 99% of the time for 99% of people.
The problem with this is that every person has their own list of special things they want to do that are irrelevant to most other people. Every game developer will most certainly want to do things that no one else has done. So, rather than making your own implementation of every possible effect anyone will ever "reasonably" want, it's better to just let programmers do whatever they want and not get in their way.

When you are a game developer, you often have an idea and you want the result to look a certain way. You can implement the algorithm that produces the image you want entirely on the CPU, but let's say the user has bought a shiny new card for his computer which is specifically designed for this purpose. Operating system X gladly lets the programmer borrow the card to perform any computation he wants, while operating system Y tells him, "hey, what are you doing? What's this supposed to be, you obviously don't know a thing about art! Leave this to a pro! Erm, sorry, I haven't really learned how to draw water yet, hope this is okay..." Which operating system sounds better to you?

Or if you are in a taxi, let's say you ask the driver to take you to Paris. "What are you going to do there?" he asks you. "Well, the first thing I'm going to do is get something to eat at McDonalds," you say. "Why would anyone want that junk? Oh, I know this great Italian restaurant in Stockholm!" he exclaims proudly, "let's go there instead, I want to eat too and I am really in the mood for a filetto con salsa di funghi! We'll just take the ferry from Riga, Latvia! People who eat hamburgers are losers who deserve to walk, that way they'll get rid of their fat!" Then he lets out a loud laugh. Well, you get the idea. Such a taxi driver would be out of a job pretty quickly.

The point is of course that there is such a thing as too much abstraction. The job of an operating system is simply to let users operate their computer, through their programs. It provides a way for components from different manufacturers to communicate. So, it is more of a facilitator than a judge of how things ought to be done.

Re: OS Graphics

Post by Brendan »

Hi,
Gigasoft wrote:
To me, the only important thing is whether or not I care. I do care a little, but not enough to turn a clean/easy API into a painful mess just for the sake of a few special cases that are irrelevant 99% of the time for 99% of people.
The problem with this is that every person has their own list of special things they want to do that are irrelevant to most other people. Every game developer will most certainly want to do things that no one else has done. So, rather than making your own implementation of every possible effect anyone will ever "reasonably" want, it's better to just let programmers do whatever they want and not get in their way.
Of all the 3D games I've played over the last 10 years, I can't think of a single one that looked different because of different shaders. They've all got different textures, different meshes, different game mechanics, etc; and they look and feel very different because of this, not because one game has a slightly different shader than another. The only possible exception to this is Minecraft (due to its "retro" crappy graphics), but if it used a normal/generic shader it'd still look unique (despite being slightly less ugly) and nobody would care.

Basically; I think you're confusing "game developers" with "a few weenies that write game engines". A game developer wants to put something together that's entertaining (or profitable, or both), and if they can't get the graphics in an exact specific way, well who really cares it'll be just as entertaining (or profitable, or both) anyway. The "few weenies that write game engines" are entirely different. They're only interested in retarded "my shader is slightly fancier than yours" pissing contests.
Gigasoft wrote:The point is of course that there is such a thing as too much abstraction. The job of an operating system is simply to let users operate their computer, through their programs. It provides a way for components from different manufacturers to communicate. So, it is more of a facilitator than a judge of how things ought to be done.
But there is such a thing as too little abstraction.

What if it was file IO; and instead of a nice clean "open, read, write, close" abstraction (where applications don't need to care what the underlying file system or storage device is), some fool decided it'd be nice if applications were forced to control how free sectors are chosen, if the data got compressed or not, if it's cached in a "write back" way or a "write through" way, when the data is sent to disk, etc. Imagine that it was so bad that even after all the hassle of dealing with this (and six layers of libraries to cope with the brain damage), if the user updates their hardware half the applications break because they were written before SSDs (or Blu-ray or ext5 or whatever) became common. You can guarantee there'd be some retard somewhere saying "You have to do this! Programmers need the extra flexibility!".

Now my example above is (intentionally) silly; but it's very close to what you're advocating for video. The only difference is that (for 3D video) existing OSs are currently stupid and have too little abstraction, so you think it's "normal".


Cheers,

Brendan

Re: OS Graphics

Post by XanClic »

Brendan wrote:
XanClic wrote:
Brendan wrote:and for real-time rendering you skip a lot of things to save time (e.g. severely limit reflection and ignore refraction)
As I've said, you don't skip them, you implement a totally different model. Rasterization is just that: Rastering something which is not already rastered, in this case, this means using linear transformations in order to transform 3D to 2D coordinates and raster the space in between. Shading is just used to modify the transformation and to specify exactly how that space should be filled (the latter, fragment/pixel shading is what is commonly referred to when talking about shading in general). This model on its own is incapable of generating shadows, reflections or refractions; they are created using certain tricks (hard shadows: Shadow Volumes or Shadow Mapping; reflection: plain negative scaling at plane mirrors or environment maps; refraction: also environment maps) which all require one or more additional renderings, since a rasterizer only cares about one polygon at a time.
This is just being pointlessly pedantic. It's the same basic model of light that you're using as the basis for the renderer, regardless of how you implement the renderer (e.g. with ray tracing or rasterisation) and regardless of how much your implementation fails to model (e.g. refraction).
As Owen has told you already (which you "answered" to with the same phrase), this is just wrong. Read again:
XanClic wrote:in this case, this means using linear transformations in order to transform 3D to 2D coordinates and raster the space in between.
Rasterization itself has nothing to do with light at all. It just projects 3D onto 2D space.
Brendan wrote:
XanClic wrote:All in all I'd propose you don't even care about rasterizing. It's a process from a time where parallel computations weren't as common as they are today (basic rasterization is impossible to parallelize); thus, imho, ray tracing is the future, also for consumer GPUs (which we're already seeing through nvidia's libs for CUDA and some OpenCL implementations).
This is the sort of "awesome logic" you get from people that only ever call libraries/code that other people have implemented. It's trivial to do rasterization in parallel (e.g. split the screen into X sections and do all X sections in parallel).
Thanks for using the "sort of" statement, since I've actually once implemented a partly OpenGL 2 compatible software rasterizer (including parallelization). But do you have any idea why rasterization was used in the first place? Have you yourself ever actually tried writing a parallelized software rasterizer? Rasterization was invented to render scenes on single-thread CPUs because it is such a nice algorithm which doesn't require (nor allow, for that matter) parallelization. In the most basic form, you get some vertices, apply a linear transformation to them, clip the result to the screen space and fill it. You can't process multiple polygons at once because the order is important and you can't easily split the screen since every object may be drawn across any portion of the screen (in fact, you can check this only after you've done the whole transformation anyway). Rasterization itself is a pretty hard task to parallelize (okay, I exaggerated – it's not impossible, but it certainly isn't trivial either).
However, today there are shaders which are called many times per polygon: Vertex shaders once per vertex (they do the transformation), geometry/tessellation shaders perhaps a couple of times and fragment shaders once per fragment (pixel) drawn. Now, these are things which can be run in parallel (perhaps more important: they are most often even SIMD processes). That's why GPUs started featuring many cores only after shaders had been introduced.
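
To illustrate why that per-fragment stage parallelises so well, here's a small pthreads sketch (the Fragment type and the shade() function are placeholders, not any real API): no fragment depends on another fragment's result, so the array of fragments produced for a polygon can simply be split between threads.

Code:

#include <pthread.h>
#include <stddef.h>

typedef struct { float r, g, b; } Colour;
typedef struct { int dummy; /* interpolated attributes, texture coords, ... */ } Fragment;

extern Colour shade(const Fragment *f);   /* placeholder fragment shader */

struct Slice { const Fragment *in; Colour *out; size_t begin, end; };

static void *shade_slice(void *arg)
{
    struct Slice *s = arg;
    for (size_t i = s->begin; i < s->end; i++)
        s->out[i] = shade(&s->in[i]);     /* each fragment is independent */
    return NULL;
}

/* Shade n fragments using up to 16 threads. */
static void shade_all(const Fragment *in, Colour *out, size_t n, int threads)
{
    pthread_t    tid[16];
    struct Slice slice[16];
    if (threads > 16) threads = 16;
    for (int t = 0; t < threads; t++) {
        slice[t].in    = in;
        slice[t].out   = out;
        slice[t].begin = n * (size_t)t / (size_t)threads;
        slice[t].end   = n * (size_t)(t + 1) / (size_t)threads;
        pthread_create(&tid[t], NULL, shade_slice, &slice[t]);
    }
    for (int t = 0; t < threads; t++)
        pthread_join(&tid[t], NULL);
}
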
Brendan wrote:Also note that I couldn't care less how the video driver actually implements rendering (e.g. if they use rasterization, ray tracing, ray casting or something else). That's the video driver programmer's problem and has nothing to do with anything important (e.g. the graphics API that affects all applications, GUIs and video drivers).
Hmk. So all your video drivers have an extremely high-level interface comparable to any modern 3D-engine to have the same abstraction layer for both rasterization and ray tracing? Sounds like a great deal of fun – the Gallium graphics stack (an inappropriate term, but you get what I mean) now in fact tries to rip the OpenGL interface from the video drivers itself, since it's apparently too much work to implement OpenGL in every single video driver; just for reference, that is how other people (who are actually working on the stuff we're talking about here) are handling this. Please be aware that I'll utter a cry if I see you answering to this with something along the lines of “that's only because they're used to the old system and too dumb to invent greatness” or “sticking to the current models is only limiting and I will never accept that decades of research didn't result in legacy only but also in some pretty valuable results which are used in today's systems”.
Brendan wrote:
XanClic wrote:
Brendan wrote:only slightly better than "2D failing to pretend to be 3D" like modern GUIs
They try to pretend that? I never noticed. I always thought the shadows were mainly used for enhanced contrast without actually having to use thick borders.
Most have, since Windows 95 or before (e.g. with "baked on" lighting/shadow). Don't take my word for it though - do a google search for "ok button images" if you don't have any OS with any GUI that you can look at. Note: There is one exception that I can think of - Microsoft's "Metro".
I talked about enhancing contrast. Metro having no shadows whatsoever (at least visible ones) is a reason why it's sometimes pretty hard to discern windows there.
Brendan wrote:Of course you're more interested in trolling than discussing the merits of an alternative approach
Oh, you got me there. Yep, all I've been saying so far is not my honest opinion but just an attempt to steal your time so you can't bring out your OS already.
Brendan wrote:so I doubt you'd willingly admit that.
I would, but aliens are controlling my mind and will! They're making me utter incoherent and redundant noise which isn't making any sense to any sane mind! Help! D:

(Actually, it's not aliens but probably professors from my university who are incapable of embracing innovation; or even myself. Hell, most of the work I do still involves a terminal. How unprogressive is that! Ugh, I'm even using vim!)

Re: OS Graphics

Post by Brendan »

Hi,
XanClic wrote:Rasterization itself has nothing to do with light at all. It just projects 3D onto 2D space.
It just projects 3D onto 2D space, in order to approximate the physical model of light. It's just pedantic word games that I'm not interested in wasting my time on.
XanClic wrote:
Brendan wrote:
XanClic wrote:All in all I'd propose you don't even care about rasterizing. It's a process from a time where parallel computations weren't as common as they are today (basic rasterization is impossible to parallelize); thus, imho, ray tracing is the future, also for consumer GPUs (which we're already seeing through nvidia's libs for CUDA and some OpenCL implementations).
This is the sort of "awesome logic" you get from people that only ever call libraries/code that other people have implemented. It's trivial to do rasterization in parallel (e.g. split the screen into X sections and do all X sections in parallel).
Thanks for using the "sort of" statement, since I've actually once implemented a partly OpenGL 2 compatible software rasterizer (including parallelization). But do you have any idea why rasterization was used in the first place? Have you yourself ever actually tried writing a parallelized software rasterizer? Rasterization was invented to render scenes on single-thread CPUs because it is such a nice algorithm which doesn't require (nor allow, for that matter) parallelization. In the most basic form, you get some vertices, apply a linear transformation to them, clip the result to the screen space and fill it. You can't process multiple polygons at once because the order is important and you can't easily split the screen since every object may be drawn across any portion of the screen (in fact, you can check this only after you've done the whole transformation anyway). Rasterization itself is a pretty hard task to parallelize (okay, I exaggerated – it's not impossible, but it certainly isn't trivial either).
Um, so according to you, "rasterization is impossible to parallelize" and you have "implemented a partly OpenGL 2 compatible software rasterizer (including parallelization)"?
XanClic wrote:
Brendan wrote:Also note that I couldn't care less how the video driver actually implements rendering (e.g. if they use rasterization, ray tracing, ray casting or something else). That's the video driver programmer's problem and has nothing to do with anything important (e.g. the graphics API that affects all applications, GUIs and video drivers).
Hmk. So all your video drivers have an extremely high-level interface comparable to any modern 3D-engine to have the same abstraction layer for both rasterization and ray tracing? Sounds like a great deal of fun – the Gallium graphics stack (an inappropriate term, but you get what I mean) now in fact tries to rip the OpenGL interface from the video drivers itself, since it's apparently too much work to implement OpenGL in every single video driver; just for reference, that is how other people (who are actually working on the stuff we're talking about here) are handling this. Please be aware that I'll utter a cry if I see you answering to this with something along the lines of “that's only because they're used to the old system and too dumb to invent greatness” or “sticking to the current models is only limiting and I will never accept that decades of research didn't result in legacy only but also in some pretty valuable results which are used in today's systems”.
All Gallium does is move code that would be duplicated in lots of video drivers into a common place; which makes sense for a monolithic kernel. I'm not too sure what your point is though - did you have one?
XanClic wrote:
Brendan wrote:Of course you're more interested in trolling than discussing the merits of an alternative approach
Oh, you got me there. Yep, all I've been saying so far is not my honest opinion but just an attempt to steal your time so you can't bring out your OS already.
Nice of you to admit it. Thanks.


Cheers,

Brendan

Re: OS Graphics

Post by Owen »

Brendan wrote:
Gigasoft wrote:The problem with this is that every person has their own list of special things they want to do that are irrelevant to most other people. Every game developer will most certainly want to do things that no one else has done. So, rather than making your own implementation of every possible effect anyone will ever "reasonably" want, it's better to just let programmers do whatever they want and not get in their way.
Of all the 3D games I've played over the last 10 years, I can't think of a single one that looked different because of different shaders. They've all got different textures, different meshes, different game mechanics, etc; and they look and feel very different because of this, not because one game has a slightly different shader than another. The only possible exception to this is Minecraft (due to its "retro" crappy graphics), but if it used a normal/generic shader it'd still look unique (despite being slightly less ugly) and nobody would care.

Basically; I think you're confusing "game developers" with "a few weenies that write game engines". A game developer wants to put something together that's entertaining (or profitable, or both), and if they can't get the graphics in an exact specific way, well who really cares it'll be just as entertaining (or profitable, or both) anyway. The "few weenies that write game engines" are entirely different. They're only interested in retarded "my shader is slightly fancier than yours" pissing contests.

How would you know what the shaders in an individual game are doing? Would you notice if somebody was doing realtime tessellation or not? Do you know what normal, displacement and bump mapping look like? Most shaders aren't things which just jump out! For the majority of games (i.e. those which aim at the photorealistic), shaders which jump out at you are a Bad Thing: they're ruining the verisimilitude of the world and thus the immersion!

It's kind of funny. You're exactly demonstrating a common complaint from game developers: that reviewers don't understand what makes graphics good, and always just say a game has "good textures."

Good examples of the uses developers are finding for shaders today include converting bezier patches into triangle meshes on the GPU in real time - normally in response to the objects' size and distance from the camera, such that each polygon's screen-space size is approximately the same. Similar things are now being done for terrain height maps. Let us not forget the now age-old use of bump, displacement and normal mapping such that textures can have some, well, texture. Additionally, shaders are now being deployed such that things like wires hanging through the sky anti-alias properly without the need for massive oversampling.
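
To pick one item off that list, here's the core of normal mapping as a small C sketch (the texture fetch is a placeholder, and the tangent-space transform a real implementation needs is left out): the lighting normal comes from a texture instead of the flat polygon normal, which is what makes a flat wall look bumpy.

Code:

#include <math.h>

typedef struct { float x, y, z; } Vec3;

extern Vec3 sample_normal_map(float u, float v);   /* placeholder texture fetch */

static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 normalise(Vec3 v)
{
    float len = sqrtf(dot3(v, v));
    Vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

/* Lambertian diffuse term using the normal stored in a texture rather than
 * the polygon normal. */
static float normal_mapped_diffuse(float u, float v, Vec3 light_dir)
{
    Vec3  n = normalise(sample_normal_map(u, v));
    float d = dot3(n, normalise(light_dir));
    return d > 0.0f ? d : 0.0f;
}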

Then there are postprocessing effects. Edge detection, bokeh, HDR, bloom, colour grading, FXAA, etc. Some of these were developed by the GPU manufacturers... others came from researchers elsewhere, and even from developers working inside game teams.

But, OK, you want an example of a game where the appearance is completely and utterly dominated by shaders? I present you WipEout HD's Zone Mode. The appearance of the track there is completely fragment shader controlled.

Yeah, a lot of games just license an engine, and for some of them not a single shader is written.

For others? Even if they license an engine, it may be modified; and there will be at least one or two people on the team writing the shaders to give the game the appearance and functionality they want.
Brendan wrote:
XanClic wrote:Rasterization itself has nothing to do with light at all. It just projects 3D onto 2D space.
It just projects 3D onto 2D space, in order to approximate the physical model of light. It's just pedantic word games that I'm not interested in wasting my time on.
Nothing at all to do with the physical model of light...

Projecting 3D onto 2D space has nothing to do with the physical model of light, in the same way as projecting 4D onto 3D space has nothing to do with the physical model of light.

Re: OS Graphics

Post by Brendan »

Hi,
Owen wrote:
Brendan wrote:
Gigasoft wrote:The problem with this is that every person has their own list of special things they want to do that are irrelevant to most other people. Every game developer will most certainly want to do things that no one else has done. So, rather than making your own implementation of every possible effect anyone will ever "reasonably" want, it's better to just let programmers do whatever they want and not get in their way.
Of all the 3D games I've played over the last 10 years, I can't think of a single one that looked different because of different shaders. They've all got different textures, different meshes, different game mechanics, etc; and they look and feel very different because of this, not because one game has a slightly different shader than another. The only possible exception to this is Minecraft (due to its "retro" crappy graphics), but if it used a normal/generic shader it'd still look unique (despite being slightly less ugly) and nobody would care.

Basically; I think you're confusing "game developers" with "a few weenies that write game engines". A game developer wants to put something together that's entertaining (or profitable, or both), and if they can't get the graphics in an exact specific way, well who really cares it'll be just as entertaining (or profitable, or both) anyway. The "few weenies that write game engines" are entirely different. They're only interested in retarded "my shader is slightly fancier than yours" pissing contests.
How would you know what the shaders in an individual game are doing? Would you notice if somebody was doing realtime tessellation or not? Do you know what normal, displacement and bump mapping look like? Most shaders aren't things which just jump out! For the majority of games (i.e. those which aim at the photorealistic), shaders which jump out at you are a Bad Thing: they're ruining the verisimilitude of the world and thus the immersion!
I didn't imply that there were no shaders being used. Most of the games may have had completely different shaders to handle things like lighting/shadow, fog, water, whatever; and nobody would've been able to tell the difference if they all used exactly the same standardised shaders for lighting/shadow, fog, water, whatever.
Owen wrote:It's kind of funny. You're exactly demonstrating a common complaint from game developers: that reviewers don't understand what makes graphics good, and always just say a game has "good textures."
It's funny that I'm not the only one that doesn't notice any difference between different shaders?

If I got an old game (with crappy shaders) showing a scene containing a beach and palm trees, and put it side by side with a new game (with much better shaders) with the exact same scene (same beach, same palm trees, same textures, etc); then yes, I probably would notice a difference between the different shaders; and I would wish that I could use the new shaders with the old game. This is what I want - if the shaders were built into the video card's driver then better shaders (and better video cards capable of better shaders) will make all games better.
Owen wrote:Good examples of the uses developers are finding for shaders today include converting bezier patches into triangle meshes on the GPU in real time - normally in response to the objects' size and distance from the camera, such that each polygon's screen-space size is approximately the same. Similar things are now being done for terrain height maps. Let us not forget the now age-old use of bump, displacement and normal mapping such that textures can have some, well, texture. Additionally, shaders are now being deployed such that things like wires hanging through the sky anti-alias properly without the need for massive oversampling.
I was thinking of letting applications say (in their meshes) "the edge between these 2 polygons is curved". A crappy video driver might not do the curve at all, another video driver might take the curve into account when doing lighting, a third video driver might split the larger polygons into hundreds of tiny polygons, and the software renderer used by the printer driver might use ray tracing techniques (and 6 hours of processing time) to draw the curve perfectly. The application developer just says "the edge is curved" and all the different renderers do the best job they can using whichever method/s they like in the amount of time they have to do it.

I was thinking of just having textures with "RGB+height" pixels. A crappy video driver might ignore the height, another video driver might use it for lighting/shadows only, a third might...

I was thinking that an application just says "there's wires here" and let the video driver/renderer do whatever anti-aliasing it can (including using shaders for very thin polygons to avoid the need for massive oversampling if that's what the video driver feels like using shaders for).

These are all examples of things where the application doesn't need to provide shaders at all, and the video driver can/should use shaders.
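
As a sketch of what such declarative data might look like (entirely hypothetical structures, not an existing API):

Code:

#include <stdint.h>

/* "RGB + height" texel: the driver may ignore the height, use it for
 * lighting/shadows only, or do full displacement, at its own discretion. */
typedef struct {
    uint8_t r, g, b;
    uint8_t height;
} TexelRGBH;

/* An edge between two polygons, with a flag saying "this edge is curved".
 * A cheap driver renders it straight; a better one subdivides or adjusts
 * lighting; an offline ray tracer can evaluate the curve exactly. */
typedef struct {
    uint32_t vertex_a, vertex_b;    /* indices into the mesh's vertex list */
    uint32_t poly_left, poly_right;
    uint8_t  is_curved;
} MeshEdge;
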
Owen wrote:Then there are postprocessing effects. Edge detection, bokeh, HDR, bloom, colour grading, FXAA, etc. Some of these were developed by the GPU manufacturers... others came from researchers elsewhere, and even from developers working inside game teams.
Ok. For "boekh" (which seems to be adding blur to simulate focus) the application can just say "the focal point is <depth>" and let the renderer/video driver worry about it. HDR would be used by default (and I don't see a need to disable it) and video driver can do bloom without the application providing anything or asking for it. FXAA is anti-aliasing which is the renderer/video driver's problem (they can use FXAA if they want without the application caring).

For colour grading, I'm not sure what you mean (I googled and found a page about using the wrong colours and then replacing them with the right colours in post-processing, but I couldn't figure out why you'd use the wrong colours to begin with).

For edge detection, I assume you mean edge detection followed by displaying detected edges differently in some way (e.g. for a cartoon effect). To be honest I don't like the idea of deliberately making graphics crappy (the "cartoon" style) and have never played any game that does it anyway; so I'm not convinced that I should bother adding commands/attributes to support this. In any case, if I cared about it enough it'd be trivial to add a "posterise this scene" command and a "do all edges as <colour, thickness>" command without needing every different application to provide their own different "cartoon effect" shader.
Owen wrote:But, OK, you want an example of a game where the appearance is completely and utterly dominated by shaders? I present you WipEout HD's Zone Mode. The appearance of the track there is completely fragment shader controlled.
The use of shaders for HDR and reflections is obvious (and obviously ugly - it's excessive). I couldn't see anything (with the lighting or the track) that couldn't be done with a generic shader that supports HDR, reflections and curves.
Owen wrote:
Brendan wrote:
XanClic wrote:Rasterization itself has nothing to do with light at all. It just projects 3D onto 2D space.
It just projects 3D onto 2D space, in order to approximate the physical model of light. It's just pedantic word games that I'm not interested in wasting my time on.
Nothing at all to do with the physical model of light...

Projecting 3D onto 2D space has nothing to do with the physical model of light, in the same way as projecting 4D onto 3D space has nothing to do with the physical model of light.
So an application describes a 4 meter square room with solid blue walls, a vase on a table, and a light in the middle of the ceiling; and instead of attempting to mimic the results of the physical model of light (where it looks like light hit the vase, table and walls and is reflected back to the camera, where the vase and table cause a shadow on the floor); the renderer ignores the physical model of light completely and has light bending in strange directions with abstract colour effects for no reason and a shadow on the ceiling next to the light, and light passing through the solid walls so that you can see a herd of flying pigs passing by? Suggesting that any renderer that actually works correctly doesn't attempt to mimic the results of the physical model of light is completely and utterly ridiculous.


Cheers,

Brendan

Re: OS Graphics

Post by XanClic »

Brendan wrote:
XanClic wrote:Rasterization itself has nothing to do with light at all. It just projects 3D onto 2D space.
It just projects 3D onto 2D space, in order to approximate the physical model of light. It's just pedantic word games that I'm not interested in wasting my time on.
As I've admitted (much to your satisfaction) all my posts are about stealing your time. Therefore, I'm telling you again (as Owen has also done again) that you still don't grasp the concept of the projection of vector spaces from higher into lower dimensions - knowing you've again spent more time reading this part of my reply than it is actually worth (from your POV).
Brendan wrote:Um, so according to you, "rasterization is impossible to parallelize" and you have "implemented a partly OpenGL 2 compatible software rasterizer (including parallelization)"?
Note the "OpenGL 2". OpenGL 2 is more than just rasterization, it does include shading. And as I've written, shading (especially fragment shading) is suited pretty well for parallelization (which is what I have done).
Actually, the whole story is: I was naive when I began writing the library, thinking rasterization would be pretty easy to parallelize since "GPUs must have so many cores for a reason". I then even tried implementing parallelization the way you suggested it (splitting the screen) but quickly noticed that it simply doesn't work that way. Eventually, I ran the shading functions in parallel, just as a real GPU would do it.
Brendan wrote:
XanClic wrote:
Brendan wrote:Also note that I couldn't care less how the video driver actually implements rendering (e.g. if they use rasterization, ray tracing, ray casting or something else). That's the video driver programmer's problem and has nothing to do with anything important (e.g. the graphics API that affects all applications, GUIs and video drivers).
Hmk. So all your video drivers have an extremely high-level interface comparable to any modern 3D-engine to have the same abstraction layer for both rasterization and ray tracing? Sounds like a great deal of fun – the Gallium graphics stack (an inappropriate term, but you get what I mean) now in fact tries to rip the OpenGL interface from the video drivers itself, since it's apparently too much work to implement OpenGL in every single video driver; just for reference, that is how other people (who are actually working on the stuff we're talking about here) are handling this. Please be aware that I'll utter a cry if I see you answering to this with something along the lines of “that's only because they're used to the old system and too dumb to invent greatness” or “sticking to the current models is only limiting and I will never accept that decades of research didn't result in legacy only but also in some pretty valuable results which are used in today's systems”.
All Gallium does is move code that would be duplicated in lots of video drivers into a common place; which makes sense for a monolithic kernel. I'm not too sure what your point is though - did you have one?
No. Why would my statements make any sense?

Seriously, though (I'll try to structure my train of thought a bit better this time): You say the video drivers on your OS should implement whatever method they find fits them best for the task of rendering 3D space. This implies you're willing to have a very high-level interface to every video driver, since both rasterization and ray tracing (still very different methods) must then be abstracted by this single common interface. This in turn results in a lot of code duplication, since every driver implementing e.g. rasterization has to execute that interface through rasterization - which means using pretty much the same algorithms for every rasterizing driver (i.e., code duplication).

I then noted that even mapping an interface such as OpenGL, which has been designed only for rasterization in the first place, to the "low-level" rasterizing interface often still results in enough code duplication to be worth outsourcing into a common "library" (which should strengthen my point since an even more abstract interface should obviously result in even more code duplication).

My final point was directed directly at you, for you seem to answer to many arguments which involve "everyone's doing this" in the same way, i.e., "everyone's a dumbass then". Interestingly, you somehow did it again, since you explicitly mentioned monolithic kernels in a way which seemed somehow denouncing to me, implying this point (code duplication is bad) wouldn't apply to more modern kernel concepts; which I can't actually comprehend. Code duplication is always bad, since someone has to write, or at the very least, copy, that redundant code (and update every copy once someone finds an improvement to it); even if eventually there is only one copy loaded on any system at any given time.

To summarize the points I tried to make:
  • Too abstract interfaces lead to code duplication
  • Code duplication should always be avoided
  • Not all people except you are dumb, Brendan (I may very well be, but I don't accept it for the majority of the people who have done CS research in the past decades)
  • I get tired of you seemingly trying to dispute that last point at every opportunity
You may now tell me what an idiot I am. I won't dispute that, for it is probably true given the ratio of idiots to the whole population.

You may also tell me that you don't think the majority of CS researchers to be idiots. I'd be very glad to hear that I was wrong implying you might actually believe that.

You may also tell me what an arrogant ******* I am, since reading my last few lines is giving me exactly that impression. Yes, you're right, I probably am and should do something about it. But not at half past two in the morning.
Brendan wrote:
XanClic wrote:
Brendan wrote:Of course you're more interested in trolling than discussing the merits of an alternative approach
Oh, you got me there. Yep, all I've been saying so far is not my honest opinion but just an attempt to steal your time so you can't bring out your OS already.
Nice of you to admit it. Thanks.
I'm just here to make you happy. ♥

Re: OS Graphics

Post by XanClic »

Brendan wrote:So an application describes a 4 meter square room with solid blue walls, a vase on a table, and a light in the middle of the ceiling; and instead of attempting to mimic the results of the physical model of light (where it looks like light hit the vase, table and walls and is reflected back to the camera, where the vase and table cause a shadow on the floor); the renderer ignores the physical model of light completely and has light bending in strange directions with abstract colour effects for no reason and a shadow on the ceiling next to the light, and light passing through the solid walls so that you can see a herd of flying pigs passing by? Suggesting that any renderer that actually works correctly doesn't attempt to mimic the results of the physical model of light is completely and utterly ridiculous.
But simply true. You're obviously confusing light with fundamental principles of euclidean spaces.

EDIT: Also, rasterization doesn't include any shading on its own. So there simply is no light in rasterization itself. That, and rasterization also requires a depth buffer in order to be able to make objects appear opaque regardless of drawing order. So your example of the image of a herd of flying pigs passing by shining through the walls is actually pretty much spot-on (if you don't include a depth buffer which is already an extension to rasterization, as are shaders).

Rasterization doesn't mimic the physical properties of light; it's a linear projection of 3D (4D, to be more precise) onto 2D space using only fundamental principles of euclidean spaces, which have nothing to do with light whatsoever (indeed, real light is the thing behaving kind of strangely in regard to these plain transformations). Shading is the part of current real-time rendering systems which tries to mimic the actual behavior of light (though mimicking lighting (or "shading") is just one thing shaders can be used for).
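
Exactly that projection step, as a short C sketch: a homogeneous (4D) vertex is multiplied by a matrix and divided by w, and no lighting term appears anywhere (function names are just for illustration; it assumes clip.w is non-zero).

Code:

typedef struct { float x, y, z, w; } Vec4;

/* Row-major 4x4 matrix times column vector. */
static Vec4 mat4_mul(const float m[16], Vec4 v)
{
    Vec4 r;
    r.x = m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w;
    r.y = m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w;
    r.z = m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w;
    r.w = m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w;
    return r;
}

/* Project a 3D point (as a 4D homogeneous vertex) to 2D screen coordinates.
 * This is the whole of rasterisation's geometry: a linear map plus a divide.
 * Anything that looks like light comes from shading done elsewhere. */
static void project(const float mvp[16], Vec4 v, int width, int height,
                    float *sx, float *sy)
{
    Vec4 clip = mat4_mul(mvp, v);
    *sx = (clip.x / clip.w * 0.5f + 0.5f) * (float)width;
    *sy = (0.5f - clip.y / clip.w * 0.5f) * (float)height;
}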

Re: OS Graphics

Post by Brendan »

Hi,
XanClic wrote:
Brendan wrote:
XanClic wrote:Rasterization itself has nothing to do with light at all. It just projects 3D onto 2D space.
It just projects 3D onto 2D space, in order to approximate the physical model of light. It's just pedantic word games that I'm not interested in wasting my time on.
As I've admitted (much to your satisfaction) all my posts are about stealing your time. Therefore, I'm telling you again (as Owen has also done again) that you still don't grasp the concept of the projection of vector spaces from higher into lower dimensions - knowing you've again spent more time reading this part of my reply than it is actually worth (from your POV).
No, what has happened (at least from my perspective) is that I tried to say something obvious (e.g. "the goal of the renderer is to generate a picture as if the physical model of light was followed; regardless of whether the physics of light actually were followed or not, and regardless of how any specific renderer is implemented"), and for some reason you've failed to understand what I was saying, used your failure to understand as an excuse to assume I'm some moron that has no idea how the implementation of (e.g.) a rasterizer and a ray tracer is very different, and then (regardless of how many times I attempt to explain my original extremely obvious point) you've continued to ignore what I'm actually saying and chosen to be patronising instead.
XanClic wrote:
Brendan wrote:Um, so according to you, "rasterization is impossible to parallelize" and you have "implemented a partly OpenGL 2 compatible software rasterizer (including parallelization)"?
Note the "OpenGL 2". OpenGL 2 is more than just rasterization, it does include shading. And as I've written, shading (especially fragment shading) is suited pretty well for parallelization (which is what I have done).
Actually, the whole story is: I was naive when I began writing the library, thinking rasterization would be pretty easy to parallelize since "GPUs must have so many cores for a reason". I then even tried implementing parallelization the way you suggested it (splitting the screen) but quickly noticed that it simply doesn't work that way. Eventually, I ran the shading functions in parallel, just as a real GPU would do it.
A very long time ago I wanted to learn/gain experience with the 80x86's floating point instructions; so I decided I'd write a 3D rasteriser. It was relatively simple (no textures) and didn't do anything in parallel (it was running on an old Pentium CPU and multi-core didn't exist back then). I thought about it and came up with a way to do "infinite anti-aliasing" in the horizontal direction (e.g. so that "not quite vertical" diagonal edges were anti-aliased perfectly, but "not quite horizontal" diagonal edges still sucked) that avoided the need for a z-buffer. Each mesh was a list of vertexes, then a list of polygons that referred to "vertex numbers" (e.g. a triangle would be "vertex number 1, vertex number 2, vertex number 3, colour"). I calculated vertexes and built meshes of polygons by hand (pen, paper, calculator and "myShip: dd 1.0, 3.3, 5.3, 76.9, 7.4, 8.2, 13.8,...") to create 2 different space ships.

The basic steps were:
  • Part A:
    • for each object:
      • use the camera's position and angle and each object's position and angle to create a transformation matrix
      • use the transformation matrix to convert the vertexes for each object into "screen space"
      • do "vertex lookup" to create a list of polygons using 3D co-ords instead of vertex numbers
      • add the final list of polygons to a master list
  • Part B:
    • for each polygon in the master list:
      • clip it to the edges of the screen
      • walk down the left and right edges, and for each Y:
        • find floating point "starting X, Z and colour" and "ending X, Z and colour" that describe a line fragment
        • insert the line fragment into a (sorted in order of X) list of line fragments for Y; while doing "hidden line fragment" removal (e.g. including finding the point where one line fragment intersects with another and splitting them)
  • Part C:
    • for each screen line:
      • convert the list of line fragments for that screen line into pixel data and store it in a buffer (including handling line fragments that don't start/end on a pixel boundary, and line fragments that are much smaller than a pixel)
  • Part D:
    • blit the buffer to display memory
None of this was designed with parallel CPUs in mind; however...

For Part A it would've been trivial for each CPU to do each "Nth" object in parallel, adding the final polygons to a per-CPU list; where each CPU's separate list is combined into a single list after they've all finished Part A. For Part B and Part C, it would've been trivial to divide the screen into N horizontal bands where each CPU does one horizontal band; with per-CPU buffers. For Part D you could still do the "N horizontal bands" thing, but you're going to be limited by bus bandwidth so there's probably no point.
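
A sketch of how that Part A split might look with threads, assuming a placeholder transform_object() that does the per-object work from the list above; each worker handles every Nth object and appends to its own list, so no locking is needed and the per-CPU lists are simply concatenated after the join:

Code:

#include <pthread.h>
#include <stddef.h>

#define MAX_CPUS 8

typedef struct { float data[16]; } Poly;   /* screen-space polygon from Part A */

extern size_t object_count;
extern size_t transform_object(size_t obj, Poly *out);  /* placeholder: returns polys written */

struct Worker { int id, cpus; Poly *list; size_t count; };

static void *part_a_worker(void *arg)
{
    struct Worker *w = arg;
    for (size_t obj = (size_t)w->id; obj < object_count; obj += (size_t)w->cpus)
        w->count += transform_object(obj, &w->list[w->count]);
    return NULL;
}

/* Caller provides one Worker (with a big enough list buffer) per CPU, then
 * concatenates workers[0..cpus-1].list into the master list afterwards. */
static void part_a_parallel(struct Worker workers[], int cpus)
{
    pthread_t tid[MAX_CPUS];
    if (cpus > MAX_CPUS) cpus = MAX_CPUS;
    for (int i = 0; i < cpus; i++) {
        workers[i].id    = i;
        workers[i].cpus  = cpus;
        workers[i].count = 0;
        pthread_create(&tid[i], NULL, part_a_worker, &workers[i]);
    }
    for (int i = 0; i < cpus; i++)
        pthread_join(&tid[i], NULL);
}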

Note 1: To fix the lack of anti-aliasing in the vertical direction I modified it to generate twice as many horizontal lines and merge them. I also added shading but I'm not sure how I did that (I think it was based on the angle between the polygon's normal and the camera); and was thinking of adding support for textures but never got around to it. From what I remember the frame rate wasn't very good (something like 16 frames per second with 100 polygons), but none of the code was optimised (e.g. I used "double" floating point for everything) and the CPU was a 166 MHz Pentium (slow clock speed, no SSE or anything fancy).

Note 2: I'm not suggesting this was a good way to do rendering - it was just an experiment I slapped together to get experience with the FPU. The only reason I'm describing it is to show how easy it'd be to do it in parallel, even for something that wasn't designed for multiple CPUs.
XanClic wrote:Seriously, though (I'll try to structure my train of thought a bit better this time): You say the video drivers on your OS should implement whatever method they find best for the task of rendering 3D space. This implies you're willing to have a very high-level interface to every video driver, since both rasterization and ray tracing (still very different methods) must then be abstracted by this single common interface. This in turn results in a lot of code duplication, since every driver implementing e.g. rasterization has to implement that interface through rasterization - which means using pretty much the same algorithms in every rasterizing driver (i.e., code duplication).

I then noted that even mapping an interface such as OpenGL, which was designed only for rasterization in the first place, to the "low-level" rasterizing interface often still results in enough code duplication to be worth outsourcing into a common "library" (which should strengthen my point, since an even more abstract interface should obviously result in even more code duplication).

My final point was aimed directly at you, since you seem to answer many arguments which involve "everyone's doing this" in the same way, i.e., "everyone's a dumbass then". Interestingly, you did it again: you explicitly mentioned monolithic kernels in a way which seemed dismissive to me, implying this point (code duplication is bad) wouldn't apply to more modern kernel concepts; which I can't actually comprehend. Code duplication is always bad, since someone has to write, or at the very least copy, that redundant code (and update every copy once someone finds an improvement to it); even if eventually there is only one copy loaded on any system at any given time.

To summarize the points I tried to make:
  • Too abstract interfaces lead to code duplication
  • Code duplication should always be avoided
  • Not everyone except you is dumb, Brendan (I may very well be, but I don't accept it for the majority of the people who have done CS research in the past decades)
  • I get tired of you seemingly trying to dispute that last point at every opportunity
Too abstract interfaces lead to code duplication, which can be solved in many ways (e.g. static libraries, shared libraries, services, etc.) that don't include using a lower-level abstraction for the video driver's API.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
User avatar
Brendan
Member
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: OS Graphics

Post by Brendan »

Hi,
Brendan wrote:A very long time ago I wanted to learn/gain experience with the 80x86's floating point instructions; so I decided I'd write a 3D rasteriser....
I got all sentimental and went looking through my archives to see if I could find this again after all these years, and I found it!

Well, I actually found 2 different versions - one in real mode assembly for DOS that does use a z-buffer (which was my FPU practice), and a second version in C that doesn't use a z-buffer. Somehow I completely forgot I wrote 2 versions and got the details mixed up between them. In my defense, the C version uses Allegro version 4.0 (released in 2001) so it must've been over 10 years ago.

Anyway, after some messing about (finding an old DLL, and figuring out it needs to run in "Win95 compatibility mode" if you don't want it to crash), I took some screen shots. :)

Looking from the cockpit of a fighter towards a "space bus":
ss2.png
And looking from the back seat of a space bus at the fighter:
ss1.png
Running 640*480 (32-bpp) full screen on a 2.8 GHz AMD Phenom II, the frame rate jumps around rapidly between 100 FPS and "1.#J" (some sort of overflow I guess). If it used all 4 cores it might have got 800 FPS at 640*480, or 80 FPS at 1920*1600.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
User avatar
AndrewAPrice
Member
Member
Posts: 2299
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: OS Graphics

Post by AndrewAPrice »

Hi Brendan,

This post has been getting off topic, so I will try to get back to what I interpret your original post to be about - rather than a graphics buffer being a set of pixels, it contains a list of drawing commands, in the hope that this will result in a lot less data having to be transferred to the parent. I think it has the potential to be a great idea.

I have some topics I'd like to bring up, not all bad:

1) You can do vector graphics!
You don't need to issue a redraw event when you resize a window! Some applications may still want to do this of course (if you resize a Firefox window, Firefox will want to use that extra space to show more of the webpage, rather than make everything scale larger).

2) Asynchronous rendering
A really simple example - I am writing an Emacs-like text editor that uses Lua (it was using Google's V8 JavaScript, but I ran into overhead issues switching between sandboxed instances) - these high-level languages are very slow at bit twiddling, so that's something I have to avoid altogether. Instead I use Cairo. When it comes time to redraw the window, I have a loop that goes over each frame (displayed document) and says (pseudocode):

Code: Select all

foreach(Frame f):
   if(!f.redraw && !screen.fullredraw) continue; // skip

   cairo_clip(f.x, f.y, f.width, f.height);
   lua_call(f.draw_func);
I wrote a C wrapper around Cairo, so inside each frame's draw_func, Lua calls the high-level Cairo wrapper instead (select pen, draw line), and it works pretty well.
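
For a rough idea of what such a wrapper looks like (the function names and the static cairo_t here are my own assumptions, not the editor's actual code), a single Lua-callable drawing function could be registered like this:

Code: Select all

#include <lua.h>
#include <lauxlib.h>
#include <cairo.h>

static cairo_t *current_cr;   /* set by the host before calling f.draw_func */

/* Lua-callable wrapper: draw_line(x1, y1, x2, y2) */
static int l_draw_line(lua_State *L)
{
    double x1 = luaL_checknumber(L, 1);
    double y1 = luaL_checknumber(L, 2);
    double x2 = luaL_checknumber(L, 3);
    double y2 = luaL_checknumber(L, 4);

    cairo_move_to(current_cr, x1, y1);
    cairo_line_to(current_cr, x2, y2);
    cairo_stroke(current_cr);
    return 0;                 /* no values returned to Lua */
}

void register_drawing_api(lua_State *L)
{
    lua_register(L, "draw_line", l_draw_line);
}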

(I must note that I also wrap cairo_clip in Lua, but in the wrapper I do additional bounds checking to ensure the 'sub'-clip stays within the frame; otherwise it would be possible for a frame to draw at arbitrary positions all over the screen!)
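
A minimal sketch of that kind of bounds-checked clip wrapper (the frame_t type and names are mine; only the cairo_rectangle()/cairo_clip() calls are real Cairo API):

Code: Select all

#include <cairo.h>

/* Hypothetical frame rectangle, in window coordinates. */
typedef struct { double x, y, width, height; } frame_t;

/* Clamp the requested clip rectangle to the frame's bounds before applying
   it, so a frame can never establish a clip outside of itself. */
static void frame_clip(cairo_t *cr, const frame_t *f,
                       double x, double y, double w, double h)
{
    double left   = f->x + (x < 0 ? 0 : x);
    double top    = f->y + (y < 0 ? 0 : y);
    double right  = f->x + f->width;
    double bottom = f->y + f->height;

    if (left + w > right)  w = right  - left;
    if (top  + h > bottom) h = bottom - top;
    if (w <= 0 || h <= 0)  return;           /* nothing left to clip to */

    cairo_rectangle(cr, left, top, w, h);
    cairo_clip(cr);
}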

This may not work for general-purpose operating system graphics - it requires all graphics to be synchronous. You need to wait for the lower level to finish drawing before the higher level can use the results, and one bad application can make the entire system unresponsive if it takes too long to draw.

If you do asynchronous graphics (everything running at its own speed) then there can be problems if you try to access the results while the child is halfway through drawing. To see this in action, take any low-level rendering framework, clear the screen red, and draw a photo on top:

Code: Select all

while(!keyDown(key_esc)):
   draw_rect(0, 0, width, height, RED);
   draw_image(0, 0, width, height, loaded_photo);
You will see the image flash between red and the photo. That's because the graphics driver or window manager is trying to read the result at arbitrary points of execution (if you have components that never overlap, such as rendering a single image or video to the screen, this issue does not occur). As a result, most modern window managers have a "Double Buffer" flag you can set - otherwise GUI components will flicker all over the place as you resize the window. (You have a "drawing" buffer you draw to, and a "rendering" buffer the graphics driver reads from; when you finish drawing, you copy the "drawing" buffer to the "rendering" buffer, and that prevents flickering overlaps. You actually have to copy the buffers - you can't do a simple pointer swap; if you think about this, you will figure out why. You can optimize it with a pointer swap, at the expense of some memory, if you do triple buffering.)

You will need to think about how you will implement double buffering. With pure pixels, you can do a lock-less copy because each pixel fits in a 32-bit word, and on most systems copying a memory-aligned word is atomic.

In 99% of applications it doesn't matter if your old buffer is halfway through copying into the new buffer when the screen updates (hard-core 3D gamers will sometimes notice tearing - if you rotate the camera fast, the top part of the screen shows the next frame while the bottom part still contains the previous frame; you need to synchronize the buffer copying to avoid this - look up "v-sync"). However, if your buffer contains a "command list" like you're proposing, instead of pixels, then having half of the old command list mixed with half of the new command list doesn't give you half of the previous frame and half of the next frame - you'll see a completely different final image, because the renderer's state could have changed in a way that dramatically changes how the rest of the buffer is executed.

To avoid both of these issues (drawing a half-written buffer to the screen, and having to lock while copying), I would recommend a triple buffer approach. You will need 3 'command list' buffers:
1. Currently Drawing Into
2. Frame Ready To Render
3. Currently Drawing Into The Screen

Now, you can have two loops running completely asynchronously. In your application:

Code: Select all

// real-time, we want to keep updating the screen (video game, playing media, etc.)
while(running):
   Draw Into 1;
   Atomically Swap Pointers 1 And 2;

// or event based.. (only update when something has changed.)
on button.click:
    label.text = "Clicked";
    Draw Into 1;
    Atomically Swap Pointers 1 And 2;
And in your window manager/graphics driver/anything reading the buffer:

Code: Select all

while(running):
    Draw 3 To The Screen;
    Atomically Swap Pointers 2 And 3;
You still have to take a lock, but only to safely swap the pointers (and depending on your architecture you may have an atomic "swap" instruction) - which is arguably a lot faster than locking a buffer for the entire duration of copying to it, or of drawing to the screen.
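
For example, a minimal sketch of those swaps using C11 atomics (the buffer type and function names are placeholders, and initialising the three distinct buffers is omitted):

Code: Select all

#include <stdatomic.h>

typedef struct cmd_buffer cmd_buffer_t;      /* opaque command-list buffer */

static cmd_buffer_t *drawing;                /* 1: currently drawing into    */
static _Atomic(cmd_buffer_t *) ready;        /* 2: latest complete frame     */
static cmd_buffer_t *displaying;             /* 3: currently being displayed */

/* Application side: publish the frame we just finished and take back
   whichever buffer was sitting in the "ready" slot. */
void publish_frame(void)
{
    drawing = atomic_exchange(&ready, drawing);
}

/* Window manager / driver side: grab whatever is in the "ready" slot and
   hand back the buffer we just finished displaying. (A real implementation
   would also track whether "ready" actually holds a newer frame.) */
void acquire_frame(void)
{
    displaying = atomic_exchange(&ready, displaying);
}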

3. Dynamic allocation/memory usage
You'll have to consider that the buffers may rapidly change size between frames. If you add a new GUI component, that may be an extra 10 commands. It's unlikely that an application will know how many commands it is about to draw, until it draws them, so likely you'll have to dynamically expand the buffer as you write to it.

In an event-based GUI system, I would consider it a non-issue, since you only need to redraw when something changes (which happens when the user provides input, for example), but in a real-time application like a video game, every time a character enters the screen that may mean a dozen new draw calls, and you might end up resizing the command buffer every frame.

In a pixel-based system, your buffers have a fixed size (width*height) unless the user resizes the window.
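
To make the "dynamically expand the buffer as you write to it" point concrete, a growable command buffer (names and layout are my own, treating commands as raw bytes) could be as simple as:

Code: Select all

#include <stdlib.h>
#include <string.h>

/* Hypothetical growable command list: commands are appended as raw bytes
   and the storage doubles whenever it runs out of space. */
typedef struct {
    unsigned char *data;
    size_t         used;
    size_t         capacity;
} cmd_list_t;

int cmd_list_append(cmd_list_t *list, const void *cmd, size_t size)
{
    if (list->used + size > list->capacity) {
        size_t new_cap = list->capacity ? list->capacity * 2 : 256;
        while (new_cap < list->used + size)
            new_cap *= 2;
        unsigned char *p = realloc(list->data, new_cap);
        if (!p)
            return -1;                        /* out of memory */
        list->data     = p;
        list->capacity = new_cap;
    }
    memcpy(list->data + list->used, cmd, size);
    list->used += size;
    return 0;
}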

4. Performance
Your main concern is to avoid copying pixels. Above, I discussed how we would need to do triple buffering (at the expense of some memory) to avoid locking while copying between your drawing and screen buffers.

You can even triple buffer pixel buffers to avoid copying them, until the final moment when you want to compose everything on the screen into one image to send to your video card.

At some point, you will have to execute your command list. If your whole OS graphics stack is abstracted away from the pixel, then this will be done at the final stage in your graphics driver. Will you be able to execute this command buffer fast enough during the v-sync period to avoid the 'flickering' I mentioned earlier, where a semi-drawn image is displayed on the screen?

If not, you will have to double buffer in the graphics driver (one buffer that you're executing the command list into, one buffer that's ready for rendering). So you'll be copying pixels during this final double buffering step.

If you're using traditional pixel buffers, and can triple buffer all the way from your application into the graphics driver using pointer swaps, you don't have to do any pixel copying until the final moment when you compose everything into a single buffer, then copy this buffer into the final graphics card buffer (and you can skip this double buffering if you don't have any overlapping windows, such as a single full-screen application).

Next thing - copying pixels is blazingly fast. Will executing your command list also be blazingly fast? If the command lists are executed in the graphics driver, does that mean the graphics driver will have to redraw all of the fonts, redraw all of the components on the webpage, redraw all of that video game that you have paused on the side of the screen?

Rather than executing the command buffer in the graphics driver, rendering everything on the screen and recalculating the final colour of every pixel every time you want to update the screen - wouldn't it give better performance to let each application have its own pixel buffer, with the final colour of every pixel already precalculated (which the application was able to do in its own time), so that all you have to do is copy the pre-calculated pixels onto the screen?
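
In other words, the compositor's job reduces to a per-window blit of already-final pixels, something like this sketch (made-up names, ignoring clipping and window overlap order):

Code: Select all

#include <stdint.h>
#include <string.h>

/* Hypothetical per-window buffer of precalculated pixels. */
typedef struct {
    int       x, y;          /* window position on the screen     */
    int       width, height; /* window size in pixels             */
    uint32_t *pixels;        /* width * height precomputed pixels */
} window_buffer_t;

/* Copy one window's already-final pixels into the screen buffer; no
   lighting, shading or font rendering happens here. */
void compose_window(uint32_t *screen, int screen_width,
                    const window_buffer_t *win)
{
    for (int row = 0; row < win->height; row++) {
        memcpy(&screen[(win->y + row) * screen_width + win->x],
               &win->pixels[row * win->width],
               (size_t)win->width * sizeof(uint32_t));
    }
}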

I think this is more obvious in 3D. Calculating a pixel in 3D means performing lighting equations, calculating shadows that could have been projected on to it from multiple sources, applying refraction/reflection textures depending on the material, doing texture lookups.

It's often desirable to let the program choose when to perform these calculations.

In a 3D modelling program (Blender, Autodesk's tools, Google Sketchup, etc.) you can work with complicated models containing millions of vertices and extremely detailed textures. However, the UI of those tools stays responsive, because the 3D scene is only redrawn when something changes (the camera moves, or we apply a change to the model), not while the user is clicking on toolbars and working in sub-windows. You can have multiple 3D modelling programs open side by side or on different monitors, and the entire system is fully responsive, because only the modelling program you're working with is updating its 3D scene.

The system is able to do this optimisation because all the video card driver cares about is receiving the final pixels from each window. The application is intelligent enough to know not to redraw the 3D scene unless something has changed in it.

If we're actually sending the 3D scene as a command list to the graphics driver to draw for us, it may not know that it has not changed, and will try to redraw every 3D scene every time the monitor refreshes.

There are also times when sending individual pixels is desirable. A custom video decoder? Sending millions of "SetPixel(x,y,colour)" commands would be significantly slower than just letting it write the pixels itself. When I worked on VLC, I saw that they do a lot of low-level assembly optimization for unpacking pixels in bulk. You could take the OpenGL route and say "Yes, you can draw pixels into a texture, then send a command to draw that texture to the screen" - but is that not just causing more pixel copying - the very thing you're trying to avoid?

As a note to what I said above - if you have a full-screen application, you can often triple buffer straight from the application into the graphics card's buffer without dealing with the window manager. This is called 'exclusive mode' in the Windows world and is often used for full-screen applications. Starting in Windows Vista, all window compositing is done on the GPU, but in earlier versions, full-screen 'exclusive mode' (where the Direct3D surface would render directly to the screen) was significantly faster than full-screen or windowed 'non-exclusive mode' (where the Direct3D surface would be drawn on the GPU, then copied back into main memory so the CPU could composite the windows (other windows, popups, etc.) and re-upload the final screen back to the GPU). Gamers will probably remember those days, because popup notifications would force fullscreen games to minimize. If you're worried about raw pixel-pushing speed, consider having an 'exclusive mode' for media players, video games, etc.

5. Multi-monitor rendering
I think this is unrelated to command lists vs. pixels. Every multi-monitor machine I've used has allowed me to have windows span more than one monitor.

6. Shaders
This is unrelated. I would see your system as more favorable to shaders, if your command list is executed on the GPU.
Brendan wrote:The physics of light do not change. The only thing that does change is the "scene"; and this is the only thing an application needs to describe.
It is true, the physics of light do not change. However, computers have yet to reach the stage where we are able to simulate even just a room at a fine-grained atomic level in real time. Real-time ray tracing is barely reaching that level; real-time photon tracing is still way off. Rendering graphics is all about using approximations that give an effect that is "good enough" while maintaining acceptable performance.

Yes, the programmable pipeline (the name given to programmable shader-based rendering) is harder to learn than the fixed-function pipeline (non-shader-based rendering), because GPUs have become significantly more complicated. But if you're working on a game that requires 3D graphics, there are libraries out there to help you - like Ogre 3D and Irrlicht. With these frameworks, you don't have to worry about shaders (but can if you want to) - you just create a scene, load in a model, attach a material, set up some lights, and away you go. You don't have to worry about shadow projection, morphing bones to animate characters, occlusion culling (octrees and anti-portals), or writing custom shaders.

Shaders have taken off because they're fast. Modern graphics cards are able to run pixel shaders on the order of billions of invocations per second, using specialized units that run in parallel and are designed specifically for those sorts of operations.

What are some effects that are hard to do without shaders? Rim lighting, cell shading, water (particles as metaballs calculated in screen space, foam and mist), soft particles, ambient occlusion, motion blur - and custom lighting pipelines like deferred lighting and pre-pass lighting (which would require a render pass per light in traditional rendering) that are best suited to a particular application. Vertex shaders let you do animation on the GPU (so you're not always having to upload new vertices with new positions each frame) - flapping flags, water waves, skeletal animation. Tessellation shaders allow you to insert vertices at runtime, purely on the GPU, based on dynamic parameters such as textures and the camera position, preventing you from having to a) continuously stream new vertices to the GPU, and b) upload large geometry to the GPU, when you can just upload a flat plane and let the tessellation shader do the rest.

Conclusion
A call-based graphics stack surely is interesting, and I applaud your innovative ideas.

It really depends on your specific application. If you're developing an industrial OS that uses specific 'standardized' GUI components, it may be better to let your application send those GUI components directly to the OS.

However, a general-purpose operating system is likely to want to support all kinds of dynamic content, including media, web pages and video games - in which case, after thinking deeply about it, I still think you would get better performance using traditional pixel-based buffers with a fast lockless triple buffering system that prevents any copying until the final image is composed onto the screen.

As far as vector drawing APIs for resolution-independent user interfaces and scene-based APIs for simplifying 3D graphics are concerned - I think these would be better suited as libraries that wrap around your low-level graphics stack, be it a pixel buffer or GPU-accelerated OpenGL calls.

Windows wraps around these with its own APIs like GDI, WPF, Direct2D, and DirectWrite. Mac OS X offers Quartz 2D. Cairo and Skia are cross-platform solutions that can optionally use an OpenGL backend or write directly to a pixel buffer. As far as 3D goes, there are hundreds of cross-platform scene APIs out there - Irrlicht offers a software backend that doesn't require GPU acceleration.

Personally, if I were you, I'd first focus on optimizing the pushing of pixel buffers from your application to the screen in as few calls as possible. Then focus on your high-level vector drawing or scene-based 3D as a user library.
My OS is Perception.
Gigasoft
Member
Member
Posts: 855
Joined: Sat Nov 21, 2009 5:11 pm

Re: OS Graphics

Post by Gigasoft »

The original post actually mixes up many unrelated problems.

- Should 2D GUIs be drawn completely into a rectangular buffer and then clipped, or should clipping be applied while drawing? Obviously, most existing windowing systems do the latter. The notion that "most existing systems" draw the entire thing to a buffer and then work with the resulting pixels is simply a lie.
- How about 3D games? Of course you can do the same for 3D with very minor changes to existing code, but most games don't bother, as they are designed to run in the foreground. With software vertex processing, you can avoid uploading textures that are not used. Some cards may also issue an IRQ to load textures when needed when using hardware vertex processing.
- When pixels are being generated to be used as an input for a later drawing operation, should only pixels that are going to be used be generated? Yes, if possible. But then we need to know in advance which pixels will be used. The application knows this, but the system does not. If this is handled automatically by the system, all drawing operations must be specified before drawing can begin. If handled by the application, there is no problem.
- Should the drawing commands be stored for later use? Possibly, if they are expensive to prepare. But for a continuous animation, there is of course no point.
- Should video card drivers implement game engines? Doubtfully, as game engines are not video card specific. A generic game engine should be able to work with all video cards, provided that their interface is standardized. There is no reason for a particular video card to display a game differently apart from having different capabilities, which can just as well be reported in a standardized way to the game engine.
User avatar
Owen
Member
Member
Posts: 1700
Joined: Fri Jun 13, 2008 3:21 pm
Location: Cambridge, United Kingdom
Contact:

Re: OS Graphics

Post by Owen »

Gigasoft wrote:The original post actually mixes up many unrelated problems.

- Should 2D GUIs be drawn completely into a rectangular buffer and then clipped, or should clipping be applied while drawing? Obviously, most existing windowing systems do the latter. The notion that "most existing systems" draw the entire thing to a buffer and then work with the resulting pixels is simply a lie.
Most existing crappy graphics stacks (e.g. Windows with DWM disabled, X11 without Composite, Mac OS 9) apply clipping while drawing.

More modern graphics stacks render the whole window and don't bother with clipping it (because they want to be able to do things like realtime previews - observe what happens when you hover over a taskbar icon on Windows Vista/7/8: a live preview of the window can be seen in the popup).
Gigasoft wrote:- How about 3D games? Of course you can do the same for 3D with very minor changes to existing code, but most games don't bother, as they are designed to run in the foreground. With software vertex processing, you can avoid uploading textures that are not used. Some cards may also issue an IRQ to load textures when needed when using hardware vertex processing.
Software vertex processing went out of fashion years ago as just too slow...

In general, loading resources "when needed" is a great way to cause horrible stutter and drop frames.

...though with regards to "as needed" texture loading, search for "megatexture" and "partially resident textures"
Gigasoft wrote:- When pixels are being generated to be used as an input for a later drawing operation, should only pixels that are going to be used be generated? Yes, if possible. But then we need to know in advance which pixels will be used. The application knows this, but the system does not. If this is handled automatically by the system, all drawing operations must be specified before drawing can begin. If handled by the application, there is no problem.
- is the cost of this optimization greater than the cost of generating all the pixels?
Gigasoft wrote:- Should the drawing commands be stored for later use? Possibly, if they are expensive to prepare. But for a continuous animation, there is of course no point.
- Should video card drivers implement game engines? Doubtfully, as game engines are not video card specific. A generic game engine should be able to work with all video cards, provided that their interface is standardized. There is no reason for a particular video card to display a game differently apart from having different capabilities, which can just as well be reported in a standardized way to the game engine.
Game engine != Graphics engine.

But there are significant differences in the internal structure of various games' graphics engines, so...