OS Graphics


Re: OS Graphics

Post by Brendan »

Hi,
rdos wrote:I'd follow in Brendan's trail if it weren't for the fact that I know it's not worthwhile pursuing. IOW, I agree with his idea that the IT world has gone bankrupt through bad design and code, and that also includes most open-source projects. It is end users who cannot make a simple program, and sometimes cannot even use a simple program, that determine the decisions of the software industry. In addition to that, big software companies have produced so many conflicting standards that they have effectively locked up software programmers, even though those standards are no longer closed.

In essence, being a programmer on "modern" tools means you need to dedicate most of your time learning crap that big companies made in order to tie programmers up on their solutions so they have no time for inventing their own.
Exactly.

And the single largest problem is that nobody has the ability to effectively deprecate anything. We're still using languages from the 1970s, supporting graphics file formats from the 1980s and coping with web standards from the 1990s; and every year they add more extensions, more standards, more file formats, more languages, etc. on top of "hassle mountain"; and you know that mountain is just going to keep growing and growing and growing because they're shovelling stuff onto it 20 times faster than old stuff rots away.

Sooner or later someone has to demolish "hassle mountain" and fix the cause of the problem. That is what I'm trying to do.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

Re: OS Graphics

Post by rdos »

Brendan wrote: And the single largest problem is that nobody has the ability to effectively deprecate anything. We're still using languages from the 1970s, supporting graphics file formats from the 1980s and coping with web standards from the 1990s; and every year they add more extensions, more standards, more file formats, more languages, etc. on top of "hassle mountain"; and you know that mountain is just going to keep growing and growing and growing because they're shovelling stuff onto it 20 times faster than old stuff rots away.
Exactly, and the big software companies actually live on this and don't want it to change, because they already have all the legacy code, which hinders new competitors from appearing on the market, and they have hardware companies that will write drivers for their OSes.
Brendan wrote: Sooner or later someone has to demolish "hassle mountain" and fix the cause of the problem. That is what I'm trying to do.
I get the impression that you want to support all this diversity in your own way, which would make your designs incompatible and also extremely complex.

I go about this problem in a different way. I select a reasonable standard and then I ignore other alternatives. For example I have decided to support IDE and AHCI, but not SCSI or anything proprietary. For the moment I have decided not to support UEFI because it is a mess written for shitty designs (but I might have to reconsider that if it becomes common enough). I support TCP/IP and common network cards, but won't bother with shitty WiFi designs that totally lack a reasonable standard. I don't support the shitty MS file sharing mess (probably never will), and instead force users to use FTP. I don't support any database standard since those are designed to gain monopoly. I won't implement PHP or .NET, but possibly Java, since the former standards are terrible. Instead I provide users with an interface so they can create dynamic content with C++.

So I don't invent new standards; rather, I ignore the ones I really dislike as much as possible, and in the end narrow down the mountain. I think that is a more useful approach.

Re: OS Graphics

Post by Brendan »

Hi,
XanClic wrote:
Brendan wrote:Please understand that there's a massive difference between "realistic rendering" and "reality". For example, you can have a game where the player is a yellow sphere with a slit for a mouth who runs (without arms or legs) through corridors filled with pills that float in mid-air (unaffected by gravity) while trying to avoid ghosts; and regardless of how realistic the graphics are for this game, if you turn the computer off and go outside to "reality" you won't see yellow spherical people running around, or pills floating in mid-air, or ghosts. Trying to suggest that "realistic rendering" has anything to do with "reality" is silly.
Of course realistic rendering is linked to reality. The name itself is the first piece of evidence. Second, I was talking about photorealistic rendering on purpose. The very definition of that is that you try to create an image which looks like something a camera might have taken. I'm very well aware of the fact that the process in which that image is generated on today's GPUs has very little to do with the process "invoked" by reality. However, the purpose still is to copy the results of reality, thus it is obviously linked to reality. You took my statements about "photorealistic rendering" for statements about "realistic rendering" - so you seem to imply they're both the same. I'd agree with you on that one. But if they are the same, realistic rendering is also very much linked to reality - its result, not the process used (actually, the process is undefined).
So when you're saying one can use (photo-)realistic rendering to generate non-realistic pictures, this is by definition just wrong.
To me, "photo-realistic rendering" means using as much processing time as you want to generate the highest quality image possible (typically using ray tracing, and possibly using a render farm). What we're talking about I'd call "real-time rendering" where you have to sacrifice image quality due to time constraints (e.g. because you want 60 frames per second rather than 1 frame every 4 hours).

In both cases you try to implement a renderer that tries to follow the physical model of light. For photo-realistic rendering you create a very accurate model of light; and for real-time rendering you skip a lot of things to save time (e.g. severely limit reflection and ignore refraction). Then you apply your renderer to a scene; and that scene may be realistic or entirely unrealistic.

Basically, "photo-realistic rendering of reality" (what you seemed to be talking about) is very different to "real-time rendering of realistic or unrealistic scenes" (what games do).
XanClic wrote:
Brendan wrote:I'd just add a "load_shader" command (that tells the video card to load a shader that uses some sort of standardised byte-code for shader language, like GLSL)
First off, GLSL is not byte code (OpenGL does in fact specify a very vague format for compiled shaders, but this isn't portable between different GPUs). Second, you do know that shaders aren't just some trivial stuff which you can throw into any 3D engine and the result will always be the same? Shaders heavily depend on the engine architecture involved. The shaders would have to be written specifically for your rendering architecture - but I guess you'll just ignore this since you ignore everything which isn't written specifically for your system. Also, the interface you provide to the shaders would have to be stable, but that problem is actually somehow solvable.

Third, shaders aren't everything. Although today's rendering pipelines heavily depend on shaders, there is still a whole lot which happens outside of them. But I guess you'd then be free to specify input data to the shaders as well as textures on your system so you'd actually be able to use an API equivalent to, e.g., OpenGL (although you could just provide that directly, then, but I'll come back to that point).
You're right - it's worse than I thought (which was bad enough), and I really should do everything possible to make sure applications/games don't need to provide their own shaders.
XanClic wrote:
Brendan wrote:If someone was giving you a free Ferrari and told you the car is capable of doing 340 km/h, would you tell them that you don't want the free Ferrari because sometimes you want to drive slower? If someone gave you a free OS that's capable of supporting 3D GUIs well, why would you immediately assume that every GUI for that OS must use all the 3D features all the time (and fail to understand that a GUI for that OS could also be made to look as crappy as some monochrome 2D GUI from 3 decades ago)?
OK, you're right. If I can use your GUI in the plain old 2D mode, I'm fine. Although I then don't understand why I should prefer your GUI over any other, since I can hardly imagine a practical use for the additional features.
That's not quite what I meant. Applications generate 3D windows (e.g. with raised buttons, recessed text boxes, etc; or with aliens trying to conquer the world), and the GUI can put those windows parallel to the screen (and all at the same distance from the screen) with no rotation or anything, and use ambient lighting only (no shadows or anything); so that it looks like a boring old 2D GUI (despite the fact that all of it would actually be 3D).

Of course just because this would be possible doesn't mean that it makes sense (it was just a silly example to satisfy someone being argumentative). You would want to use lighting and shadows and place windows at different distances from the screen so that it looks more modern (e.g. only slightly better than "2D failing to pretend to be 3D" like modern GUIs). However, because application's windows actually would be 3D (and not just 2D pretending to be 3D) it would look better (e.g. the edge of a shadow would follow the contours of the window it falls on) and you can do other things (e.g. like the "Aero Flip 3D" effect and other 3D compositing effects) properly instead of just making it obvious that "2D failing to pretend to be 3D" windows really are flat pieces of crap.

Basically, it does everything that all current modern GUIs are trying to do, but it actually does it properly; but it is not limited to that.

Of course you're used to seeing "2D failing to pretend to be 3D windows that really are flat pieces of crap", and you're suffering from an inability to imagine anything beyond what you've already seen. To help with this, imagine you've got this without any of the GUI's shadows:
[image: g1ns.png]
And then the GUI adds shadows, like this:
[image: g1s.png]
And the shadow from the top window follows the contours inside the game's window.
XanClic wrote:
Brendan wrote:Rather than thinking about how different pieces of software communicate (e.g. how applications communicate with GUIs, what sort of video driver API to use, etc) and trying to make the interfaces between these pieces better (e.g. more flexible); I should just gather random pieces of code that other people have written for completely different purposes and stitch them together in strange ways (e.g. slap code from the unreal developer's kit into things like text editors, widgets, GUIs, etc), and then hope that the result won't be a hideous abomination?
Um, no? You should just do what everyone else is doing, i.e., implement a "low-level" interface such as OpenGL and then use libraries for the rest? But I guess that's no solution to you, since you hate the idea of libraries. I love it. I guess that pretty much sums up why I'm very sceptical about a whole 3D/physics engine as part of the OS and I rather prefer the ability to choose from a variety of libraries to achieve my goal (or even using the low-level interface in the first place) and why you are of such a different opinion.
Because the best way to go beyond the limitations of current technology is to do exactly the same thing with exactly the same limitations?


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

Re: OS Graphics

Post by Brendan »

Hi,
rdos wrote:
Brendan wrote:Sooner or later someone has to demolish "hassle mountain" and fix the cause of the problem. That is what I'm trying to do.
I get the impression that you want to support all this diversity in your own way, which would make your designs incompatible and also extremely complex.
For things like "20 different file formats for graphics images" my way is actually less complex (or less bloated) because (for the majority of the OS) I only need to care about one standard file format for each purpose instead of lots of them for each purpose (and don't need libraries to deal with the pointless complexity of "lots of file formats").

Also, most people have forgotten about my "automatic file format converters", which would be used to convert files from unsupported file formats (e.g. PNG, "plain text", etc) into the corresponding native file format; so even though the majority of the OS doesn't need to support all the mess it doesn't necessarily make the OS or its applications incompatible with anything.
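As a rough illustration only (none of these names are part of any actual design), such a converter hook might look something like this:

#include <string>
#include <vector>

// Purely illustrative sketch of an "automatic file format converter".
struct FileData { std::vector<unsigned char> bytes; };

class FormatConverter {
public:
    virtual ~FormatConverter() = default;
    // e.g. returns true for ".png" if this converter understands PNG
    virtual bool canConvert(const std::string& extension) const = 0;
    // produce the single native file format used by the rest of the OS
    virtual FileData toNative(const FileData& foreign) const = 0;
};

// The OS would pick a matching converter automatically whenever a file in a foreign
// format is opened, so applications only ever deal with the native format.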
rdos wrote:I go about this problem in a different way. I select a reasonable standard and then I ignore other alternatives. For example I have decided to support IDE and AHCI, but not SCSI or anything proprietary. For the moment I have decided not to support UEFI because it is a mess written for shitty designs (but I might have to reconsider that if it becomes common enough). I support TCP/IP and common network cards, but won't bother with shitty WiFi designs that totally lack a reasonable standard. I don't support the shitty MS file sharing mess (probably never will), and instead force users to use FTP. I don't support any database standard since those are designed to gain monopoly. I won't implement PHP or .NET, but possibly Java, since the former standards are terrible. Instead I provide users with an interface so they can create dynamic content with C++.
I don't want to prevent people from writing device drivers that support (e.g.) SCSI, WiFi or proprietary devices - I can't influence the decisions hardware manufacturers make or influence the decisions software developers for other OSs make; but I can have complete control over how things are done in my OS.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

Re: OS Graphics

Post by XanClic »

Brendan wrote:To me, "photo-realistic rendering" means using as much processing time as you want to generate the highest quality image possible (typically using ray tracing, and possibly using a render farm). What we're talking about I'd call "real-time rendering" where you have to sacrifice image quality due to time constraints (e.g. because you want 60 frames per second rather than 1 frame every 4 hours).
Okay, if you want to. I'm referring to both as PRR because, to me, it specifies the intended result, but okay.
Brendan wrote:In both cases you try to implement a renderer that tries to follow the physical model of light.
No, you don't. Rasterization has very little to do with the physical model of light. That's also why it's so hard to implement correct shadows, reflection and refraction using rasterization. Ray tracing is also pretty different, since it inverts the physical model of light, although that is obviously much more closely related.

The only thing all 3D rendering processes have in common with reality is that they try solving the Rendering equation.
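For reference, the rendering equation in question is usually written as:

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i

where L_o is the light leaving point x in direction \omega_o, L_e is the emitted light, f_r is the BRDF, and the integral gathers incoming light L_i over the hemisphere \Omega around the surface normal n.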
Brendan wrote:For photo-realistic rendering you create a very accurate model of light
Basic ray tracing isn't what I'd call a very accurate model; only extensions such as Photon Mapping seem like they're pretty much directly derived from the physical model.
Brendan wrote:and for real-time rendering you skip a lot of things to save time (e.g. severely limit reflection and ignore refraction)
As I've said, you don't skip them, you implement a totally different model. Rasterization is just that: Rastering something which is not already rastered, in this case, this means using linear transformations in order to transform 3D to 2D coordinates and raster the space in between. Shading is just used to modify the transformation and to specify exactly how that space should be filled (the latter, fragment/pixel shading is what is commonly referred to when talking about shading in general). This model on its own is incapable of generating shadows, reflections or refractions; they are created using certain tricks (hard shadows: Shadow Volumes or Shadow Mapping; reflection: plain negative scaling at plane mirrors or environment maps; refraction: also environment maps) which all require one or more additional renderings, since a rasterizer only cares about one polygon at a time.
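As a minimal sketch of the extra pass that shadow mapping, for example, needs (all names here are illustrative):

// Illustration only: pass 1 renders depth from the light's point of view into 'shadowMap';
// pass 2 compares each shaded fragment against that stored depth.
struct Vec3 { float x, y, z; };

// 'fragLightSpace' is the fragment already projected into the light's clip space,
// with x/y in [-1,1] and z as the depth seen from the light (hypothetical convention).
float shadowFactor(const Vec3& fragLightSpace, const float* shadowMap, int mapSize)
{
    int u = (int)((fragLightSpace.x * 0.5f + 0.5f) * (mapSize - 1));
    int v = (int)((fragLightSpace.y * 0.5f + 0.5f) * (mapSize - 1));
    float nearestToLight = shadowMap[v * mapSize + u];  // depth written during pass 1
    float bias = 0.005f;                                // avoids "shadow acne" self-shadowing
    return (fragLightSpace.z - bias > nearestToLight) ? 0.0f : 1.0f;  // 0 = in shadow
}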

All in all I'd propose you don't even care about rasterizing. It's a process from a time where parallel computations weren't as common as they are today (basic rasterization is impossible to parallelize); thus, imho, ray tracing is the future, also for consumer GPUs (which we're already seeing through nvidia's libs for CUDA and some OpenCL implementations).
Brendan wrote:Basically, "photo-realistic rendering of reality" (what you seemed to be talking about) is very different to "real-time rendering of realistic or unrealistic scenes" (what games do).
As I've said, when I'm talking about PRR, I'm referring to the result, not the process.
Brendan wrote:You're right - it's worse than I thought (which was bad enough), and I really should do everything possible to make sure applications/games don't need to provide their own shaders.
I'd vote that -1 for Flamebait, but my posts probably aren't that much better.
Brendan wrote:only slightly better than "2D failing to pretend to be 3D" like modern GUIs
They try to pretend that? I never noticed. I always thought the shadows were mainly used for enhanced contrast without actually having to use thick borders.
Brendan wrote:it would look better (e.g. the edge of a shadow would follow the contours of the window it falls on)
Pretty cool, but I still fail to see the benefits in regard to the work required.
Brendan wrote:making it obvious that "2D failing to pretend to be 3D" windows really are flat pieces of crap.
Of course they are. Why would I ever want an actually raised button? But I guess that's just something I don't like and you do like. And as I've said, if I can always go back to my petty 2D windows, I'm okay with it.
Brendan wrote:Because the best way to go beyond the limitations of current technology is to do exactly the same thing with exactly the same limitations?
The only limitation I do see is that you can't have an integrated user experience including every application (e.g., shadows falling into a 3D application from the outside). And as I've said, this is a limitation I'm very glad about. I don't want my applications to unite, I want the windows which represent different applications to be completely separated.

Re: OS Graphics

Post by Brendan »

Hi,
XanClic wrote:
Brendan wrote:To me, "photo-realistic rendering" means using as much processing time as you want to generate the highest quality image possible (typically using ray tracing, and possibly using a render farm). What we're talking about I'd call "real-time rendering" where you have to sacrifice image quality due to time constraints (e.g. because you want 60 frames per second rather than 1 frame every 4 hours).
Okay, if you want to. I'm referring to both as PRR because, to me, it specifies the intended result, but okay.
Brendan wrote:In both cases you try to implement a renderer that tries to follow the physical model of light.
No, you don't. Rasterization has very little to do with the physical model of light. That's also why it's so hard to implement correct shadows, reflection and refraction using rasterization. Ray tracing is also pretty different, since it inverts the physical model of light, although that is obviously much more closely related.

The only thing all 3D rendering processes have in common with reality is that they try solving the Rendering equation.
Brendan wrote:For photo-realistic rendering you create a very accurate model of light
Basic ray tracing isn't what I'd call a very accurate model; only extensions such as Photon Mapping seem like they're pretty much directly derived from the physical model.
Brendan wrote:and for real-time rendering you skip a lot of things to save time (e.g. severely limit reflection and ignore refraction)
As I've said, you don't skip them, you implement a totally different model. Rasterization is just that: Rastering something which is not already rastered, in this case, this means using linear transformations in order to transform 3D to 2D coordinates and raster the space in between. Shading is just used to modify the transformation and to specify exactly how that space should be filled (the latter, fragment/pixel shading is what is commonly referred to when talking about shading in general). This model on its own is incapable of generating shadows, reflections or refractions; they are created using certain tricks (hard shadows: Shadow Volumes or Shadow Mapping; reflection: plain negative scaling at plane mirrors or environment maps; refraction: also environment maps) which all require one or more additional renderings, since a rasterizer only cares about one polygon at a time.
This is just being pointlessly pedantic. It's the same basic model of light that you're using as the basis for the renderer, regardless of how you implement the renderer (e.g. with ray tracing or rasterisation) and regardless of how much your implementation fails to model (e.g. refraction).
XanClic wrote:All in all I'd propose you don't even care about rasterizing. It's a process from a time where parallel computations weren't as common as they're today (basic rasterization is impossible to parallelize); thus, imho, ray tracing is the future, also for consumer GPUs (which we're already seeing through nvidia's libs for CUDA and some OpenCL implementations).
This is the sort of "awesome logic" you get from people that only ever call libraries/code that other people have implemented. It's trivial to do rasterization in parallel (e.g. split the screen into X sections and do all X sections in parallel).
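As a minimal sketch of that idea (the types and the per-triangle helper are stand-ins, not a real API), one thread per horizontal band would look something like:

#include <functional>
#include <thread>
#include <vector>

struct Triangle { /* vertices, colours, ... */ };
struct Scene { std::vector<Triangle> triangles; };
struct Framebuffer { int width = 0, height = 0; std::vector<unsigned> pixels; };

// Hypothetical helper: rasterizes one triangle, discarding pixels outside rows [y0, y1).
void rasterizeTriangleClipped(const Triangle&, Framebuffer&, int y0, int y1);

void rasterizeBand(const Scene& scene, Framebuffer& fb, int y0, int y1)
{
    for (const Triangle& tri : scene.triangles)
        rasterizeTriangleClipped(tri, fb, y0, y1);
}

void rasterizeParallel(const Scene& scene, Framebuffer& fb, int numThreads)
{
    // Assumes numThreads <= fb.height; bands cover disjoint rows, so no locking is needed.
    std::vector<std::thread> workers;
    int bandHeight = fb.height / numThreads;
    for (int i = 0; i < numThreads; ++i) {
        int y0 = i * bandHeight;
        int y1 = (i == numThreads - 1) ? fb.height : y0 + bandHeight;
        workers.emplace_back(rasterizeBand, std::cref(scene), std::ref(fb), y0, y1);
    }
    for (std::thread& t : workers)
        t.join();
}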

Also note that I couldn't care less how the video driver actually implements rendering (e.g. if they use rasterization, ray tracing, ray casting or something else). That's the video driver programmer's problem and has nothing to do with anything important (e.g. the graphics API that affects all applications, GUIs and video drivers).
XanClic wrote:
Brendan wrote:only slightly better than "2D failing to pretend to be 3D" like modern GUIs
They try to pretend that? I never noticed. I always thought the shadows were mainly used for enhanced contrast without actually having to use thick borders.
Most have, since Windows 95 or before (e.g. with "baked on" lighting/shadow). Don't take my word for it though - do a Google search for "ok button images" if you don't have any OS with any GUI that you can look at. Note: There is one exception that I can think of - Microsoft's "Metro".
XanClic wrote:
Brendan wrote:it would look better (e.g. the edge of a shadow would follow the contours of the window it falls on)
Pretty cool, but I still fail to see the benefits in regard to the work required.
Brendan wrote:making it obvious that "2D failing to pretend to be 3D" windows really are flat pieces of crap.
Of course they are. Why would I ever want an actually raised button? But I guess that's just something I don't like and you do like. And as I've said, if I can always go back to my petty 2D windows, I'm okay with it.
As a user you're currently happy with 2D windows, and if you saw it working you'd be even more happy with 3D windows. Of course you're more interested in trolling than discussing the merits of an alternative approach, so I doubt you'd willingly admit that.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

Re: OS Graphics

Post by Owen »

Brendan wrote:Hi,
XanClic wrote:
Brendan wrote:To me, "photo-realistic rendering" means using as much processing time as you want to generate the highest quality image possible (typically using ray tracing, and possibly using a render farm). What we're talking about I'd call "real-time rendering" where you have to sacrifice image quality due to time constraints (e.g. because you want 60 frames per second rather than 1 frame every 4 hours).
Okay, if you want to. I'm referring to both as PRR because, to me, it specifies the intended result, but okay.
Brendan wrote:In both cases you try to implement a renderer that tries to follow the physical model of light.
No, you don't. Rasterization has very little to do with the physical model of light. That's also why it's so hard to implement correct shadows, reflection and refraction using rasterization. Ray tracing is also pretty different, since it inverts the physical model of light, although that is obviously much more closely related.

The only thing all 3D rendering processes have in common with reality is that they try solving the Rendering equation.
Brendan wrote:For photo-realistic rendering you create a very accurate model of light
Basic ray tracing isn't what I'd call a very accurate model; only extensions such as Photon Mapping seem like they're pretty much directly derived from the physical model.
Brendan wrote:and for real-time rendering you skip a lot of things to save time (e.g. severely limit reflection and ignore refraction)
As I've said, you don't skip them, you implement a totally different model. Rasterization is just that: Rastering something which is not already rastered, in this case, this means using linear transformations in order to transform 3D to 2D coordinates and raster the space in between. Shading is just used to modify the transformation and to specify exactly how that space should be filled (the latter, fragment/pixel shading is what is commonly referred to when talking about shading in general). This model on its own is incapable of generating shadows, reflections or refractions; they are created using certain tricks (hard shadows: Shadow Volumes or Shadow Mapping; reflection: plain negative scaling at plane mirrors or environment maps; refraction: also environment maps) which all require one or more additional renderings, since a rasterizer only cares about one polygon at a time.
This is just being pointlessly pedantic. It's the same basic model of light that you're using as the basis for the renderer, regardless of how you implement the renderer (e.g. with ray tracing or rasterisation) and regardless of how much your implementation fails to model (e.g. refraction).
Except for lots of cases rasterization doesn't even attempt to approximate the model of light - just entirely fake it using unrelated methods.

It's completely bonkers to say that e.g. cube mapping (used a lot for reflections from static objects) has anything to do with the model of light. What does doing an inverse projection from a point onto the surface of a virtual cube have to do with light?

Brendan wrote:
XanClic wrote:All in all I'd propose you don't even care about rasterizing. It's a process from a time where parallel computations weren't as common as they're today (basic rasterization is impossible to parallelize); thus, imho, ray tracing is the future, also for consumer GPUs (which we're already seeing through nvidia's libs for CUDA and some OpenCL implementations).
This is the sort of "awesome logic" you get from people that only ever call libraries/code that other people have implemented. It's trivial to do rasterization in parallel (e.g. split the screen into X sections and do all X sections in parallel).

Also note that I couldn't care less how the video driver actually implements rendering (e.g. if they use rasterization, ray tracing, ray casting or something else). That's the video driver programmer's problem and has nothing to do with anything important (e.g. the graphics API that affects all applications, GUIs and video drivers).
XanClic wrote:
Brendan wrote:only slightly better than "2D failing to pretend to be 3D" like modern GUIs
They try to pretend that? I never noticed. I always thought the shadows were mainly used for enhanced contrast without actually having to use thick borders.
Most have, since Windows 95 or before (e.g. with "baked on" lighting/shadow). Don't take my word for it though - do a Google search for "ok button images" if you don't have any OS with any GUI that you can look at. Note: There is one exception that I can think of - Microsoft's "Metro".
The baked-on lighting and shadow is a depth hint, a technique used by artists to hint at depth in 2D shapes. They are used in order to give our brain an appropriate context as to what an image represents. As an example of why, compare Metro's edit box and button and try to determine which is which without external clues (e.g. size and shape) to see why this is done.

It's all about context, not about actual depth. Actual depth brings no productivity advantages (see Sun's 3D desktop as an example).
Brendan wrote:
XanClic wrote:
Brendan wrote:it would look better (e.g. the edge of a shadow would follow the contours of the window it falls on)
Pretty cool, but I still fail to see the benefits in regard to the work required.
Brendan wrote:making it obvious that "2D failing to pretend to be 3D" windows really are flat pieces of crap.
Of course they are. Why would I ever want an actually raised button? But I guess that's just something I don't like and you do like. And as I've said, if I can always go back to my petty 2D windows, I'm okay with it.
As a user you're currently happy with 2D windows, and if you saw it working you'd be even more happy with 3D windows. Of course you're more interested in trolling than discussing the merits of an alternative approach, so I doubt you'd willingly admit that.
You're the one who seems to be unable to convey the merits of an actually 3D desktop, particularly in an age where designs are heading away from skeuomorphic designs towards more "authentically digital" styling, and unnecessary 3D flourishes are being curtailed.

This is especially evident in Windows: from XP onwards, the interface has become more and more flat. From XP to 7, edit boxes are no longer sunken, and buttons only hint at being raised because of a gradient overlay. The content panes of the Vista+ file manager are no longer visibly sunken in the way they were in XP and earlier.

--

Brendan, I have a question for you: Without shaders, how does a developer implement things like night and thermal vision? Also, let's say there's some kind of overlay on the screen (e.g. the grain of the NV goggles), do shadows still fall into the scene?

Does anybody want shadows falling from windows into their games?

Re: OS Graphics

Post by Gigasoft »

Brendan wrote:For cartoon shading create "cartoon shaded textures" (or just use solid colour polygons instead of textures), and if you don't want lighting and shadows use ambient light and no other light sources.
Do you even know what cartoon shading is? Cartoon shading is not the same as making something look flat. It means that you are lighting something using a restricted colour palette. Although cartoon shading uses a texture, you can't just make "cartoon shaded textures" because shading depends on lighting. An object may rotate relative to the light source, lighting up different parts of it. Now, it is possible to implement cartoon shading with a traditional fixed function pipeline, but this requires that the application performs all vertex processing on the CPU. You would then actually use the calculated intensity as a coordinate for a texture that holds the cartoon colors. If you want to use hardware vertex processing, a shader program is required.
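As a rough sketch of that intensity-to-ramp lookup (all names are illustrative, and the normal and light direction are assumed to be normalized):

#include <algorithm>

// Compute a diffuse intensity, then use it as an index into a small ramp of flat
// "cartoon" colours, so the result is banded rather than smoothly shaded.
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

unsigned toonShade(const Vec3& normal, const Vec3& toLight,
                   const unsigned* ramp, int rampSize)
{
    float intensity = std::max(0.0f, dot(normal, toLight));            // N . L, clamped to [0,1]
    int index = std::min(rampSize - 1, (int)(intensity * rampSize));   // pick one of a few bands
    return ramp[index];                                                // e.g. dark / mid / light / highlight
}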
Brendan wrote:Artists (and not programmers) write shaders do they?
Yes. Art is not just textures and models. Shaders are part of the description of how things should look. Whether the artist tells a programmer "I want this object to be lit according to this formula here" or if he writes the shader himself in HLSL doesn't matter. No matter who writes it, a shader is the result of some artistic decision. You can't easily separate shaders from a game and change them blindly. Shader programs are basically a way to provide a developer with a means of coming up with some new interesting effect and telling the GPU how he wants it done. Most of the things a game does to display its output is specifically designed for that particular game. Want objects to cast shadows? Then you may use a shadow buffer, whose required resolution depends on the relative position of light sources, shadow casting objects and objects that receive shadows. Or you could use volumetric shadows, constructed using specially designed meshes that try to approximate the shadow of some object. All these tasks are best done by a human who possesses common sense and can see and evaluate the result. Sure, you can come up with a library that provides shaders for some common cases, or even constructs shaders intelligently according to some scheme, but the end result should not be the inability of a programmer to use programmable shaders for the purpose they were invented for, or the swapping of an effect with something else without the end user or the developer's knowledge.
Brendan wrote:For your system, if 10 different game developers all want a fancy shader for water then 10 different game engine developers need to write 10 different shaders (for 10 different video cards); while for my system the shaders would be part of the video driver and there'd only be 10 instead of 100 of them.
Wrong. Shader models abstract away the details of the GPU microcode instructions, so that if a shader is written for a specific shader model, it will work on all cards that support that shader model or higher. The game developer will then often only need to write a shader for the earliest shader model possible. The cases where multiple different shaders are needed depending on the video card arise from having to do something different on less advanced cards, and you can not easily foresee them. This is typically the result of doing something new and innovative, not from implementing another variant of phong/lambert/etc. shading.
Brendan wrote:For your system, once a game is released development stops and the end user is screwed. For my system the user benefits from any future improvements to the shaders built into their video driver and can upgrade to a completely different video card in 10 years time and still get the latest and greatest shader improvements
It is nice to have graphical enhancements, but they may not always be compatible with what the developer intended, so you have to be able to turn them off. For example, if an application uses unfiltered textures, you can't just go behind the application's back and decide to use filtered textures instead. Often it will turn out fine, but in many cases you'll see pixels the developers didn't intend for you to see, and everything will look much uglier than before.
Brendan wrote:This was an important benefit when the industry was young and evolving rapidly; but it's not as important now.
Innovation never gets old. Game design is not a copy and paste job. Would you take the rock texture you photographed 5 years ago and use it everywhere in all your games? Probably not. One day you might come up with a really cool-looking wooden floor texture. But you wouldn't go back to your old games and replace all instances of your rock with the new wooden floor. The same goes for shaders. They are not a "hassle" that you need to quickly get out of the programmer's way any more than textures are a hassle or models are a hassle.
Brendan wrote:As an alternative to your work-arounds for unspecified problems, I offer unspecified solutions and claim that my unspecified solutions are a superior way to solve your unspecified problems.
The first problem in question is edge marking, and it has to function exactly as I described it as I am emulating another system. The depth values used are the same as in the depth buffer, but at different locations. (And if I could use the depth buffer directly, which most video cards can't do due to how depth buffers are stored, it would not change the actual shader). The other problem is range fog. Focal widths are stored per pixel because the only way I can guess the focal width is by the projection matrix set by the emulated application, which can change multiple times during a frame. Of course, most other applications would never need to solve these problems, because they are different applications that need to solve different problems. To provide solutions for all these problems you'd need to write a generic problem solving oracle.

Re: OS Graphics

Post by Gigasoft »

Brendan wrote:However, because application's windows actually would be 3D (and not just 2D pretending to be 3D) it would look better (e.g. the edge of a shadow would follow the contours of the window it falls on)
When I use a GUI, I like to be able to think in 2D. There is a reason that TV remotes are mostly flat and not shaped like staircases or Christmas trees or vases. I don't want to have to think about judging distances and surface directions and having to move a window because some text is occluded behind a tall thing that is in the way.

The essential properties I like a button to have are: a clearly defined outline so I know where it begins and ends, easily readable text, some indication that it is supposed to be a button, and an indication of whether I am currently clicking on it or not. As an application designer, I also need to know that what the user sees is always readable, no matter how the window is positioned on the screen and even if a very bright explosion is simultaneously happening in a game on the same screen. These conditions are fulfilled just fine with standard 2D GUIs of today, no need to change it.

Re: OS Graphics

Post by Brendan »

Hi,
Owen wrote:
Brendan wrote:This is just being pointlessly pedantic. It's the same basic model of light that you're using as the basis for the renderer, regardless of how you implement the renderer (e.g. with ray tracing or rasterisation) and regardless of how much your implementation fails to model (e.g. refraction).
Except for lots of cases rasterization doesn't even attempt to approximate the model of light - just entirely fake it using unrelated methods.
That's still pointless/pedantic word games. The goal is still to make it look like (warning: the following is an example only) light goes in straight lines, is reflected by "things", and is blocked by "things". How this is implemented (and whether or not the actual implementation bears any similarity to the model it's trying to emulate) is completely and utterly irrelevant.
Owen wrote:
Brendan wrote:
XanClic wrote:They try to pretend that? I never noticed. I always thought the shadows were mainly used for enhanced contrast without actually having to use thick borders.
Most have since Windows95 or before (e.g. with "baked on" lighting/shadow). Don't take my word for it though - do a google search for "ok button images" if you don't have any OS with any GUI that you can look at. Note: There is one exception that I can think of - Microsoft's "Metro".
The baked on lighting and shadow is a depth hint. A technique used by artists to hint at depth in 2D shapes. They are used in order to give our brain an appropriate context as to what an image represents. As an example of why, compare Metro's edit box and button and try and determine which is which without external clues (E.G. size and shape) to see why this is done.
Sure, it's a technique used by artists to hint at depth in 2D shapes because it's important, but the graphics APIs used by applications suck and artists can't do it properly (in ways that maintain a consistent illusion of depth when the window isn't parallel to the screen, or when there are shadows, or when the light source isn't where the artist expected, or...).
Owen wrote:It's all about context, not about actual depth. Actual depth brings no productivity advantages (see Sun's 3D desktop as an example).
Sun's Project Looking Glass, and Microsoft's "Aero Flip 3D", just about everything Compiz does, and every attempt at "3D GUI effects" I've seen; all look worse than they should because the graphics they're working on (application's windows) are created as "2D pixels". This isn't limited to just the lack of actual depth (as opposed to "painted on depth") - it causes other problems related to scaling (e.g. where things like thin horizontal lines that look nice in the original window end up looking like crap when that window isn't parallel to the screen).

Now; time for a history lesson. Once upon a time applications displayed (mostly) ASCII text, and people thought it was efficient (it was) and I'm sure there were some people that thought it was good enough (who am I kidding - there's *still* people that think it's good enough). Then some crazy radical said "Hey, let's try 2D graphics for applications!". When that happened people invented all sorts of things that weren't possible with ASCII (icons, widgets, pictures, WYSIWYG word processors, etc) and these new things were so useful that it'd be hard to imagine living without them now that they've become so common. Then more crazy radicals said "Hey, let's try 3D graphics!" and some of them tried retro-fitting 3D to GUIs, and it sucks because they're only looking at the GUI and aren't changing the "2D graphics" API that applications use. Because they're not changing the API that applications use it doesn't look as good as it should (which is only a minor problem really), but more importantly, no applications developers are able to invent new uses for 3D that couldn't have existed before (as applications are still stuck with 2D APIs) so nobody can see the point of bothering.

If applications did use a 3D API; then what sort of things might people invent? How about spreadsheet software that generates 3D charts, like this:
[image: example 3D chart]
But so the user can grab the chart and rotate it around?

How about an image editor where you use something like this to select a colour (and actually *can* select a colour from the middle!):
[image: example 3D colour selector]

How about a (word processor, PDF, powerpoint) document with a "picture" of the side of a car embedded in it; where you can grab that car, rotate it around and look at it from the front?

Now think about what happens when we're using 3D display technology instead of 2D monitors (it will happen, it's just that nobody knows when).

Ok, that's the end of the history lesson. Let's have "story time". Bob and Jim are building a picket fence. They've got the posts in the ground, bolted rails onto them, worked out the spacing for the vertical slats, and they've just started nailing them onto the rails. Bob is working at one end and Jim is working at the other, and every time Bob hammers in a nail he slaps his face. Jim watches this for a while and eventually his curiosity starts getting to him; so he asks Bob why he keeps slapping his face. Bob thinks about it and says "I don't know, that's just how my father taught me to hammer nails.". Now they're both thinking about it. Bob's father arrives to see how they're going with the fence and they ask Bob's father about it. Bob's father replies "I don't know either, that's just how Bob's grandfather taught me how to hammer nails". They decide to go and see Bob's grandfather. When they ask Bob's grandfather about it he nearly dies laughing. Bob's grandfather explains to them that he taught Bob's father how to hammer nails when they were fixing a boat shed on a warm evening in the middle of summer; and there were mosquitoes everywhere!

The "modern" graphics APIs that applications use only do 2D because older graphics APIs only did 2D, because really old graphics APIs only did 2D, because computers were a lot less powerful in 1980. Note: I know I'm exaggerating - there's other reasons for it too, like not wanting to break compatibility, and leaky abstractions (shaders, etc) that make 3D a huge hassle for normal applications.
Owen wrote:Brendan, I have a question for you: Without shaders, how does a developer implement things like night and thermal vision? Also, lets say there's some kind of overlay on the screen (e.g. the grain of the NV goggles), do shadows still fall into the scene?
If the application says there's only ambient light and no light sources, then there can't be any shadows. For thermal vision you'd use textures with different colours (e.g. red/yellow/white). For night vision you could do the same (different textures) but the video driver could provide a standard shader for that (e.g. find the sum of the 3 primary colours and convert to green/white) to avoid different textures.
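As a minimal sketch of what such a standard night vision shader might do (purely illustrative, operating on one pixel at a time):

// Collapse each pixel to a brightness value (the scaled sum of the three primaries,
// as described above), map it onto green, and let the brightest areas wash out to white.
struct Pixel { unsigned char r, g, b; };

Pixel nightVision(Pixel in)
{
    int brightness = (in.r + in.g + in.b) / 3;                     // 0..255
    Pixel out;
    out.g = (unsigned char)brightness;                             // green carries the detail
    int white = brightness > 200 ? (brightness - 200) * 4 : 0;     // only very bright pixels
    out.r = out.b = (unsigned char)(white > 255 ? 255 : white);
    return out;
}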

For whether or not shadows from outside the application/game fall into the application/game's scene; first understand that it'd actually be light falling in from outside - that example picture I slapped together earlier is wrong (I didn't think about it much until after I'd created those pictures, and was too lazy to go back and make the "before GUI does shadows" picture into a darker "before GUI does lighting" picture).

Next, I've been simplifying a lot. In one of my earlier posts I said more than I was intending to (thankfully nobody noticed) and since then I've just been calling everything "textures", because (to be perfectly honest) I'm worried that if I start trying to describe yet another thing that is "different to what every other OS does" it's just going to confuse everyone more. What I said was this:

"I expect to create a standard set of commands for describing the contents of (2D) textures and (3D) volumes; where the commands say where (relative to the origin of the texture/volume being described) different things (primitive shapes, other textures, other volumes, lights, text, etc) should be, and some more commands set attributes (e.g. ambient light, etc). Applications create these lists of commands (but do none of the rendering)."

The key part of this is "(3D) volumes".

For the graphics that you are all familiar with, the renderer only ever converts a scene into a 2D texture. Mine is different - the same "list of commands" could be converted into a 2D texture or converted into a 3D volume. For example; an application might create a list of commands describing its window's contents and convert it into a 3D volume, and the GUI creates a list of commands that says where to put the application's 3D volume within its scene.

To convert a list of commands into a 3D volume, the renderer rotates/translates/scales "world co-ords" according to the camera's position and sets the clipping planes for the volume. It doesn't draw anything. When the volume is used by another list of commands, all the renderer does is merge the volume's list of commands with the parent's list of commands. Nothing is actually drawn until/unless a list of commands is converted into a 2D texture (e.g. for the screen). That's how you end up with light from outside the application's window affecting the application's scene. It also means that (e.g.) if the application creates a 3D volume showing the front of a sailing ship, then the GUI can place that volume in its scene rotated around so that the user sees the back of the sailing ship.
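A very rough sketch of what such a "list of commands" and the merging step might look like (every name here is illustrative, not the actual API):

#include <memory>
#include <string>
#include <vector>

struct Matrix4 { float m[16]; };

struct Volume;   // forward declaration

struct Command {
    enum Kind { SetAmbient, PlacePrimitive, PlaceLight, PlaceText, PlaceVolume } kind;
    Matrix4 transform;                       // where, relative to the texture/volume origin
    std::string payload;                     // primitive/text/texture reference (illustrative)
    std::shared_ptr<Volume> volume;          // only used when kind == PlaceVolume
};

// A "3D volume" is just the child's command list plus the camera transform and clipping
// planes worked out when it was created; nothing has been drawn at this point.
struct Volume {
    std::vector<Command> commands;
    Matrix4 cameraTransform;
    float clip[6];                           // left/right/bottom/top/near/far
};

// "Using" a volume inside a parent list only splices the child's commands in; actual
// rendering only happens when some list is finally converted into a 2D texture.
void mergeVolume(std::vector<Command>& parent, const Command& placeVolume)
{
    for (const Command& child : placeVolume.volume->commands) {
        Command copy = child;
        // (combining 'copy.transform' with 'placeVolume.transform' omitted for brevity)
        parent.push_back(copy);
    }
}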

However; the same "list of commands" could be converted into a 2D texture or converted into a 3D volume; which means that the application's "list of commands" could be converted into a 2D texture. In this case the GUI only sees a 2D texture, light from the GUI can't affect anything inside that 2D texture, and if the GUI tries to rotate the texture around you'll never see the back of the sailing ship. Normal applications have no reason to do this; and a normal application's "list of commands" would be a 3D volume.

For 3D games it depends on the nature of the game. Something like Minecraft probably should be a 3D volume; but for something like a 1st person shooter it'd allow the player to cheat (e.g. rotate the game's window to see around a corner without getting shot at) and these games should be rendered as a texture.

Of course a 3D game could create a list of commands describing a scene that is (eventually) converted into a 2D texture, and then create another list of commands that puts that 2D texture into a 3D volume. That way the game can have (e.g.) a 3D "heads up display" in front.

Also note that these 3D volumes can be used multiple times. For example, a game might have a list of commands describing a car, then have another list of commands that places the 3D volume containing the car in 20 different places.

Now if you're a little slow you might be thinking that all this sounds fantastic, or too hard/complicated, or whatever. If you're a little smarter than that you'll recognise it for what it is (hint: OpenGL display lists). ;)
Owen wrote:Does anybody want shadows falling from windows into their games?
After reading the 2 paragraphs immediately above this one, you should be able to figure out the answer to that question yourself. 8)


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

Re: OS Graphics

Post by Brendan »

Hi,
Gigasoft wrote:
Brendan wrote:For cartoon shading create "cartoon shaded textures" (or just use solid colour polygons instead of textures), and if you don't want lighting and shadows use ambient light and no other light sources.
Do you even know what cartoon shading is? Cartoon shading is not the same as making something look flat. It means that you are lighting something using a restricted colour palette. Although cartoon shading uses a texture, you can't just make "cartoon shaded textures" because shading depends on lighting. An object may rotate relative to the light source, lighting up different parts of it. Now, it is possible to implement cartoon shading with a traditional fixed function pipeline, but this requires that the application performs all vertex processing on the CPU. You would then actually use the calculated intensity as a coordinate for a texture that holds the cartoon colors. If you want to use hardware vertex processing, a shader program is required.
For "cartoon shaded textures" I was thinking more like static shadows built into the scene.

I guess (if I must) I could provide a standard shader for it and have a generic "how much to posterize" command. That'd double the number of shaders though (night vision and now cartoon). :)
Gigasoft wrote:
Brendan wrote:Artists (and not programmers) write shaders do they?
Yes. Art is not just textures and models. Shaders are part of the description of how things should look. Whether the artist tells a programmer "I want this object to be lit according to this formula here" or if he writes the shader himself in HLSL doesn't matter. No matter who writes it, a shader is the result of some artistic decision. You can't easily separate shaders from a game and change them blindly. Shader programs are basically a way to provide a developer with a means of coming up with some new interesting effect and telling the GPU how he wants it done. Most of the things a game does to display its output is specifically designed for that particular game. Want objects to cast shadows? Then you may use a shadow buffer, whose required resolution depends on the relative position of light sources, shadow casting objects and objects that receive shadows. Or you could use volumetric shadows, constructed using specially designed meshes that try to approximate the shadow of some object. All these tasks are best done by a human who possesses common sense and can see and evaluate the result. Sure, you can come up with a library that provides shaders for some common cases, or even constructs shaders intelligently according to some scheme, but the end result should not be the inability of a programmer to use programmable shaders for the purpose they were invented for, or the swapping of an effect with something else without the end user or the developer's knowledge.
Ok, now I'm convinced - I'll provide standard shaders in the video drivers and anyone that's not happy with that can just go and get fuchsia bread to eat while they find a crappy OS full of idiotic complexities and compatibility problems.
Gigasoft wrote:
Brendan wrote:As an alternative to your work-arounds for unspecified problems, I offer unspecified solutions and claim that my unspecified solutions are a superior way to solve your unspecified problems.
The first problem in question is edge marking, and it has to function exactly as I described it as I am emulating another system. The depth values used are the same as in the depth buffer, but at different locations. (And if I could use the depth buffer directly, which most video cards can't do due to how depth buffers are stored, it would not change the actual shader).
Ok, I won't be emulating other systems (I lean more towards "let the world burn!" ;) ).
Gigasoft wrote:The other problem is range fog. Focal widths are stored per pixel because the only way I can guess the focal width is by the projection matrix set by the emulated application, which can change multiple times during a frame. Of course, most other applications would never need to solve these problems, because they are different applications that need to solve different problems. To provide solutions for all these problems you'd need to write a generic problem solving oracle.
I wasn't sure what you meant by "range fog" (I thought almost all games have been doing it for as long as I can remember as a way to limit viewing distance), so I googled it. The first link I clicked on was this discussion. I read it and learnt the following things (a rough sketch of the difference follows the list):
  • OpenGL supports it but does it wrong (uses "distance to screen" rather than "distance to camera"? I don't know)
  • DirectX supports it and does it right with no shader needed at all
  • The person asking how to do it found out how ugly it is and gave up (decided to just use the fixed function pipeline)
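Here is that sketch (illustrative names only): "planar" fog uses just the depth along the view axis, so fog density changes as the camera turns; "range" fog uses the true distance to the camera, so it doesn't.

#include <cmath>

struct Vec3 { float x, y, z; };

static float clamp01(float f) { return f < 0.0f ? 0.0f : (f > 1.0f ? 1.0f : f); }

// Both return 1.0 for no fog and 0.0 for fully fogged; 'fragEyeSpace' is the fragment
// position in eye space (camera at the origin, looking down -z).
float fogFactorPlanar(const Vec3& fragEyeSpace, float fogStart, float fogEnd)
{
    float d = -fragEyeSpace.z;                          // depth along the view axis only
    return clamp01((fogEnd - d) / (fogEnd - fogStart));
}

float fogFactorRange(const Vec3& fragEyeSpace, float fogStart, float fogEnd)
{
    float d = std::sqrt(fragEyeSpace.x * fragEyeSpace.x +
                        fragEyeSpace.y * fragEyeSpace.y +
                        fragEyeSpace.z * fragEyeSpace.z);   // true distance to the camera
    return clamp01((fogEnd - d) / (fogEnd - fogStart));
}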

Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

Re: OS Graphics

Post by dozniak »

Brendan wrote:I guess (if I must) I could provide a standard shader for it and have a generic "how much to posterize" command. That'd double the number of shaders though (night vision and now cartoon). :)
It's gonna triple and then quadruple pretty soon(ish).
Learn to read.
User avatar
Brendan
Member
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: OS Graphics

Post by Brendan »

Hi,
dozniak wrote:
Brendan wrote:I guess (if I must) I could provide a standard shader for it and have a generic "how much to posterize" command. That'd double the number of shaders though (night vision and now cartoon). :)
It's gonna triple and then quadruple pretty soon(ish).
More realistically, it's going to take 10 years before the OS does basic software rendered graphics, and 15 years or more before anyone writes any native/accelerated video driver, and in 20 years it might have 3 shaders (if I'm lucky). After that maybe I'll add a new type of shader to the standard every 3 years.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
User avatar
Owen
Member
Member
Posts: 1700
Joined: Fri Jun 13, 2008 3:21 pm
Location: Cambridge, United Kingdom
Contact:

Re: OS Graphics

Post by Owen »

Brendan wrote:
Owen wrote:
Brendan wrote:Most have since Windows95 or before (e.g. with "baked on" lighting/shadow). Don't take my word for it though - do a google search for "ok button images" if you don't have any OS with any GUI that you can look at. Note: There is one exception that I can think of - Microsoft's "Metro".
The baked-on lighting and shadow is a depth hint, a technique used by artists to hint at depth in 2D shapes. They are used to give our brain an appropriate context as to what an image represents. As an example of why, compare Metro's edit box and button and try to determine which is which without external clues (e.g. size and shape).
Sure, it's a technique used by artists to hint at depth in 2D shapes because it's important, but the graphics APIs used by applications suck and artists can't do it properly (in ways that maintain a consistent illusion of depth when the window isn't parallel to the screen, or when there are shadows, or when the light source isn't where the artist expected, or...).
But they don't want a "consistent illusion of depth". If they did, things like buttons would still look like they did in the "Windows Classic" theme - which is about the last time buttons actually looked raised as opposed to these shaded things which have no real illusion of depth.

Most of the depth being "painted on" is vestigial. Look at Android Holo, for example; I only see two widgets with any hints at it (the button and the action bar)... and Holo is perhaps the most skeuomorphic of the smartphone platforms these days (considering both iOS 7 and Windows Phone are very much in the minimalist flat digital camp).
Brendan wrote:
Owen wrote:It's all about context, not about actual depth. Actual depth brings no productivity advantages (see Sun's 3D desktop as an example)
Sun's Project Looking Glass, Microsoft's "Aero Flip 3D", just about everything Compiz does, and every attempt at "3D GUI effects" I've seen all look worse than they should because the graphics they're working on (applications' windows) are created as "2D pixels". This isn't limited to just the lack of actual depth (as opposed to "painted on depth") - it causes other problems related to scaling (e.g. where things like thin horizontal lines that look nice in the original window end up looking like crap when that window isn't parallel to the screen).

Now; time for a history lesson. Once upon a time applications displayed (mostly) ASCII text, and people thought it was efficient (it was) and I'm sure there were some people that thought it was good enough (who am I kidding - there are *still* people that think it's good enough). Then some crazy radical said "Hey, let's try 2D graphics for applications!". When that happened people invented all sorts of things that weren't possible with ASCII (icons, widgets, pictures, WYSIWYG word processors, etc) and these new things were so useful that it'd be hard to imagine living without them now that they've become so common. Then more crazy radicals said "Hey, let's try 3D graphics!" and some of them tried retro-fitting 3D to GUIs, and it sucks because they're only looking at the GUI and aren't changing the "2D graphics" API that applications use. Because they're not changing the API that applications use it doesn't look as good as it should (which is only a minor problem really), but more importantly, no application developers are able to invent new uses for 3D that couldn't have existed before (as applications are still stuck with 2D APIs), so nobody can see the point of bothering.
Hey, I don't disagree about the utility of easy 3D APIs. I do completely agree that we are presently stuck in a bit of a rut where the only GUI toolkit I can think of with any kind of easy 3D API is Qt.
Brendan wrote:If applications did use a 3D API; then what sort of things might people invent? How about spreadsheet software that generates 3D charts, like this:
[image: example of a 3D chart]
But so the user can grab the chart and rotate it around?

How about an image editor where you use something like this to select a colour (and actually *can* select a colour from the middle!):
[image: example of a 3D colour picker]

How about a (word processor, PDF, powerpoint) document with a "picture" of the side of a car embedded in it; where you can grab that car, rotate it around and look at it from the front?
If the APIs were there, people would be doing this today. Hell, we are already seeing this on the web, where people are using simple scene graph libraries implemented in JavaScript to support simple 3D scenes on top of OpenGL.
Brendan wrote:Now think about what happens when we're using 3D display technology instead of 2D monitors (it will happen, it's just that nobody knows when).
Thinking about how best to support volumetric display technology will be best done when we have some experience with it. Note that most interfaces on real, 3-dimensional products are 2D. When they are 3D, it's mostly for tactile reasons (e.g. buttons on the side of phones).
Brendan wrote:
Owen wrote:Brendan, I have a question for you: Without shaders, how does a developer implement things like night and thermal vision? Also, let's say there's some kind of overlay on the screen (e.g. the grain of the NV goggles), do shadows still fall into the scene?
If the application says there's only ambient light and no light sources, then there can't be any shadows. For thermal vision you'd use textures with different colours (e.g. red/yellow/white). For night vision you could do the same (different textures) but the video driver could provide a standard shader for that (e.g. find the sum of the 3 primary colours and convert to green/white) to avoid different textures.
Nobody implements thermal vision like that, nor wants to do so. Thermal vision is normally implemented by using a single-channel texture with the "heat" of the object embedded within it. The shader looks that up on a colour ramp to determine the actual display colour - after doing things like adjusting for distance, which can't be efficiently done under your model.

It might then want to add things like noise.
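A minimal sketch of the heat-texture approach described above, assuming a black/red/yellow/white ramp and a simple exponential distance falloff (both chosen here for illustration, not taken from any real engine; all names are hypothetical):

Code: Select all

// Each texel stores a single "heat" channel; a shader-style function maps it
// through a colour ramp after a simple distance adjustment.
#include <algorithm>
#include <cmath>

struct Colour { float r, g, b; };

// Black -> red -> yellow -> white ramp, heat in 0..1.
Colour heatRamp(float heat) {
    heat = std::clamp(heat, 0.0f, 1.0f);
    return { std::min(1.0f, heat * 3.0f),
             std::clamp(heat * 3.0f - 1.0f, 0.0f, 1.0f),
             std::clamp(heat * 3.0f - 2.0f, 0.0f, 1.0f) };
}

// Per-"fragment" thermal colour: attenuate the stored heat with distance, then ramp it.
Colour thermalColour(float storedHeat, float distanceToCamera, float falloff = 0.02f) {
    float apparentHeat = storedHeat * std::exp(-falloff * distanceToCamera);
    return heatRamp(apparentHeat);
}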

For toon shading: you forgot the outlining shader.

NOTE: I don't happen to be a fan of the current shader distribution model. I think the source-based system is a bit crap, and we would be better off with a standard bytecode and bytecode distribution. 3D engines like Unreal 3 and Ogre3D (which I'm presently working on) implement systems which build shaders based upon defined material properties. In particular, Unreal lets artists create reasonably well-optimized shaders by snapping together premade blocks in a graph (but still allows the developer to add their own blocks for their needs, or write the shader from scratch). I'd much rather we got a bytecode to work with - it would make implementing our runtime-generated shader system so much easier, and would mean we weren't generating text for the compiler to immediately parse. However, bytecode shaders > text shaders > no shaders.
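A very rough sketch of that "build a shader from material properties" idea, assuming a feature-flag structure and GLSL-ish snippet strings; this is not Unreal's or Ogre3D's actual code, and the uniform and helper names inside the strings (diffuseTex, normalTex, lightFromNormalMap, fogColour, fogFactor, uv) are made up:

Code: Select all

// Feature flags pick premade snippets which are concatenated into
// fragment-shader source text for the compiler to consume.
#include <string>

struct MaterialDesc {
    bool diffuseMap = true;
    bool normalMap  = false;
    bool fogEnabled = false;
};

std::string buildFragmentShaderSource(const MaterialDesc& m) {
    std::string src = "vec4 shade() {\n"
                      "    vec4 colour = vec4(1.0);\n";
    if (m.diffuseMap)
        src += "    colour *= texture(diffuseTex, uv);\n";
    if (m.normalMap)
        src += "    colour.rgb *= lightFromNormalMap(normalTex, uv);\n";
    if (m.fogEnabled)
        src += "    colour.rgb = mix(fogColour, colour.rgb, fogFactor);\n";
    src += "    return colour;\n}\n";
    return src;
}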
Brendan wrote:For whether or not shadows from outside the application/game fall into the application/game's scene; first understand that it'd actually be light falling in from outside - that example picture I slapped together earlier is wrong (I didn't think about it much until after I'd created those pictures, and was too lazy to go back and make the "before GUI does shadows" picture into a darker "before GUI does lighting" picture).

Next, I've been simplifying a lot. In one of my earlier posts I said more than I was intending to (thankfully nobody noticed) and since then I've just been calling everything "textures", because (to be perfectly honest) I'm worried that if I start trying to describe yet another thing that is "different to what every other OS does" it's just going to confuse everyone more. What I said was this:

"I expect to create a standard set of commands for describing the contents of (2D) textures and (3D) volumes; where the commands say where (relative to the origin of the texture/volume being described) different things (primitive shapes, other textures, other volumes, lights, text, etc) should be, and some more commands set attributes (e.g. ambient light, etc). Applications create these lists of commands (but do none of the rendering)."

The key part of this is "(3D) volumes".

For the graphics that you are all familiar with, the renderer only ever converts a scene into a 2D texture. Mine is different - the same "list of commands" could be converted into a 2D texture or converted into a 3D volume. For example; an application might create a list of commands describing its window's contents and convert it into a 3D volume, and the GUI creates a list of commands that says where to put the application's 3D volume within its scene.

To convert a list of commands into a 3D volume, the renderer rotates/translates/scales "world co-ords" according to the camera's position and sets the clipping planes for the volume. It doesn't draw anything. When the volume is used by another list of commands, all the renderer does is merge the volume's list of commands with the parent's list of commands. Nothing is actually drawn until/unless a list of commands is converted into a 2D texture (e.g. for the screen). That's how you end up with light from outside the application's window affecting the application's scene. It also means that (e.g.) if the application creates a 3D volume showing the front of a sailing ship, then the GUI can place that volume in its scene rotated around so that the user sees the back of the sailing ship.
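A minimal sketch of one possible reading of this, with all types and names hypothetical (CommandList, placeVolume, renderToTexture, etc.); the point is that placing a volume only records a transform and a reference, and nothing is rasterised until a list is converted into a 2D texture:

Code: Select all

#include <memory>
#include <utility>
#include <vector>

struct Matrix4   { float m[16]; };
struct Texture2D { int width, height; std::vector<unsigned> pixels; };
struct Command   { int opcode; std::vector<float> args; };   // a primitive, light, text run, ...

struct CommandList {
    std::vector<Command> commands;   // this list's own content
    std::vector<std::pair<Matrix4, std::shared_ptr<CommandList>>> volumes; // embedded 3D volumes
};

// "Convert to a 3D volume": nothing is rendered; the child list is simply
// referenced with a transform, so the parent's lighting can still reach into it.
void placeVolume(CommandList& parent, const Matrix4& transform,
                 std::shared_ptr<CommandList> child) {
    parent.volumes.push_back({transform, std::move(child)});
}

// "Convert to a 2D texture": the only step that actually draws. After this the
// result is opaque pixels, and outside light can no longer affect its contents.
Texture2D renderToTexture(const CommandList& scene, int width, int height) {
    Texture2D out{width, height, std::vector<unsigned>(width * height, 0u)};
    // Walk scene.commands, then recurse into each entry of scene.volumes with its
    // transform applied, rasterising into out.pixels; the actual drawing is elided.
    (void)scene;
    return out;
}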

However; the same "list of commands" could be converted into a 2D texture or converted into a 3D volume; which means that the application's "list of commands" could be converted into a 2D texture. In this case the GUI only sees a 2D texture, light from the GUI can't affect anything inside that 2D texture, and if the GUI tries to rotate the texture around you'll never see the back of the sailing ship. Normal applications have no reason to do this; and a normal application's "list of commands" would be a 3D volume.

For 3D games it depends on the nature of the game. Something like Minecraft probably should be a 3D volume; but for something like a first-person shooter it'd allow the player to cheat (e.g. rotate the game's window to see around a corner without getting shot at), so these games should be rendered as a texture.

Of course a 3D game could create a list of commands describing a scene that is (eventually) converted into a 2D texture, and then create another list of commands that puts that 2D texture into a 3D volume. That way the game can have (e.g.) a 3D "heads up display" in front.

Also note that these 3D volumes can be used multiple times. For example, a game might have a list of commands describing a car, then have another list of commands that places the 3D volume containing the car in 20 different places.

Now if you're a little slow you might be thinking that all this sounds fantastic, or too hard/complicated, or whatever. If you're a little smarter than that you'll recognise it for what it is (hint: OpenGL display lists). ;)
No, what you're describing are not OpenGL display lists.

Display lists are very context-dependent (and generally crappy, because they're so old and were designed before many extensions were added to OpenGL; in particular, they date back to when 3D acceleration looked nothing like it does today).

Display lists in concept are just a pre-baked set of commands that can be immediately fed to the GPU (in practice they're not, because they're too high level - they were intended to be replayed by the rendering server back when OpenGL worked like that). Yes, you can bake the commands to render an object or set thereof into a display list, and using the fixed-function pipeline rotate it and set the appropriate transformations for the camera (this is more difficult using the modern programmable pipeline - indeed, it's not supported by the hardware at all).
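For readers who haven't used them, this is roughly what the legacy display-list API looks like; it assumes a current OpenGL context created elsewhere (GLUT, GLFW, etc.) and uses the old fixed-function calls, which are deprecated in modern OpenGL:

Code: Select all

#include <GL/gl.h>

// Record a triangle into a display list instead of drawing it immediately.
GLuint buildTriangleList() {
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);          // record the commands
    glBegin(GL_TRIANGLES);
    glColor3f(1.0f, 0.0f, 0.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
    glEndList();
    return list;
}

// The same pre-baked commands replayed under two different transforms.
void drawTwice(GLuint list) {
    glPushMatrix();
    glTranslatef(-2.0f, 0.0f, -5.0f);
    glCallList(list);
    glPopMatrix();

    glPushMatrix();
    glTranslatef( 2.0f, 0.0f, -5.0f);
    glRotatef(45.0f, 0.0f, 1.0f, 0.0f);
    glCallList(list);
    glPopMatrix();
}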

In fact, OpenGL does let you build a "hierarchical set" of display lists (this is one of the reasons why they suck). So what you're describing is indeed possible with OpenGL display lists... but if you actually did this, you'd find that the performance sucked, because the GPU tries to render everything, even the things that are invisible.

So what you're describing is actually a scene graph (because that's what hierarchical display lists turn into, and what you end up with when you try to cull portions of them). Whether a scene graph is appropriate depends a lot on the type of game.

FPSes, for example, are normally implemented using a loose scene graph - you might have one or two layers of nesting (e.g. a weapon attached to the player).

But it's wholly inappropriate for one of your examples - Minecraft: How do you describe such a world as a series of objects? Do you make every block an object? (The answer is no, because one chunk would be 2 million of them.) So games like Minecraft use more specialized approaches (Minecraft bakes most of a chunk's objects into one mesh, for example).

Then there's culling. Older game engines (mostly Quake-derived) use binary space partitioning. It's conceptually a very simple system, and back when overdraw was expensive it was perfect (BSP has no overdraw); but today it's inappropriate (overdraw is less expensive than walking the BSP tree) and it requires heavy pre-processing (so it isn't usable for dynamic objects). Some engines use octrees. They have the best scaling of all the methods usable for 3D graphics in terms of the number of objects in the scene, yet in practice they, and all tree-based systems, suck for realistic scene sizes: yes, they scale up, but they don't scale down.

So systems have been moving away from hierarchical culling, to the point where many games with massive open worlds today just use a simple array of every object in the scene (e.g. Just Cause 2).
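A minimal sketch of that "flat array" approach, assuming bounding spheres and a six-plane view frustum; all names (SceneObject, cullScene, etc.) are hypothetical:

Code: Select all

// No hierarchy at all: just test every object's bounding sphere against the
// view frustum every frame.
#include <vector>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };          // dot(n, p) + d >= 0 means "inside"
struct SceneObject { Vec3 centre; float radius; /* mesh, material, ... */ };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

bool sphereInFrustum(const Plane frustum[6], const Vec3& c, float r) {
    for (int i = 0; i < 6; ++i)
        if (dot(frustum[i].n, c) + frustum[i].d < -r)
            return false;                   // completely behind one plane: culled
    return true;
}

std::vector<const SceneObject*> cullScene(const std::vector<SceneObject>& objects,
                                          const Plane frustum[6]) {
    std::vector<const SceneObject*> visible;
    for (const SceneObject& o : objects)    // brute force over a flat array
        if (sphereInFrustum(frustum, o.centre, o.radius))
            visible.push_back(&o);
    return visible;
}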

You're essentially implying your OS/graphics driver becomes the 3D engine without it having any understanding of the actual needs of the scene type underneath.
User avatar
Brendan
Member
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: OS Graphics

Post by Brendan »

Hi,
Owen wrote:
Brendan wrote:Sure, it's a technique used by artists to hint at depth in 2D shapes because it's important, but the graphics APIs used by applications suck and artists can't do it properly (in ways that maintain a consistent illusion of depth when the window isn't parallel to the screen, or when there are shadows, or when the light source isn't where the artist expected, or...).
But they don't want a "consistent illusion of depth". If they did, things like buttons would still look like they did in the "Windows Classic" theme - which is about the last time buttons actually looked raised as opposed to these shaded things which have no real illusion of depth.
I see these 3D hints almost everywhere (but I haven't used a mobile phone for about 10 years). The important thing is that I like the 3D hints in GUIs (raised buttons that sink when they're pressed, scroll bars that look like a little grip sliding in a groove/track, text boxes that are recessed, etc); and if anyone's doing 3D hints, then I want them to be actual 3D that works properly when windows are at arbitrary angles (not parallel to the screen) or when there's dynamic lighting/shadow, etc. I do realise that some people are currently doing "flat shaded" (and there's no reason a GUI for my system can't do "flat shaded"), but nobody can guarantee that it'll still be fashionable next year or the year after (and I honestly hope it's not - I don't like it and think (e.g.) Metro looks like crap).
Owen wrote:
Brendan wrote:If the application says there's only ambient light and no light sources, then there can't be any shadows. For thermal vision you'd use textures with different colours (e.g. red/yellow/white). For night vision you could do the same (different textures) but the video driver could provide a standard shader for that (e.g. find the sum of the 3 primary colours and convert to green/white) to avoid different textures.
Nobody implements thermal vision like that, nor wants to do so. Thermal vision is normally implemented by using a single-channel texture with the "heat" of the object embedded within it. The shader looks that up on a colour ramp to determine the actual display colour - after doing things like adjusting for distance, which can't be efficiently done under your model.
To me, the only important thing is whether or not I care. I do care a little, but not enough to turn a clean/easy API into a painful mess just for the sake of a few special cases that are irrelevant 99% of the time for 99% of people.
Owen wrote:NOTE: I don't happen to be a fan of the current shader distribution model. I think the source-based system is a bit crap, and we would be better off with a standard bytecode and bytecode distribution. 3D engines like Unreal 3 and Ogre3D (which I'm presently working on) implement systems which build shaders based upon defined material properties. In particular, Unreal lets artists create reasonably well-optimized shaders by snapping together premade blocks in a graph (but still allows the developer to add their own blocks for their needs, or write the shader from scratch). I'd much rather we got a bytecode to work with - it would make implementing our runtime-generated shader system so much easier, and would mean we weren't generating text for the compiler to immediately parse. However, bytecode shaders > text shaders > no shaders.
Portable byte-code shaders seem to make sense; but they force a specific method onto the rendering pipeline. For example, I can't imagine the same portable byte-code shader working correctly for rasterisation and ray tracing, or working correctly for both 2D and 3D displays. It's only a partial solution to the "future-proofing" problem, which isn't enough.
Owen wrote:Then there's culling. Older game engines (mostly Quake-derived) use binary space partitioning. It's conceptually a very simple system, and back when overdraw was expensive it was perfect (BSP has no overdraw); but today it's inappropriate (overdraw is less expensive than walking the BSP tree) and it requires heavy pre-processing (so it isn't usable for dynamic objects). Some engines use octrees. They have the best scaling of all the methods usable for 3D graphics in terms of the number of objects in the scene, yet in practice they, and all tree-based systems, suck for realistic scene sizes: yes, they scale up, but they don't scale down.

So systems have been moving away from hierarchical culling, to the point where many games with massive open worlds today just use a simple array of every object in the scene (e.g. Just Cause 2).

You're essentially implying your OS/graphics driver becomes the 3D engine without it having any understanding of the actual needs of the scene type underneath.
No; what I'm saying is that the applications describe what they want, and renderers draw what has been described; such that the renderer (e.g. a video driver) can do software rendering or binary space partitioning or rasterisation with overdraw or ray tracing or use "painter's algorithm" or use z-buffers or do whatever else the renderer's creator thinks is the best method for the specific piece of hardware involved, without any other piece of software needing to care how rendering happened to be implemented.

Basically; if application/game developers need to care how the renderer works, then the graphics API is a failure.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.