Graphics API and GUI

Ready4Dis
Member
Posts: 571
Joined: Sat Nov 18, 2006 9:11 am

Re: Graphics API and GUI

Post by Ready4Dis »

a) If the video driver manages a dependency graph of "things" (where each thing might be a flat 2D texture, but might be texture with bumps, or a voxel space, or a point cloud, or a set of meshes and textures, or ....); then nothing prevents you from saying (e.g.) "This application's window is a point cloud and not a flat 2D texture".
Why is the video driver managing a dependency graph? The video driver should only be worried about doing what it's told. It gets vertex, color, and texture coord buffers, as well as others (for bump, parallax, etc. mapping, and vertex/pixel shader inputs), and is then told to render triangles using specific textures into a buffer. It doesn't know anything about the graph; the application stores the graph of what it needs to render (regardless of what any other app is doing or using). An indoor environment would require a completely different structure than an outdoor environment, which is completely different than handling clickable buttons on an app. The compositor is basically just the glue that holds the apps together. After each application renders its own buffer, the compositor's job is to put them on screen in the correct location. The application itself doesn't determine where it is located on screen (on the desktop, in a task manager, in a task bar, etc.); it only worries about rendering to its window.
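Roughly what I mean by "glue", as a sketch with made-up names (not any real API): each app has already rendered into its own off-screen buffer, and the compositor only decides where those finished buffers land on screen.

Code: Select all

#include <stdint.h>
#include <string.h>

/* Hypothetical types; not from any real driver interface. */
typedef struct { int w, h; uint32_t *pixels; } surface_t;

typedef struct {
    surface_t *contents;   /* buffer the application rendered into          */
    int x, y;              /* placement chosen by the GUI, not by the app   */
    int visible;
} window_t;

/* Copy one finished, app-rendered surface onto the screen at its window
 * position (assumes x,y are on-screen; clips the right/bottom edges).      */
static void blit(surface_t *dst, const surface_t *src, int dx, int dy)
{
    int w = src->w, h = src->h;
    if (dx + w > dst->w) w = dst->w - dx;
    if (dy + h > dst->h) h = dst->h - dy;
    for (int row = 0; row < h; row++)
        memcpy(&dst->pixels[(dy + row) * dst->w + dx],
               &src->pixels[row * src->w],
               (size_t)w * sizeof(uint32_t));
}

/* The "glue": the compositor only decides where each finished buffer goes. */
void compose_frame(surface_t *screen, window_t *win, int count)
{
    for (int p = 0; p < screen->w * screen->h; p++)
        screen->pixels[p] = 0xFF202020;            /* desktop background    */
    for (int i = 0; i < count; i++)
        if (win[i].visible)
            blit(screen, win[i].contents, win[i].x, win[i].y);
}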
b) The entire video system involves generating "things" by combining "sub-things" in some way; and because this "combine sub-things" functionality is ubiquitous and used throughout the entire system it's silly to consider "compositor" as a distinct/separate piece that is somehow important in any way at all. It would be more accurate to say that the system is not crap and therefore has no need for a "compositor".
I think we are talking about slightly different levels. When I say graphics API, all I mean is an API like OpenGL. This doesn't have any sort of dependency graph; that is a higher-level thing, unless you're talking about not using a graphics API but having an entire graphics engine instead?
For 3D games, most things don't change every frame and there's significant opportunities (and techniques) to recycle data to improve performance; including VBOs, distant impostors, portal rendering, overlays (for menu system and/or HUD), etc.
While the entire scene doesn't always change, typically what is on screen does, and if you are rendering what's on screen, you are going to have to keep doing so. If a portal for an indoor rendering engine isn't visible, you don't render anything behind it. But that isn't the job of the graphics API; that's the game or graphics engine (which is typically part of the game itself, not the OS). A lot of buffers will not change (vertices stay in the same place and are animated using vertex shaders for the most part nowadays), but why does the graphics API care? Again, the application knows when that buffer needs changing. There are a lot of techniques for rendering 3D games, but I don't think many of them should be in the graphics API, otherwise you are trying to build an all-inclusive game engine inside of your graphics API as part of your OS.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Graphics API and GUI

Post by Brendan »

Hi,
Ready4Dis wrote:
a) If the video driver manages a dependency graph of "things" (where each thing might be a flat 2D texture, but might be texture with bumps, or a voxel space, or a point cloud, or a set of meshes and textures, or ....); then nothing prevents you from saying (e.g.) "This application's window is a point cloud and not a flat 2D texture".
Why is the video driver managing a dependency graph? The video driver should only be worried about doing what it's told. It gets vertex, color, and texture coord buffers, as well as others (for bump, parallax, etc. mapping, and vertex/pixel shader inputs), and is then told to render triangles using specific textures into a buffer.
It's not that simple. E.g.:
  • Video driver is told to render triangles using specific textures to create "texture #4"; then
  • Video driver is told to render triangles using more specific textures that includes "texture #4" to create "texture #9"; then
  • Video driver is told to render triangles using more specific textures that includes "texture #4" and "texture #9" to create "texture #33"; then
    ....
It's like makefiles. You don't just have a single massive list of commands that rebuilds all of the pieces and then builds/links the final executable every single time a project is built; because that's extremely inefficient and incredibly stupid. In the same way you don't just have a single massive list of commands that renders the entire screen from scratch every frame. Instead you cache intermediate stuff (objects files or texture or whatever) and keep track of dependencies; to ensure that you only do whatever needs to be done (and don't waste massive amounts of time rebuilding/rendering things for no sane reason at all).
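To make the makefile analogy concrete, the bookkeeping is roughly this (hypothetical code, not my actual driver); each cached "thing" is a node, and a node is only re-rendered when its own description changed or something it's built from was re-rendered:

Code: Select all

#include <stdbool.h>

/* One cached "thing" (a texture, a window's contents, a cube map, ...).    */
typedef struct node {
    bool changed;                  /* its own description was modified       */
    unsigned visited_frame;        /* last frame this node was considered    */
    bool rebuilt;                  /* was it re-rendered during that frame?  */
    struct node *deps[8];          /* prerequisites, like a make rule's inputs */
    int dep_count;
    void (*render)(struct node *); /* how to rebuild this node's cached result */
} node_t;

/* Like running "make" on the screen's node: a node is re-rendered only if
 * its own description changed or one of its prerequisites was re-rendered.
 * Returns true if this node was re-rendered this frame.                    */
bool update(node_t *n, unsigned frame)
{
    if (n->visited_frame == frame)      /* shared sub-node, already handled  */
        return n->rebuilt;
    n->visited_frame = frame;

    bool rebuild = n->changed;
    for (int i = 0; i < n->dep_count; i++)
        if (update(n->deps[i], frame))  /* bring prerequisites up to date    */
            rebuild = true;

    if (rebuild) {
        n->render(n);                   /* re-render into the cached texture */
        n->changed = false;
    }
    n->rebuilt = rebuild;
    return rebuild;
}

Call it once per frame on the screen's node (with an increasing frame number, starting at 1) and everything that's still valid gets reused as-is.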
Ready4Dis wrote:It doesn't know anything about the graph; the application stores the graph of what it needs to render (regardless of what any other app is doing or using).
No. The graph is a system wide thing, where different parts of the graph are from different processes (applications, GUIs, whatever). There is very little practical difference between (e.g.) generating a "rear view mirror" texture and slapping that texture into a larger "player's view from the car's driver seat" texture and generating a "racing car game" texture and slapping that texture into a larger "GUI's desktop texture". The only practical difference is who "owns" (has permission to modify) which nodes in the global dependency graph (where "node ownership" is irrelevant for rendering).
Ready4Dis wrote:An indoor environment would require a completely different structure than an outdoor environment, which is completely different than handling clickable buttons on an app.
You provide functionality; and that functionality can be used in multiple different ways by different pieces of software for very different reasons. You provide "render to texture" functionality; someone uses it for dynamically generated distant impostors, someone else uses it for portal rendering and someone else uses it for slapping window contents onto a GUI's screen. As far as the video driver cares it's all the same.
Ready4Dis wrote:After each application renders its own buffer...
Applications should not render their own buffer.
Ready4Dis wrote:..the compositors job is to put them on screen in the correct location. The application itself doesn't determine where it is located on screen (on the desktop, in a task manager, in a task bar, etc), it only worries about rendering to it's window.
In the same way that the code that asks the video driver to render the "rear view mirror" texture doesn't need to know or care what the final texture might be used for, or where or how that texture might be inserted into a racing car game's "player's view from the car's driver seat". It's the exact same functionality.
Ready4Dis wrote:
b) The entire video system involves generating "things" by combining "sub-things" in some way; and because this "combine sub-things" functionality is ubiquitous and used throughout the entire system it's silly to consider "compositor" as a distinct/separate piece that is somehow important in any way at all. It would be more accurate to say that the system is not crap and therefore has no need for a "compositor".
I think we are talking about slightly different levels. When I say graphics API, all I mean is an API like OpenGL. This doesn't have any sort of dependency graph; that is a higher-level thing, unless you're talking about not using a graphics API but having an entire graphics engine instead?
Take a look at things like VBOs, render to texture, portal rendering, dynamically generated distant impostors, etc. Basically (assuming the game developer isn't incompetent) their final scene is the combination of many smaller pieces which can be updated independently and have dependencies. The only difference is that OpenGL doesn't track dependencies between pieces, which forces game developers to do it themselves. In other words, the dependence graph is there, it's just that it's typically explicitly written into a game's code because OpenGL sucks.
Ready4Dis wrote:
For 3D games, most things don't change every frame and there's significant opportunities (and techniques) to recycle data to improve performance; including VBOs, distant impostors, portal rendering, overlays (for menu system and/or HUD), etc.
While the entire scene doesn't always change, typically what is on screen does, and if you are rendering what's on screen, you are going to have to keep doing so. If a portal for an indoor rendering engine isn't visible, you don't render anything behind it. But that isn't the job of the graphics API; that's the game or graphics engine (which is typically part of the game itself, not the OS). A lot of buffers will not change (vertices stay in the same place and are animated using vertex shaders for the most part nowadays), but why does the graphics API care? Again, the application knows when that buffer needs changing.
A process (e.g. GUI) may not know when a texture it uses (that belongs to a completely different/unrelated process - e.g. an application) has changed. When the application tells the video driver to change its texture, the video driver knows the texture was changed even though the GUI doesn't.

Now imagine a widget tells the video driver "use data from the file "/foo/bar/hello.texture" for this texture"; the widget uses that texture in another texture; which is then included in an application (running as a separate process); and that application is embedded into a second application; and the second application's texture is included into the GUI's texture. Then imagine the user modifies the file "/foo/bar/hello.texture" on disk, and the VFS sends a notification to the video driver (causing the screen to be updated without any of the 4 separate processes involved doing anything at all).
Ready4Dis wrote:There are a lot of techniques for rendering 3D games, but I don't think many of them should be in the graphics API, otherwise you are trying to build an all-inclusive game engine inside of your graphics API as part of your OS.
Recycling "include child object/s within parent object" support that has to exist (and currently does exist) within the video driver/GPU does not mean that it suddenly becomes a game engine (complete with collision detection, physics, sound, scripting, AI, etc).


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Graphics API and GUI

Post by Rusky »

You already need to separate the concept of "use data from the file '/foo/bar/hello.texture' for this texture" from "I'm ready for this texture to be displayed," to avoid showing the user half-finished results. Typically this is done by the application signalling the driver with something like SwapBuffers(), at which point there's no reason not to have applications render into their own buffers (especially with your ideal of a common message-based graphics API, where the dependency graph would just be a graph of applications sending SwapBuffers() messages to each other).
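To sketch what I mean in a message-based setting (all names here are made up, not anyone's actual API): the app draws into its own back buffer, announces it, and those announcements are exactly the edges of the dependency graph.

Code: Select all

#include <stdio.h>

/* Hypothetical message-based "I'm done drawing" protocol; every name here
 * is invented for illustration, not part of any real API.                  */
typedef struct {
    int window_id;      /* which node in the driver's graph just finished   */
    int buffer_index;   /* which of the app's two buffers is now complete   */
} buffer_ready_msg_t;

/* Stand-ins for the real rendering and IPC calls. */
static void draw_scene_into(int window_id, int buffer)
{
    (void)window_id; (void)buffer;    /* the app's own rendering goes here  */
}
static void send_to_driver(const buffer_ready_msg_t *m)
{
    printf("window %d: buffer %d is ready\n", m->window_id, m->buffer_index);
}

/* Application side: render into the back buffer, announce it, then flip.
 * The receiver can treat the message exactly like any other dependency
 * becoming "dirty" - nothing half-finished is ever picked up.              */
void app_frame(int window_id, int *back)
{
    draw_scene_into(window_id, *back);
    buffer_ready_msg_t m = { window_id, *back };
    send_to_driver(&m);               /* the SwapBuffers() equivalent       */
    *back ^= 1;                       /* swap 0 <-> 1 for the next frame    */
}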
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Graphics API and GUI

Post by Brendan »

Hi,
Rusky wrote:You already need to separate the concept of "use data from the file '/foo/bar/hello.texture' for this texture" from "I'm ready for this texture to be displayed," to avoid showing the user half-finished results. Typically this is done by the application signalling the driver with something like SwapBuffers(), at which point there's no reason not to have applications render into their own buffers (especially with your ideal of a common message-based graphics API, where the dependency graph would just be a graph of applications sending SwapBuffers() messages to each other).
No; for my system the video driver loads the texture from the file itself and knows exactly when it's been loaded; and if it must be displayed before it's loaded the video driver just uses "default grey" as a temporary substitute (or possibly some other colour, if the application provided an alternative colour when providing the file name). There is no attempt to avoid half-finished results - in all cases the video driver just does the best it can with the time it has (and if that means large grey blobs everywhere because the video driver can't render everything before the frame's deadline then tough luck).
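As a rough sketch (hypothetical structures, nothing like my actual code), the per-texture logic is about this simple:

Code: Select all

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-texture state in the driver; not my actual code. */
typedef struct {
    bool loaded;               /* set once the VFS has delivered the file    */
    uint32_t fallback_colour;  /* 0xFF808080 "default grey", or app-supplied */
} texture_slot_t;

/* When rendering: either the real texture is used, or a solid colour is.   */
uint32_t colour_if_not_ready(const texture_slot_t *t, bool *use_real_texture)
{
    *use_real_texture = t->loaded;
    return t->fallback_colour;     /* ignored once the real texture is ready */
}

/* VFS completion callback: the file behind the texture has arrived. Every
 * node that uses this texture is then marked as needing a re-render, which
 * is the same "something changed" handling as any other update.            */
void on_texture_loaded(texture_slot_t *t)
{
    t->loaded = true;
    /* mark_dependents_dirty(t);  -- hypothetical hook into the graph       */
}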

Typically; the application loads the texture into its own address space where it's useless and then transfers it to where it actually needs to be (which is idiotic); and applications have buffers (which is idiotic); and applications have to do "temporal micromanaging" with SwapBuffers() or similar (which is idiotic); and manage dependencies themselves (which is idiotic); and even deal with device dependent resolution/colour depth/pixel format (which is idiotic).

I shouldn't need to point out that part of the reason I'm an OS developer in the first place (e.g. and not an application developer) is to escape from the pure stupidity of "typical".


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Ready4Dis
Member
Posts: 571
Joined: Sat Nov 18, 2006 9:11 am

Re: Graphics API and GUI

Post by Ready4Dis »

No; for my system the video driver loads the texture from the file itself and knows exactly when it's been loaded; and if it must be displayed before it's loaded the video driver just uses "default grey" as a temporary substitute (or possibly some other colour, if the application provided an alternative colour when providing the file name). There is no attempt to avoid half-finished results - in all cases the video driver just does the best it can with the time it has (and if that means large grey blobs everywhere because the video driver can't render everything before the frame's deadline then tough luck).
I see a few apparent problems. A lot of games like to specifically disallow textures from being updated and check a checksum to make sure they aren't, so somebody can't go into a texture and set the alpha level and see through objects. So, if the graphics API reloads the texture without the app knowing, you just bypassed the game's built-in protection against cheaters :).
You don't just have a single massive list of commands that rebuilds all of the pieces and then builds/links the final executable every single time a project is built;
No, I don't; only the parts that need to update in real time will do so, and the parts that don't have to won't.
No. The graph is a system wide thing, where different parts of the graph are from different processes (applications, GUIs, whatever). There is very little practical difference between (e.g.) generating a "rear view mirror" texture and slapping that texture into a larger "player's view from the car's driver seat" texture and generating a "racing car game" texture and slapping that texture into a larger "GUI's desktop texture". The only practical difference is who "owns" (has permission to modify) which nodes in the global dependency graph (where "node ownership" is irrelevant for rendering).
Hmm.. a system wide graph that anyone can access at any time? Why would anybody need to see the building blocks of the game? They just want to see the final results, unless you are talking about everything sharing textures and buffers?
Applications should not render their own buffer.
Sorry, bad choice of words. What I was saying is that the app sends the rendering commands through the graphics API to its own 'window' or 'texture'. The app itself isn't concerned with how the final texture is displayed at the end, just that it finished. This allows it to be rendered in any way you choose (in my case, the GUI, or compositor, would typically render all the apps' textures in a way that the user can interface with each application).
Take a look at things like VBOs, render to texture, portal rendering, dynamically generated distant impostors, etc. Basically (assuming the game developer isn't incompetent) their final scene is the combination of many smaller pieces which can be updated independently and have dependencies. The only difference is that OpenGL doesn't track dependencies between pieces, which forces game developers to do it themselves. In other words, the dependence graph is there, it's just that it's typically explicitly written into a game's code because OpenGL sucks.
I am very familiar with graphics programming and understand scene graphs and how things are rendered. I understand that OpenGL doesn't track dependencies, but how could it know enough about them to track? If I am rendering some reflective surface with environmental (reflective) mapping and render the scene to a cube map in order to generate it, why does the graphics API need to know what it's being used for? I can choose in my game engine to update it once every other frame to save resources, but what does that matter to the graphics API? Are you suggesting that the graphics API knows that the object relies on the cube map and if the object isn't drawn, then the cube map isn't updated automagically without the game engine needing to specifically worry about it (which is really easy to do in the app, but I can see the merits if it's done at the graphics API level)? Or are you saying that if the cube map is updated, then the object is updated? In the case of a game, unless the player is standing still, it needs to be updated anyways (even if the player is staying still, if *something in view* is moving, then it needs to be updated).

You seem to have a more concrete vision of what you want to accomplish than I do, glad you can offer some insight.
Ready4Dis
Member
Posts: 571
Joined: Sat Nov 18, 2006 9:11 am

Re: Graphics API and GUI

Post by Ready4Dis »

BTW, just in case anyone cares, the binaries generated for SPIR-V from glslang are not formed properly. I started on a disassembler for SPIR-V for my software engine and found that it generates incorrect opcodes! I am going through and hand-fixing the opcodes in a hex editor just to test my disassembler. I hope they fix the tools sooner rather than later, as it's already a pain even when everything is working ;).
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Graphics API and GUI

Post by Rusky »

Brendan wrote:Typically; the application loads the texture into its own address space where it's useless and then transfers it to where it actually needs to be (which is idiotic); and applications have buffers (which is idiotic); and applications have to do "temporal micromanaging" with SwapBuffers() or similar (which is idiotic); and manage dependencies themselves (which is idiotic); and even deal with device dependent resolution/colour depth/pixel format (which is idiotic).
No, typically the application loads the texture into its own address space and then to the graphics hardware, and there's no way to avoid that other than by picking some other arbitrary "useless" address space for it to pass through, because there's no DMA from the disk to the VRAM. Though I suppose it makes sense as a fast path for your distributed OS, which is distinctly not Ready4Dis's OS. You would still need a way to load a texture from memory regardless, in case the application generated it itself on the CPU.

As for resolution independence, I'm not sure what it has to do with SwapBuffers().
Brendan wrote:I shouldn't need to point out that part of the reason I'm an OS developer in the first place (e.g. and not an application developer) is to escape from the pure stupidity of "typical".
I'd say "avoiding large grey blobs everywhere" qualifies as the exact opposite of "pure stupidity."
Antti
Member
Posts: 923
Joined: Thu Jul 05, 2012 5:12 am
Location: Finland

Re: Graphics API and GUI

Post by Antti »

Brendan wrote:There is no attempt to avoid half-finished results - in all cases the video driver just does the best it can with the time it has (and if that means large grey blobs everywhere because the video driver can't render everything before the frame's deadline then tough luck).
I have been thinking of ideas for providing half-finished results as smoothly as possible. Instead of having large grey blobs, maybe there is a way of keeping the quality uniformly low across the whole frame. Also, maybe there could be some creative ways of integrating render flaws into some meaningful context (e.g. motion blur). If there were a warning like "renderer is not going to make it", it would be possible to change the strategy and provide some affordable rendering in a controlled way.

It may be hard to implement any of this, but it would be quite impressive to see dynamically adjusted graphics performance so that the hardware's load is always optimal, i.e. not necessarily the highest possible, and the overall look'n'feel of the system is always very fast and responsive. To some extent this has been done already, but "something is still missing" and a new system could do this elegantly enough.
Ready4Dis
Member
Posts: 571
Joined: Sat Nov 18, 2006 9:11 am

Re: Graphics API and GUI

Post by Ready4Dis »

I understand the grey blob thing (of course you mean it could be a texture that says unavailable or anything else that you want), especially on a highly distributed system where loading a resource may be a slow process that you don't want to just sit and wait for. Loading straight from disk as part of the graphics API may have its merits though, besides the user being able to cheat by changing textures or shaders ;). It means that if you supply a system that knows how to load all types of file formats, then the application doesn't really need to worry about specifically loading an image; it just has to know where to find that image (and possibly check whether the image format is supported, or that can just be a response from the graphics driver saying the image isn't supported, cue grey blob). I still see a few issues with this of course; what about things like mip-mapping? Do you load the image through the graphics driver (which has to open it in RAM, then send it to VRAM), then ask the graphics driver for a copy (from VRAM -> application), then generate mip-map levels, and regenerate the texture, and upload it back into VRAM? Or do different types of mip-map generators need to be part of the driver and you send it as a request, or do you only allow apps to use mipmapped textures if the file format supports them?
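For what it's worth, if the driver already owns the highest quality image, generating the mip chain in the driver is simple enough that no pixels ever need to bounce back to the application; a (hypothetical) 2x2 box filter per level looks roughly like this:

Code: Select all

#include <stdint.h>

/* Average each 2x2 block of one mip level into a single pixel of the next
 * level down. Assumes 32-bit RGBA pixels and even width/height, purely to
 * keep the sketch short.                                                    */
void downsample_level(const uint32_t *src, int w, int h, uint32_t *dst)
{
    for (int y = 0; y < h / 2; y++) {
        for (int x = 0; x < w / 2; x++) {
            uint32_t p[4] = {
                src[(2 * y)     * w + 2 * x], src[(2 * y)     * w + 2 * x + 1],
                src[(2 * y + 1) * w + 2 * x], src[(2 * y + 1) * w + 2 * x + 1],
            };
            uint32_t out = 0;
            for (int c = 0; c < 4; c++) {           /* per channel: R,G,B,A  */
                uint32_t sum = 0;
                for (int i = 0; i < 4; i++)
                    sum += (p[i] >> (8 * c)) & 0xFF;
                out |= ((sum + 2) / 4) << (8 * c);  /* rounded average       */
            }
            dst[y * (w / 2) + x] = out;
        }
    }
}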
Brendan: How is your kernel coming along anyway? I know you have lots of plans, and seeing as you're doing something so different, I can imagine it's going to take a while to get all the specifics worked out completely. I would be interested in checking it out when you get it to a point where it's at least partially usable. I've got a couple of test machines at the house (including an Eee PC 900 netbook).
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Graphics API and GUI

Post by Brendan »

Hi,
Ready4Dis wrote:
No; for my system the video driver loads the texture from the file itself and knows exactly when it's been loaded; and if it must be displayed before it's loaded the video driver just uses "default grey" as a temporary substitute (or possibly some other colour, if the application provided an alternative colour when providing the file name). There is no attempt to avoid half-finished results - in all cases the video driver just does the best it can with the time it has (and if that means large grey blobs everywhere because the video driver can't render everything before the frame's deadline then tough luck).
I see a few apparent problems. A lot of games like to specifically disallow textures from being updated and check a checksum to make sure they aren't, so somebody can't go into a texture and set the alpha level and see through objects. So, if the graphics API reloads the texture without the app knowing, you just bypassed the game's built-in protection against cheaters :).
If the OS's file system security is so lame that it can't prevent people from tampering with a game's textures, then the OS's file system security is probably so lame it can't prevent people from tampering with a game's executable, shared libraries, scripts, etc either.

If there's a valid reason (e.g. photo editor) an application could still upload the texture data itself; it's just less common, less efficient and more hassle for the app/game developer.
Ready4Dis wrote:
No. The graph is a system wide thing, where different parts of the graph are from different processes (applications, GUIs, whatever). There is very little practical difference between (e.g.) generating a "rear view mirror" texture and slapping that texture into a larger "player's view from the car's driver seat" texture and generating a "racing car game" texture and slapping that texture into a larger "GUI's desktop texture". The only practical difference is who "owns" (has permission to modify) which nodes in the global dependency graph (where "node ownership" is irrelevant for rendering).
Hmm.. a system wide graph that anyone can access at any time? Why would anybody need to see the building blocks of the game? They just want to see the final results, unless you are talking about everything sharing textures and buffers?
A system wide graph that nobody (except video driver) can access directly; where processes ask the video driver "change node #X to description Y" and the video driver checks if the requesting process has permission or not (if the requesting process is the node's owner or not), and either updates the description or refuses.
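Very roughly (names invented for illustration), every "change node #X to description Y" request goes through a check like this before the graph is touched:

Code: Select all

#include <stdbool.h>

/* Hypothetical node header in the video driver's system-wide graph. */
typedef struct {
    int owner_pid;      /* process that created/owns this node               */
    /* ... the node's current description follows ...                        */
} graph_node_t;

/* Stand-in for whatever actually rewrites the node's description. */
static void apply_description(graph_node_t *node, const void *desc, int size)
{
    (void)node; (void)desc; (void)size;
}

/* Ownership is only checked here; it plays no part in rendering at all.     */
bool try_update_node(graph_node_t *node, int requesting_pid,
                     const void *new_description, int description_size)
{
    if (node->owner_pid != requesting_pid)
        return false;                        /* refuse: not the node's owner */
    apply_description(node, new_description, description_size);
    /* ...then mark everything that depends on this node as needing a
     * re-render, same as for any other change.                              */
    return true;
}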
Ready4Dis wrote:
Take a look at things like VBOs, render to texture, portal rendering, dynamically generated distant impostors, etc. Basically (assuming the game developer isn't incompetent) their final scene is the combination of many smaller pieces which can be updated independently and have dependencies. The only difference is that OpenGL doesn't track dependencies between pieces, which forces game developers to do it themselves. In other words, the dependence graph is there, it's just that it's typically explicitly written into a game's code because OpenGL sucks.
I am very familiar with graphics programming and understand scene graphs and how things are rendered. I understand that OpenGL doesn't track dependencies, but how could it know enough about them to track? If I am rendering some reflective surface with environmental (reflective) mapping and render the scene to a cube map in order to generate it, why does the graphics API need to know what it's being used for?
For rendering, the video driver has to know how the reflective surface uses the cube. For determining what to update; the video driver only needs to know that the reflective surface uses/depends on the cube, which is a tiny subset of the information the video driver must know (for rendering purposes) anyway.
Ready4Dis wrote:I can choose in my game engine to update it once every other frame to save resources, but what does that matter to the graphics API? Are you suggesting that the graphics API knows that the object relies on the cube map and if the object isn't drawn, then the cube map isn't updated automagically without the game engine needing to specifically worry about it (which is really easy to do in the app, but I can see the merits if it's done at the graphics API level)? Or are you saying that if the cube map is updated, then the object is updated? In the case of a game, unless the player is standing still, it needs to be updated anyways (even if the player is staying still, if *something in view* is moving, then it needs to be updated).
I'm saying that:
  • If the cube map's description is changed then:
    • If the camera angle remained the same; the video driver knows the cube map and the reflective surface need to be updated.
    • If the camera angle also changed; the video driver knows the cube map and the reflective surface need to be updated.
  • If the cube map's description remains the same:
    • If the camera angle also remained the same; the video driver knows that neither the cube map nor the reflective surface needs to be updated.
    • If the camera angle changed; the video driver knows that only the reflective surface needs to be updated.
In other words (makefiles!):

Code: Select all

cubemap: cubemap_description
    update_cube_map

reflective_surface: reflective_surface_description cubemap
    update_reflective_surface
Ready4Dis wrote:You seem to have a more concrete vision of what you want to accomplish than I do, glad you can offer some insight.
Note that my main objective here isn't to convince you to implement my graphics system; my main objective is to get you to think about possibilities beyond what you've seen in existing OSs and create your own unique system. Maybe it makes sense if the video driver doesn't use the GPU for graphics at all and only provides support for GPGPU (for HPC?). Maybe you can think of an efficient way to do graphics with point clouds (with no textures for anything) that has some interesting benefits. Maybe you discover an entirely new way to handle lighting/shadows. All I know is that if you fail to question existing techniques it's impossible to improve on them; and virtually everything that is currently considered "state of the art" exists because someone questioned whatever came before it.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Graphics API and GUI

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:Typically; the application loads the texture into its own address space where it's useless and then transfers it to where it actually needs to be (which is idiotic); and applications have buffers (which is idiotic); and applications have to do "temporal micromanaging" with SwapBuffers() or similar (which is idiotic); and manage dependencies themselves (which is idiotic); and even deal with device dependent resolution/colour depth/pixel format (which is idiotic).
No, typically the application loads the texture into its own address space and then to the graphics hardware, and there's no way to avoid that other than by picking some other arbitrary "useless" address space for it to pass through, because there's no DMA from the disk to the VRAM. Though I suppose it makes sense as a fast path for your distributed OS, which is distinctly not Ready4Dis's OS. You would still need a way to load a texture from memory regardless, in case the application generated it itself on the CPU.
There is a fast path from VFS cache (RAM) to video card's display memory that doesn't involve passing through a useless/unnecessary address space.
Rusky wrote:
Brendan wrote:I shouldn't need to point out that part of the reason I'm an OS developer in the first place (e.g. and not an application developer) is to escape from the pure stupidity of "typical".
I'd say "avoiding large grey blobs everywhere" qualifies as the exact opposite of "pure stupidity."
I wouldn't.

For video there are 2 choices:
  • "fixed frame rate, variable quality"; where reduced quality (including "grey blobs" in extreme conditions) is impossible to avoid whenever highest possible quality can't be achieved in the time before the frame's deadline (and where reduced quality is hopefully a temporary condition and gone before the user notices anyway).
  • "fixed quality, variable frame rate"; where the user is typically given a large number of "knobs" to diddle with (render distance, texture quality, amount of super-sampling, lighting quality, ...); and where the user is forced to make a compromise between "worse quality than necessary for average frames" and "unacceptable frame rates for complex frames" (which means it's impossible for the user to set all those "knobs" adequately and they're screwed regardless of what they do).

Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Ready4Dis
Member
Posts: 571
Joined: Sat Nov 18, 2006 9:11 am

Re: Graphics API and GUI

Post by Ready4Dis »

If the OS's file system security is so lame that it can't prevent people from tampering with a game's textures, then the OS's file system security is probably so lame it can't prevent people from tampering with a game's executable, shared libraries, scripts, etc either.
So, then the installed application (or are you only using distributed apps?) (or the graphics driver?) must lock the file so no modifications can take place to it while the game is running (basically, at any point after the file integrity check)? That would make sense and solve that issue, which makes the video driver being aware of its resource locations useful (like you said, one less hoop to jump through while loading). Maybe I don't think as far outside of the box as you, but it's near impossible to stop someone from tampering with files if they really want to; the key is that they can be checked at run-time (of course, that doesn't stop someone from editing the .exe file to skip the checksum, but that's an entire topic on its own) and then locked against changes (although there's nothing to stop you from writing a file system driver that routes the calls through an intercept app that then modifies said textures).
For rendering, the video driver has to know how the reflective surface uses the cube. For determining what to update; the video driver only needs to know that the reflective surface uses/depends on the cube, which is a tiny subset of the information the video driver must know (for rendering purposes) anyway.
Yes, but does the graphics system also know that through a portal there is an animated object that is moving? That means the reflective surface needs to be updated as well, even though neither the cube map description nor the angle has changed. But it only needs to be updated if that portal is visible from the perspective of the object, otherwise it doesn't affect it. Or would the game engine have to notify the video driver that the texture is dirty and needs refreshing? And if it needs refreshing, then who is responsible for doing the actual rendering of said objects? It has to be the game engine, otherwise the video driver needs to know about BSPs, octrees, portals, etc. And if the game engine needs to determine when something might have changed and do the rendering to update it, then why does the video driver need to keep a large graph of everything?
In other words (makefiles!):
Code: Select all

cubemap: cubemap_description
    update_cube_map

reflective_surface: reflective_surface_description cubemap
    update_reflective_surface
See my previous statement; that isn't nearly enough information to update the reflective surface.
Note that my main objective here isn't to convince you to implement my graphics system; my main objective is to get you to think about possibilities beyond what you've seen in existing OSs and create your own unique system. Maybe it makes sense if the video driver doesn't use the GPU for graphics at all and only provides support for GPGPU (for HPC?). Maybe you can think of an efficient way to do graphics with point clouds (with no textures for anything) that has some interesting benefits. Maybe you discover an entirely new way to handle lighting/shadows. All I know is that if you fail to question existing techniques it's impossible to improve on them; and virtually everything that is currently considered "state of the art" exists because someone questioned whatever came before it.
I understand, and I'm not just questioning it because I think it's wrong; I want to understand it from another perspective and see if I can come up with a better way that suits me. I truly appreciate the time and effort you put into explaining everything and answering my questions.
For video there are 2 choices:
"fixed frame rate, variable quality"; where reduced quality (including "grey blobs" in extreme conditions) is impossible to avoid whenever highest possible quality can't be achieved in the time before the frame's deadline (and where reduced quality is hopefully a temporary condition and gone before the user notices anyway).
"fixed quality, variable frame rate"; where the user is typically given a large number of "knobs" to diddle with (render distance, texture quality, amount of super-sampling, lighting quality, ...); and where the user is forced to make a compromise between "worse quality than necessary for average frames" and "unacceptable frame rates for complex frames" (which means it's impossible for the user to set all those "knobs" adequately and they're screwed regardless of what they do).
The third choice that lots of games use is, of course, dynamic LOD (level of detail). It may update things in the background more slowly (animation updates, inverse kinematics), use billboards, reduce geometry, turn off certain features, etc. These can't all be part of the video driver unless each app tells it specifically how to render different quality levels. If the frame rates start dipping they can drop from parallax mapping, normal mapping, or bump mapping to just plain texturing. They can do more fancy features for up-close objects and use less intense features for things further away or out of focus, etc. I just don't see how you can dump that off as the job of the video driver without it knowing a LOT more information about how the game and artists designed the game.
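Just to illustrate the kind of per-object decision I mean (all the thresholds here are made up): the engine picks a detail level from distance and from how far over budget the last frame was, because only the engine knows what "cheaper" means for its own content.

Code: Select all

/* Hypothetical LOD pick made by the game engine; the video driver never
 * sees any of this, it just gets whatever the engine decided to submit.    */
typedef enum {
    LOD_PARALLAX,        /* parallax + normal mapping, full geometry        */
    LOD_NORMAL_MAP,      /* normal mapping only                             */
    LOD_PLAIN_TEXTURE,   /* plain texturing, reduced geometry               */
    LOD_BILLBOARD        /* flat billboard for far-away objects             */
} lod_t;

lod_t choose_lod(float distance, float last_frame_ms, float budget_ms)
{
    /* Over budget: shed the most expensive techniques first. */
    int pressure = (last_frame_ms > budget_ms) + (last_frame_ms > 1.5f * budget_ms);

    if (distance > 200.0f)                  return LOD_BILLBOARD;
    if (distance > 80.0f || pressure >= 2)  return LOD_PLAIN_TEXTURE;
    if (distance > 30.0f || pressure >= 1)  return LOD_NORMAL_MAP;
    return LOD_PARALLAX;                    /* close up and on budget       */
}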
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Graphics API and GUI

Post by Brendan »

Hi,
Ready4Dis wrote:I understand the grey blob thing (of course you mean it could be a texture that says unavailable or anything else that you want), especially on a highly distributed system where loading a resource may be a slow process that you don't want to just sit and wait for. Loading straight from disk as part of the graphics API may have its merits though, besides the user being able to cheat by changing textures or shaders ;). It means that if you supply a system that knows how to load all types of file formats, then the application doesn't really need to worry about specifically loading an image; it just has to know where to find that image (and possibly check whether the image format is supported, or that can just be a response from the graphics driver saying the image isn't supported, cue grey blob). I still see a few issues with this of course; what about things like mip-mapping? Do you load the image through the graphics driver (which has to open it in RAM, then send it to VRAM), then ask the graphics driver for a copy (from VRAM -> application), then generate mip-map levels, and regenerate the texture, and upload it back into VRAM? Or do different types of mip-map generators need to be part of the driver and you send it as a request, or do you only allow apps to use mipmapped textures if the file format supports them?
Part of my project is to fix the "many different file formats" problem; where new file formats are constantly being created and none are ever really deprecated/removed, leading to a continually expanding burden on software and/or "application X doesn't support file format Y" compatibility problems. Mostly I'll be having a small set of standard file formats that are mandatory (combined with "file format converters" functionality built into the VFS that auto-convert files from legacy/deprecated file formats into the mandatory file formats).
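As a very rough sketch (every name here is invented), the VFS side of that could be little more than a table of converters keyed by source format; open a file in a legacy format and the matching converter runs before the caller ever sees the data:

Code: Select all

#include <stddef.h>
#include <string.h>

/* Hypothetical converter entry: turns one legacy format into the single
 * mandatory/native format for that kind of data (e.g. anything -> "image"). */
typedef struct {
    const char *source_format;                      /* e.g. "image/jpeg"     */
    int (*convert)(const void *in, size_t in_size,
                   void **out, size_t *out_size);   /* legacy -> native      */
} converter_t;

#define MAX_CONVERTERS 32
static converter_t converters[MAX_CONVERTERS];
static int converter_count;

int vfs_register_converter(const char *format,
                           int (*convert)(const void *, size_t, void **, size_t *))
{
    if (converter_count >= MAX_CONVERTERS)
        return -1;
    converters[converter_count++] = (converter_t){ format, convert };
    return 0;
}

/* Called by the VFS on open(): if the file isn't already in the native
 * format, find a converter for it and hand the caller the converted data.   */
int vfs_convert(const char *format, const void *in, size_t in_size,
                void **out, size_t *out_size)
{
    for (int i = 0; i < converter_count; i++)
        if (strcmp(converters[i].source_format, format) == 0)
            return converters[i].convert(in, in_size, out, out_size);
    return -1;                                       /* no converter found   */
}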

If the video driver wants to reduce a texture's quality (e.g. to make it fit in limited amount of VRAM) it can, and if the video driver wants to generate and use mip-maps it can. It's the video driver's responsibility to decide what the video driver wants to do. All normal software (applications, GUIs, games) only ever deal with the highest quality textures and have no reason to know or care what the video driver does with them.
Ready4Dis wrote:Brendan: How is your kernel coming along anyways, I know you have lots of plans and seeing as you're doing something so different, I can imagine it's going to take a while to get all the specifics worked out completely. I would be interested in checking it out when you get it to a point where it's at least partially usable. I've got a couple of test machines at the house (including an eee pc 900 netbook).
Heh. The plan was to define a "monitor description" file format (capable of handling curved monitors, colour space correction, 3D displays, etc); and then write boot code to convert the monitor's EDID into this file format (and I did a whole bunch of research to design the "monitor description" file format, and wrote out most of the specification). However; to convert the colour space data from what EDID provides (co-ords for CIE primaries) into something useful involves using complex matrix equations to generate a colour space conversion matrix; and my boot code is designed for "80486SX or later" and I can't even assume an FPU or floating point is supported. This mostly means that to achieve what I want I need to implement a whole pile of maths routines (e.g. half an arbitrary precision maths library) in my boot code before I can even start implementing the code that generates the colour space conversion matrix.
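For reference, the calculation itself looks like this in plain floating point (which is exactly what my boot code can't assume - the real version needs fixed-point/arbitrary-precision equivalents of all of it): build the primaries' XYZ columns from the EDID chromaticities, solve for the per-primary scales that make the white point come out right, and the scaled columns are the RGB to XYZ matrix.

Code: Select all

/* Chromaticity (x, y) as reported by EDID for each primary + white point.
 * Assumes no chromaticity has y == 0.                                      */
typedef struct { double x, y; } xy_t;

/* Invert a 3x3 matrix; returns 0 on success, -1 if singular. */
static int invert3(const double m[3][3], double inv[3][3])
{
    double det =
        m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1]) -
        m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0]) +
        m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    if (det == 0.0)
        return -1;
    inv[0][0] =  (m[1][1] * m[2][2] - m[1][2] * m[2][1]) / det;
    inv[0][1] = -(m[0][1] * m[2][2] - m[0][2] * m[2][1]) / det;
    inv[0][2] =  (m[0][1] * m[1][2] - m[0][2] * m[1][1]) / det;
    inv[1][0] = -(m[1][0] * m[2][2] - m[1][2] * m[2][0]) / det;
    inv[1][1] =  (m[0][0] * m[2][2] - m[0][2] * m[2][0]) / det;
    inv[1][2] = -(m[0][0] * m[1][2] - m[0][2] * m[1][0]) / det;
    inv[2][0] =  (m[1][0] * m[2][1] - m[1][1] * m[2][0]) / det;
    inv[2][1] = -(m[0][0] * m[2][1] - m[0][1] * m[2][0]) / det;
    inv[2][2] =  (m[0][0] * m[1][1] - m[0][1] * m[1][0]) / det;
    return 0;
}

/* Build the RGB -> XYZ matrix from the primaries' and white point's (x, y). */
int rgb_to_xyz_matrix(xy_t r, xy_t g, xy_t b, xy_t w, double m[3][3])
{
    /* Columns: XYZ of each primary at unit luminance (Y = 1). */
    double p[3][3] = {
        { r.x / r.y,               g.x / g.y,               b.x / b.y               },
        { 1.0,                     1.0,                     1.0                     },
        { (1.0 - r.x - r.y) / r.y, (1.0 - g.x - g.y) / g.y, (1.0 - b.x - b.y) / b.y },
    };
    double wxyz[3] = { w.x / w.y, 1.0, (1.0 - w.x - w.y) / w.y };

    /* Solve p * s = wxyz for the per-primary scale factors s. */
    double inv[3][3], s[3];
    if (invert3(p, inv) != 0)
        return -1;
    for (int i = 0; i < 3; i++)
        s[i] = inv[i][0] * wxyz[0] + inv[i][1] * wxyz[1] + inv[i][2] * wxyz[2];

    /* Each column of the final matrix is the primary's XYZ scaled by s. */
    for (int row = 0; row < 3; row++)
        for (int col = 0; col < 3; col++)
            m[row][col] = p[row][col] * s[col];
    return 0;
}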

Basically; things got too messy too quickly so I decided to take a break, and have been playing Gnomoria. ;)


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
SpyderTL
Member
Posts: 1074
Joined: Sun Sep 19, 2010 10:05 pm

Re: Graphics API and GUI

Post by SpyderTL »

Brendan wrote:Part of my project is to fix the "many different file formats" problem; where new file formats are constantly being created and none are ever really deprecated/removed, leading to a continually expanding burden on software and/or "application X doesn't support file format Y" compatibility problems. Mostly I'll be having a small set of standard file formats that are mandatory (combined with "file format converters" functionality built into the VFS that auto-convert files from legacy/deprecated file formats into the mandatory file formats).
I've got a similar plan, in that I plan on running all files through the OS before handing them over to the application. Ideally, this means that the application should only have one format (or just a few formats) to worry about. The OS will essentially have "drivers" for different file formats that will be responsible for converting them to a common object format.

I've got a long way to go, though...
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott
Ready4Dis
Member
Posts: 571
Joined: Sat Nov 18, 2006 9:11 am

Re: Graphics API and GUI

Post by Ready4Dis »

If the video driver wants to reduce a texture's quality (e.g. to make it fit in limited amount of VRAM) it can, and if the video driver wants to generate and use mip-maps it can. It's the video driver's responsibility to decide what the video driver wants to do. All normal software (applications, GUIs, games) only ever deal with the highest quality textures and have no reason to know or care what the video driver does with them.
I see; then is the user going to be able to set the graphics level based on the application? For example, one application has no problems running at super ultra awesome high quality, and another needs to run at low quality? I just think leaving too much up to the video driver is going to make things run slower than they need to. If you enable mip-mapping (via toggle, slider, whatever), what if the application only wanted certain things to use mip-maps, and others not to (just one example of course)?
Part of my project is to fix the "many different file formats" problem; where new file formats are constantly being created and none are ever really deprecated/removed, leading to a continually expanding burden on software and/or "application X doesn't support file format Y" compatibility problems. Mostly I'll be having a small set of standard file formats that are mandatory (combined with "file format converters" functionality built into the VFS that auto-convert files from legacy/deprecated file formats into the mandatory file formats).
Yes, I like this idea, but it's hard for some things unless you have a very clearly defined internal representation for all the 'formats'. By that I mean a clearly defined bitmap structure that any format can load/save to/from, a text format that supports all the advanced features (bold/italics/underline, coloring, hyperlinks, embedded images?, fonts, etc.). I do like that idea and plan on doing something very much like it in my own OS. I don't think each application needs to have its own JPEG loading routine or link to a library to load a file. It should just open it and get an image out, or tell the video driver to map a .jpeg into a texture without worrying about it.
Heh. The plan was to define a "monitor description" file format (capable of handling curved monitors, colour space correction, 3D displays, etc); and then write boot code to convert the monitor's EDID into this file format (and I did a whole bunch of research to design the "monitor description" file format, and wrote out most of the specification). However; to convert the colour space data from what EDID provides (co-ords for CIE primaries) into something useful involves using complex matrix equations to generate a colour space conversion matrix; and my boot code is designed for "80486SX or later" and I can't even assume an FPU or floating point is supported. This mostly means that to achieve what I want I need to implement a whole pile of maths routines (e.g. half an arbitrary precision maths library) in my boot code before I can even start implementing the code that generates the colour space conversion matrix.
I have written a few colour space conversion routines for a project I was working on. It supports HSV, YCbCr, and regular RGB. I had support for YUV but it's basically the same as YCbCr (redundant) so I removed it. I actually removed a few now that I am looking back: HSL was removed, YUV was removed, and conversion to CIE LUV and CIE LAB was removed... meh, oh well. Do graphics cards from the 486 era even support EDID? Is it really necessary to support that in the boot loader? If you ever want a hand with anything like that, I enjoy low-level driver/graphics stuff more than kernel development ;). I've written a 3D rendering engine on a 486SX (no FPU) and used tons of fixed-point math before. Although, my colour conversion routines weren't meant for real-time, so they are just floating point with no optimizations.
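For example, a BT.601-style RGB to YCbCr conversion is easy to do in integer-only fixed point (coefficients scaled by 256 here; just a sketch, not the routines from my project), which is the kind of thing that still runs fine on an FPU-less 486SX:

Code: Select all

#include <stdint.h>

static uint8_t clamp_u8(int v) { return (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v); }

/* Integer-only RGB -> YCbCr (BT.601, full range); all coefficients are the
 * usual 0.299/0.587/... values scaled by 256, and the chroma terms carry a
 * built-in +128 offset so the intermediate values never go negative.        */
void rgb_to_ycbcr(uint8_t r, uint8_t g, uint8_t b,
                  uint8_t *y, uint8_t *cb, uint8_t *cr)
{
    *y  = clamp_u8((  77 * r + 150 * g +  29 * b + 128) >> 8);
    *cb = clamp_u8((( -43 * r -  85 * g + 128 * b + 128 + (128 << 8)) >> 8));
    *cr = clamp_u8((( 128 * r - 107 * g -  21 * b + 128 + (128 << 8)) >> 8));
}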