Re: Graphics API and GUI
Posted: Thu Oct 08, 2015 8:21 am
Hi,
I honestly can't see why you think it might be difficult. If I told you the graphics pipeline (for converting "3D space" into "2D texture") has 3 major stages, and last time the first stage took 1234 us and there were 100 objects but now there are 200 objects, do you think you could estimate how long the first stage might take this time? What if I said the third stage took 444 us last time and you knew it's only affected by number of pixels and isn't affected by number of objects - would you guess that if the number of pixels hasn't changed it might take 444 us again?
Ready4Dis wrote:
I've done a lot of graphics programming, before and after 3d accelerators were available, I still don't think it's easy to do from either standpoint.
Um.. I say it's fairly easy when the video driver does it (and hard when the game does it); and you say it's hard when the game does it (just ask game developers!)?
Sure, it'd be hard to calculate the exact amount of time it will take, but there's no need for that level of accuracy.
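For example (only a sketch - the "scales linearly with object count" and "depends on pixel count only" behaviour comes from the example above, and the function names are mine):
Code:
#include <stdint.h>

/* Estimate how long a pipeline stage might take this frame, from how long it
   took last frame and how its workload changed. Rough is fine - the estimate
   only needs to be good enough to decide how much detail/quality to attempt. */
static uint64_t estimate_stage1_us(uint64_t last_us, uint64_t last_objects, uint64_t now_objects)
{
    if (last_objects == 0) return last_us;
    return last_us * now_objects / last_objects;   /* 1234 us * 200 / 100 = 2468 us */
}

static uint64_t estimate_stage3_us(uint64_t last_us, uint64_t last_pixels, uint64_t now_pixels)
{
    if (last_pixels == 0) return last_us;
    return last_us * now_pixels / last_pixels;     /* same pixel count -> 444 us again */
}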
Ready4Dis wrote:
It doesn't make BSP redundant, a lot of BSP renderers can turn off Z checks due to their nature, which saves the GPU from having to do read backs (which are slow). Yes, that's the point of octrees, portals, BSPs, etc. To only render what it needs, but which one you use depends on the type of world/objects you're rendering.
Erm. Z-buffer doesn't stop you from rendering objects; but it's the last/final step for occlusion (and was only mentioned as it makes BSP redundant and is extremely common). Earlier steps prevent you from rendering objects.
For a very simple example; you can give each 3D object a "bounding sphere" (e.g. distance from origin to furthermost vertex) and use that to test if the object is outside the camera's left/right/top/bottom/foreground/background clipping planes and cull most objects extremely early in the pipeline. It doesn't take much to realise you can do the same for entire collections of objects (and collections of collections of objects, and ...).
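A minimal sketch of that first test (the vec3/plane types and the "plane normals point inward" convention are assumptions I've made for illustration):
Code:
#include <stdbool.h>

typedef struct { float x, y, z; } vec3;
typedef struct { vec3 n; float d; } plane;   /* dot(n, p) + d >= 0 means "inside"; n points inward */

/* Cull anything whose bounding sphere lies entirely outside one of the camera's
   6 clipping planes; everything else is "possibly visible" and stays in the pipeline. */
static bool sphere_outside_frustum(vec3 centre, float radius, const plane frustum[6])
{
    for (int i = 0; i < 6; i++) {
        float dist = frustum[i].n.x * centre.x
                   + frustum[i].n.y * centre.y
                   + frustum[i].n.z * centre.z
                   + frustum[i].d;
        if (dist < -radius)
            return true;    /* whole sphere is outside this plane - cull the object */
    }
    return false;           /* possibly visible (might still be occluded later) */
}
The same test works on a collection's bounding sphere, so whole groups of objects can be culled in one go.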
Of course typically there are other culling steps too, like culling/clipping triangles to the camera's clipping planes and back face culling, which happen later in the pipeline (not at the beginning of the pipeline, but still well before you get anywhere near Z-buffer).
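For the back face culling part, the usual test is something like this (a sketch only; counter-clockwise winding assumed, types illustrative):
Code:
#include <stdbool.h>

typedef struct { float x, y, z; } vec3;

static vec3 sub(vec3 a, vec3 b) { return (vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
static vec3 cross(vec3 a, vec3 b)
{
    return (vec3){ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Returns true if the triangle (v0, v1, v2) faces away from the camera at 'eye'
   and can be discarded before rasterisation. */
static bool is_back_face(vec3 v0, vec3 v1, vec3 v2, vec3 eye)
{
    vec3 normal = cross(sub(v1, v0), sub(v2, v0));   /* CCW winding assumed */
    return dot(normal, sub(v0, eye)) >= 0.0f;        /* facing away (or edge-on) */
}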
BSP ensures things are rendered in a specific order. The silly way to use BSP is to render everything in "back to front" order so that Z checks aren't needed (but everything rendered ends up doing expensive "texel lookup" instead even if/when it's overwritten later). The smart way to use BSP is to render everything in "front to back" order with Z checks; so that "texel lookup" can be skipped for everything that's occluded. Of course this assumes opaque polygons (and for anything transparent you probably want to do them "back to front" with Z checks after all of the opaque stuff is done, which doesn't fit well with either use of BSP).
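As a sketch of the "front to back" traversal (the node layout and the draw_polygons_with_z_test() helper are hypothetical):
Code:
#include <stddef.h>

typedef struct bsp_node {
    struct { float a, b, c, d; } split;      /* splitting plane: ax + by + cz + d = 0 */
    struct bsp_node *front, *back;
    /* ... polygons lying on/near this node's plane ... */
} bsp_node;

/* Visit the child on the camera's side of the splitting plane first, so nearer
   polygons are drawn first and Z checks reject occluded fragments before any
   texel lookups are wasted on them. */
static void render_front_to_back(const bsp_node *node, float cx, float cy, float cz)
{
    if (node == NULL)
        return;
    float side = node->split.a * cx + node->split.b * cy + node->split.c * cz + node->split.d;
    const bsp_node *near_child = (side >= 0.0f) ? node->front : node->back;
    const bsp_node *far_child  = (side >= 0.0f) ? node->back  : node->front;

    render_front_to_back(near_child, cx, cy, cz);
    /* draw_polygons_with_z_test(node);    hypothetical - draw this node's polygons */
    render_front_to_back(far_child, cx, cy, cz);
}
Swapping the two recursive calls gives the "back to front" order instead.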
Ready4Dis wrote:
Otherwise everyone would use the same exact thing. My point was, without these structures, how does the video driver know what is visible quick and in a hurry?
Why do you think the video driver needs to know what is visible in a hurry? In the early stages (including estimation) it only needs to know what is "possibly visible" in a hurry, and if the video driver thinks something is "possibly visible" and finds out that it's actually hidden behind something else later in the pipeline, then who cares? Worst case is the frame finishes a tiny fraction of a millisecond sooner, and maybe (but more likely not) something that it might have been able to draw with slightly higher quality is done with slightly worse quality. It's not like the user is going to notice or care.
Ready4Dis wrote:
Also, most BSP schemes use the BSP map and accompanying data in order to quickly do collision detection (like checking for the interaction of the player and an object). For it to work, it needs the transformed data and the BSP. If you push everything to the video driver, you still need your map for the game, meaning it's even slower still since you need to ask the video driver for information or keep two copies.
Physics and collision detection should have nothing at all to do with graphics in any way at all. The fact that game developers are incompetent morons that have consistently done it wrong does not change this.
Imagine a railroad track. A red train is moving at 100 m/s going from west to east and is 200 meters to the west of you. A blue train is moving at 50 m/s and is 100 meters to the east of you. Calculate when the trains will collide. This is a 1 dimensional problem (in that both trains are on the same linear track). To calculate when they will collide (and then where both trains will be when they collide) requires 2 dimensional calculations - essentially; it's the intersection of lines in 2D (where one of the dimensions is time).
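To put numbers on it (assuming the blue train is heading west, towards the red one): the red train's position is x = -200 + 100*t and the blue train's is x = 100 - 50*t; they collide when -200 + 100*t = 100 - 50*t, which gives t = 2 seconds and x = 0 (right where you're standing). The calculation lives in the "position and time" plane even though the trains only ever move along one axis.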
Now imagine a billiard table with a white ball and black ball that happen to be on a collision course. This is a 2D problem (in that both balls roll along the same 2D plane). To calculate when they will collide (and then where the balls will be when they collide) requires 3 dimensional calculations - essentially; it's the intersection of 2 lines in 3D (where one of the dimensions is time).
Finally, imagine a 3D game. This time it's a bullet and a racing car. To calculate when they will collide (and then where they will be when they collide) requires 4 dimensional calculations - essentially; it's the intersection of 2 lines in 4D (where one of the dimensions is time).
Note: I've simplified a lot and it's not "intersection of 2 lines" for anything more than very simple cases. More correctly; typically you're calculating the point in time where the distance between the circumferences of bounding circles (2D) or the surfaces of bounding spheres (3D) reaches zero as your first test, and then doing individual "extruded line" or "extruded polygon" intersection tests on individual lines/polygons after that. Suffice to say it is an "N+1 dimensions" problem, and not tied to a game tick or frame rate; and for performance reasons should not be using the game's graphics data (e.g. 100 polygons for a dog's face) and needs to use much simpler geometry (6 polygons for a dog's face); and for accuracy and performance reasons should support primitive shapes (cones, spheres, cubes, cylinders) directly so that (e.g.) a ball doesn't need to be broken up into polygons at all.
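As a sketch of that first test (bounding spheres moving at constant velocity; the types and function name are mine):
Code:
#include <math.h>
#include <stdbool.h>

typedef struct { double x, y, z; } vec3;

/* Find the time at which the gap between two moving bounding spheres first
   closes. With relative position p(t) = p0 + v*t this is |p(t)|^2 = (r1 + r2)^2,
   a quadratic in t - which is the "N+1 dimensions" part: time is just another
   axis to solve along, independent of any game tick or frame rate. */
static bool spheres_collide(vec3 p1, vec3 v1, double r1,
                            vec3 p2, vec3 v2, double r2, double *t_hit)
{
    vec3 p = { p1.x - p2.x, p1.y - p2.y, p1.z - p2.z };   /* relative position */
    vec3 v = { v1.x - v2.x, v1.y - v2.y, v1.z - v2.z };   /* relative velocity */
    double r = r1 + r2;
    double c = p.x*p.x + p.y*p.y + p.z*p.z - r*r;
    if (c <= 0.0) { *t_hit = 0.0; return true; }          /* already touching */
    double a = v.x*v.x + v.y*v.y + v.z*v.z;
    double b = 2.0 * (p.x*v.x + p.y*v.y + p.z*v.z);
    double disc = b*b - 4.0*a*c;
    if (a == 0.0 || b >= 0.0 || disc < 0.0)
        return false;                                     /* moving apart, or never get close enough */
    *t_hit = (-b - sqrt(disc)) / (2.0 * a);
    return true;
}
If the spheres never touch you're done without looking at a single polygon; if they do touch, that's when you move on to the finer "extruded line"/"extruded polygon" tests.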
In addition to this, I personally think that all games (including single player) need to be using a client/server model where the server and client/s are synchronised via time (e.g. starting/ending trajectories), and where physics is done on the server and graphics is done on the client/s (and where a client can crash and burn, and be restarted, without any loss of game data).
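By "synchronised via time" I mean something along these lines (field names and units are illustrative only): the server describes motion as trajectories, each client evaluates them for whatever time it's rendering, and a client that crashes and restarts just asks for the current trajectories again.
Code:
#include <stdint.h>

/* A trajectory the server sends to clients: where something started, when, and
   how it moves - rather than a stream of per-frame positions. */
typedef struct {
    uint64_t start_us;        /* game time the trajectory began */
    uint64_t end_us;          /* game time it stops being valid */
    float    start_pos[3];
    float    velocity[3];     /* metres per second, constant over the trajectory */
} trajectory;

/* Client side: work out where the object is at the time being rendered. */
static void position_at(const trajectory *t, uint64_t now_us, float out[3])
{
    uint64_t when = now_us;
    if (when < t->start_us) when = t->start_us;
    if (when > t->end_us)   when = t->end_us;
    float dt = (float)(when - t->start_us) / 1000000.0f;
    for (int i = 0; i < 3; i++)
        out[i] = t->start_pos[i] + t->velocity[i] * dt;
}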
Ready4Dis wrote:
I just meant the need for graphics mode for my boot log, not the need for the boot log. Of course I want to be able to see something if it fails, but it doesn't necessarily have to be graphical (especially if it happens before or during the initial graphics routines).
I don't use text mode, I use the firmware's text output functions (like "EFI_SIMPLE_TEXT_OUTPUT_PROTOCOL" on UEFI; that might or might not be using text mode). During boot the OS reaches a "point of no return" where firmware is discarded and the firmware's text output functions can no longer be used (e.g. "EFI_EXIT_BOOT_SERVICES" is used). Immediately before this point of no return I switch video card/s to graphics mode, so that the entire OS (if there's no native video driver and it's left with the generic frame buffer driver only) can use graphics; but this means that after the point of no return the boot code must also use graphics mode (up until the video driver/s are started). Note that (for me) this point of no return occurs before boot code has decided which micro-kernel (e.g. 32-bit/64-bit, with/without NUMA, etc) it should start.
For me; the OS will probably have to wait for network and authentication before it can become part of its cluster, before it can start loading its video driver from the distributed file system. There's also plenty of scope for "Oops, something went wrong during boot" where the OS never finishes booting and the user needs to know what failed (which is the primary reason to display the boot log during boot).
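Roughly what the "switch to graphics mode, then cross the point of no return" sequence looks like (a sketch only, assuming EDK2-style headers, with error handling and the choice of a sensible GOP mode omitted):
Code:
#include <Uefi.h>
#include <Protocol/GraphicsOutput.h>

/* Switch to a graphics mode while boot services still exist, then cross the
   point of no return. After ExitBootServices() the firmware's text output is
   gone and only the frame buffer reported by GOP remains usable. */
EFI_STATUS graphics_then_point_of_no_return(EFI_HANDLE image, EFI_SYSTEM_TABLE *st)
{
    EFI_GRAPHICS_OUTPUT_PROTOCOL *gop;
    EFI_GUID gop_guid = EFI_GRAPHICS_OUTPUT_PROTOCOL_GUID;
    st->BootServices->LocateProtocol(&gop_guid, NULL, (VOID **)&gop);
    gop->SetMode(gop, 0);    /* a real loader would pick a suitable mode, not just mode 0 */
    /* remember gop->Mode->FrameBufferBase and gop->Mode->Info here, for the
       generic frame buffer driver and the boot log after this point */

    UINTN map_size = 0, map_key, desc_size;
    UINT32 desc_ver;
    EFI_MEMORY_DESCRIPTOR *map = NULL;
    st->BootServices->GetMemoryMap(&map_size, map, &map_key, &desc_size, &desc_ver);
    map_size += 2 * desc_size;    /* the allocation below may grow the map slightly */
    st->BootServices->AllocatePool(EfiLoaderData, map_size, (VOID **)&map);
    st->BootServices->GetMemoryMap(&map_size, map, &map_key, &desc_size, &desc_ver);

    return st->BootServices->ExitBootServices(image, map_key);
}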
Cheers,
Brendan