Re: Concise Way to Describe Colour Spaces
Posted: Mon Jul 27, 2015 2:30 am
And how are you making that dynamic choice without even knowing that the texture compresses reasonably? (Especially considering that your average texture takes around a couple of seconds of CPU time to compress, so you can't exactly try each one and then use metrics to pick.)

Brendan wrote:
Hi,
Why make one static choice when you're able to make choices dynamically (e.g. depending on the specific video card's capabilities, and whether the texture is being used for something too far away for details to make any difference or for something very close where details are very important, and "usage history" from previous frames, and estimates of how much memory bandwidth you've got to spare, etc.)?

Owen wrote:
How does your driver know whether a texture is a suitable target for texture compression using one of DXT1, DXT3, DXT5, ETC1, ETC2, PVRTC or ASTC? Note: each of these texture formats is lossy and therefore has its own tradeoffs, and some of them will make certain things look terrible. On the other hand, if you don't use them, you'll throw away a bunch of performance (yes, reading from compressed textures is faster than uncompressed) and at a minimum quadruple your memory bandwidth and the size of every texture (from 8bpp to 32bpp).

Brendan wrote:
When it's time to render a scene, the format the data is in is whatever format the video driver felt like converting the data into after it loaded the data from files. The data in files (textures, meshes, whatever) may or may not be optimal, but "file format standardisation" is far more important.
Yes; for some things the application may want to provide the data itself (rather than telling the video driver which file contains the data); but that changes nothing - the video driver still converts the data into whatever format it feels like when the data is uploaded to the video driver (and not when a rendering is being done), and "message protocol standardisation" is far more important.
I'm pretty sure that in any situation where I'd be playing a game on my phone, I'd not be in range of any of my own machines to use as a render server (because if I was, why am I using my phone?), and given that I'm away from my network I'd rather you didn't use my precious few megabytes of mobile data sending video data back and forth (at unacceptably high latency, anyway).

Brendan wrote:
If you're running a distributed OS then it's because you've got a network of multiple computers and not a single device (and ironically, about half of the research into "distributed real-time rendering" is specifically for mobile phones, because they're too under-powered and have to offload some rendering to a remote machine just to get acceptable results). With this in mind I'd be very tempted to just use the mobile phone as a dumb terminal and run its GUI, apps and/or games plus most of its rendering on a server.

Owen wrote:
So, if you're abstracting anything and your driver really has no idea (because let's face it, your driver doesn't contain a human eye which can accurately identify compression aberrations), let's consider the hypothetical future situation that your OS is running on my mobile phone. You just quadrupled the size of every texture and massively dropped the texture cache residency. Also, because of this, the memory bus is running at 100% utilization and burning 1 W, or one third of SoC power (yes, that really is how much power the memory interface on an SoC will burn), so now the CPU and GPU are required to downclock due to TDP limitations.
Of course if there isn't a suitable computer on the network (e.g. you took the mobile phone with you to lunch and you're out of wifi range) then the video driver will reduce quality to sustain frame rate and you'll probably just be seeing fuzzy grey blobs lurching around the screen for anything that changes often.
Of course, my phone is currently capable of much more than lurching gray blobs.
Of course, everything is 3D, from the smallest little widget up, on modern platforms. Have you ever looked at how the Android or iOS UIs are implemented? On top of OpenGL (er, probably Metal in the iOS case these days). Does the developer have to know about this? Not really.

Brendan wrote:
It's definitely more desirable than forcing mountains of hassles onto every application developer and forcing compatibility problems on users. Of course this doesn't mean it's a desirable situation; it's just a "least worst" situation.

Owen wrote:
Is that a desirable situation?
To put this into perspective; think about a "Hello World" application. For plain text only, a "Hello World" application is about 3 lines of C. For 2D it gets more complicated and it becomes something like (for Windows) the ~80 lines of code shown here. For 3D with OpenGL 1.0 the complexity involved becomes massive (starting with implementing a font engine(!)) and you end up relying on libraries to cope; and for modern OpenGL (with shaders) the complexity involved doesn't get any better. Now imagine what it's going to be like for software developers (especially beginners attempting to learn) when everything is 3D from the smallest little widget all the way up.
On the other hand, implementing a UI with a generalized 3D engine sounds like hell (and indeed, every 3D engine I've ever worked with either comes with a specialized UI library or gives you the option of several to plug in).
Your methodology sounds not too different from tile-based rendering.

Brendan wrote:
Most existing renderers use "draw everything with Z-buffer tests". I'm not. I'm using "overlapping triangle depth culling", where nothing is drawn until all the occlusion, lighting, etc. is done, and there is no Z buffer. Because it's different to existing stuff it has different advantages and different disadvantages, and requires different methods.

Owen wrote:
Is your driver capable of figuring out the correct order to draw things for a given scene? (Note: there is no optimum for every scene. If I'm drawing a race track, I want to draw sequential track segments in closest-to-furthest track order for maximum occlusion. If I'm drawing a corridor shooter, then I want to use the visibility information I pre-baked at map compile time.)
Does your driver have any clue whether it can pull tricks like screen-space ambient occlusion, which have no basis in reality but subjectively work? Can it pull tricks like screen-space reflection where suitable to mostly simulate reflections without having to redraw the scene twice?
You may call tricks like screen-space reflection "broken". I'd say from experience that it's subjectively much better than no reflection at all (of course, in some situations it would be broken and you'd not want it, so in those situations you'd want to let the artist turn it off).

Brendan wrote:
The "overlapping triangles" stuff is done in closest-to-furthest order because nothing else makes sense. Lighting is done very differently, and screen-space ambient occlusion isn't necessary.
Reflection is an unsolved problem - I haven't found an acceptable method yet (only "fast" methods that are dodgy/fragile/broken and give unacceptable/incorrect results, and "correct" methods that are too expensive unless you're able to throw multiple GPUs at it). My plan here is to use an expensive method that gives correct results, but is affected by the "fixed frame rate, variable quality" rule (where quality is reduced if/when there isn't enough time).
I'd be quite interested to see a picture of this scene which is one third ray cast, one third ray traced and one third rasterized...

Brendan wrote:
Of course all of this is on the "lower level" side of the video interface's abstraction; and you can have 5 completely different renderers, all (internally) using 5 completely different methods, that are all cooperating to generate (different parts of) the same frame. The exact details of any specific renderer are mostly irrelevant for OS design or where GUI/game/applications are concerned. It's not like existing systems where you're intimately tied to a specific renderer with specific techniques, where if you want to change to a radically different renderer (e.g. "cast rays from light sources") you're completely screwed and have to rewrite every game that's ever been written.
Of course (sadly), it's far too easy for people who have never actually implemented realtime systems themselves (who only ever theorize new ideas and never implement them) to ignore the massive practical advantages of existing systems and only focus on the disadvantages.

Brendan wrote:
Of course (sadly), it's far too easy for people who have never invented anything themselves (who only ever repeat ideas that someone else invented) to ignore massive advantages of anything different and only focus on the disadvantages that are far less important.