tjmonk15 wrote:
Maybe you could preface your posts like this with a "If you're looking for A correct way to do this" or something similar; that might make your views/posts more acceptable/approachable.

Maybe; but then maybe my intention is to get the original poster to think about the design of their OS and how a GUI would fit into it, rather than just doing the natural thing (implementing a GUI without considering the lower level layers, or the interfaces between the GUI and those layers, and ending up with some prehistoric pixel pounder).
tjmonk15 wrote:
Beyond that, for my own info:
Brendan wrote:
- Nothing prevents the video driver from doing HDR.
Except full-screen applications that wish to use cel shading instead.

There's no reason why you can't do HDR on cel shaded images.
Note 1: the amount of light that a monitor can put out is limited by its design (mostly by the strength of its back light). HDR is just a way to bypass that limitation - essentially, allowing "brighter than hardware can support" pixels, and then scaling the brightness down to the range the hardware can support, in a way that tricks people into believing the image actually is "brighter than technically possible" (by mimicking the effect of the human eye's iris). Basically, HDR is a way to bypass the hardware's pixel brightness limits, super-sampling is a way to bypass the hardware's resolution limits, and dithering is a way to bypass the hardware's "number of colours" limits.
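To make "scaling the brightness down" concrete, here's a rough sketch of one way a video driver might do it (Reinhard-style tone mapping with a global exposure value - just an illustration, not the only or best operator):

Code:
#include <math.h>

/* Sketch of "scale the brightness down" (Reinhard-style tone mapping).
   Linear HDR values may exceed 1.0 ("brighter than the hardware can emit");
   the mapping compresses them into the 0..1 range the monitor supports,
   then gamma encodes them for an 8 bit per channel frame buffer. */

static unsigned char tonemap_channel(float hdr, float exposure)
{
    float v = hdr * exposure;          /* mimic the iris: global exposure  */
    v = v / (1.0f + v);                /* Reinhard: 0..infinity -> 0..1    */
    v = powf(v, 1.0f / 2.2f);          /* simple gamma for the display     */
    return (unsigned char)(v * 255.0f + 0.5f);
}

void tonemap_frame(const float *hdr_rgb, unsigned char *ldr_rgb,
                   int pixels, float exposure)
{
    for (int i = 0; i < pixels * 3; i++) {
        ldr_rgb[i] = tonemap_channel(hdr_rgb[i], exposure);
    }
}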
Note 2: the other major limitation of current hardware is frame rate. For example, I'm sure we've all seen videos of a moving car where the wheels either look like they aren't rotating or look like they're rotating in the wrong direction. The solution to this problem is motion blur. However, motion blur is either extremely complex and expensive (e.g. keeping track of each pixel's trajectory and drawing a (not necessarily straight) line from where the pixel was to where it is now, instead of drawing individual pixels) or extremely expensive and simple (e.g. the brute force approach of generating "n sub-frames" and merging them to form a frame).
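For the brute force approach the "merging" is just averaging; a sketch (where "render_scene_at()" is a placeholder for whatever the renderer's "draw the scene at time t" entry point happens to be):

Code:
/* Brute force motion blur: render n sub-frames spread across the frame's
   time interval and average them. */

void render_motion_blurred_frame(float *frame_rgb, float *subframe_rgb,
                                 int pixels, int n_subframes,
                                 double frame_start, double frame_length,
                                 void (*render_scene_at)(double t, float *out))
{
    for (int i = 0; i < pixels * 3; i++) {
        frame_rgb[i] = 0.0f;
    }
    for (int s = 0; s < n_subframes; s++) {
        /* sample at the middle of each sub-frame's slice of time */
        double t = frame_start + frame_length * (s + 0.5) / n_subframes;
        render_scene_at(t, subframe_rgb);
        for (int i = 0; i < pixels * 3; i++) {
            frame_rgb[i] += subframe_rgb[i] / n_subframes;
        }
    }
}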
tjmonk15 wrote:
Brendan wrote:
- Nothing prevents the video driver from rendering a frame in a higher resolution and then scaling down, to improve the perceived resolution. Note: This could be extremely advanced - e.g. taking into account the physical properties of the screen.
Except full screen applications that couldn't possibly render in real time at a higher resolution than they specify. (That resolution should be user-defined obviously, and be chosen from a list of resolutions that the user's monitor supports at all times)

In general there are 2 alternatives:
- "Fixed detail", where frame rate varies as a consequence of performance and scene complexity
- "Fixed frame rate", where detail varies as a consequence of performance and scene complexity
For the second alternative you need to estimate how much work you're able to do in a fixed amount of time (including determining how much of the data you cached last time can be reused to avoid doing the work again) and use that estimate to vary things (like the "intermediate resolution", where the "too small/too distant" cut-off points are, which polygons get textures and which get solid colours, whether to use "slightly changed but maybe close enough" data cached from the last frame, etc.). It's not something that belongs on the application side of the "graphics device abstraction" (unless that "graphics device abstraction" is a leaky abstraction that leaks so badly that you're better off not having a video driver in the first place).
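As a sketch of that estimation, the simplest possible version is a feedback loop that measures how long the last frame took and nudges one quality knob (here, just an "intermediate resolution" scale - a real video driver would juggle many more settings than this):

Code:
/* Crude "fixed frame rate" controller: measure how long the last frame took
   and adjust the intermediate resolution scale so the next frame should fit
   in the budget.  A real driver would also vary cut-off distances, texture
   vs. solid colour decisions, how much cached data to reuse, etc. */

typedef struct {
    double budget_ms;      /* e.g. 16.6 ms for 60 frames per second      */
    double scale;          /* intermediate resolution scale, 0.25 .. 2.0 */
} detail_controller;

void detail_update(detail_controller *c, double last_frame_ms)
{
    if (last_frame_ms > c->budget_ms * 0.95) {
        c->scale *= 0.90;              /* too slow: reduce detail          */
    } else if (last_frame_ms < c->budget_ms * 0.75) {
        c->scale *= 1.05;              /* plenty of headroom: raise detail */
    }
    if (c->scale < 0.25) c->scale = 0.25;
    if (c->scale > 2.00) c->scale = 2.00;
}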
Basically, applications/GUIs should send a "description of scene" 60 times per second without caring about how it's rendered. Powerful hardware might render it at 60 frames per second with extremely high quality, and crappy hardware might render it at 60 frames per second with extremely low quality, but the applications/GUIs shouldn't need to know or care which. It should also be possible (without the applications/GUIs knowing or caring) to send the raw "description of scene 60 times per second" data to a file and then spend an hour generating each frame, to produce an insanely high quality/photo-realistic video; or to send the same "description of scene" to a pool of 100 computers, get each of those computers to render 1% of the screen, and combine the results from each computer before displaying the frame in real time; or to do any/all of the above and display the result on a 20*10 grid of 200 monitors, where each monitor is connected to one of 100 completely different video cards (some ATI, some Intel, some NVidia) and where each monitor may be using one of many different video modes.
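To give an idea of what a "description of scene" might look like (the exact format is up to whoever designs the video driver interface - these structures are only illustrative):

Code:
/* Illustrative "description of scene" an application/GUI might send to the
   video driver 60 times per second.  The driver decides how (and at what
   quality) it gets rendered; the application never touches pixels. */

typedef struct {
    float    position[3];      /* where the object is in the scene          */
    float    orientation[4];   /* quaternion                                */
    unsigned mesh_handle;      /* mesh previously uploaded to the driver    */
    unsigned material_handle;  /* material previously uploaded to the driver */
} scene_object;

typedef struct {
    double        timestamp;           /* which moment this frame describes */
    float         camera_position[3];
    float         camera_target[3];
    unsigned      object_count;
    scene_object *objects;             /* list of objects, not pixels       */
} scene_description;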
Cheers,
Brendan