Driver-driven display updates vs. today's poll-based systems
Posted: Fri Feb 02, 2018 2:34 pm
Hi all,
Apologies in advance if I am off topic here, but I have found it difficult to find a place to ask this that has an audience of people who could offer an informed opinion on it.
So, in both today's and yesterday's computer systems, we have HDMI (back in the day, VGA) or another signal interface between a graphics card and a display device, typically an LCD. As far as I understand, the graphics card has some control over when the contents of the display are "refreshed" (the display itself, not the video RAM, which is a separate matter), but we always talk about some applied refresh rate: the frequency at which the display in effect updates itself.
I am no electrical engineer, so I don't know whether it is typically the graphics card that exclusively drives the display, or whether it is the display and/or its HDMI (as an example) subsystem that automatically polls the graphics card at regular intervals for an update from the framebuffer. These details notwithstanding, I wonder:
Why, in this day and age, can't we switch over to a software-initiated display update mechanism (updating the display itself, not just the framebuffer!), supported at the hardware level all the way to the display? The application calls the display manager, the display manager calls the driver, and the driver executes a transaction that refreshes the display straight from video RAM. Does it have something to do with legacy code and a certain way of thinking on software developers' part?
To explain: we now have a variety of display applications, software that does wildly different things, from 3D games that need semi-regular updates of the world they render, to spreadsheets, word processors and text editors where nothing needs to happen unless the application has to update the document view, often in response to user action. For some applications there is no clear need for the display or the graphics card to drive a display update X times per second, indiscriminately refreshing it from video RAM.
Instead, we could imagine the graphics card driver exposing the new paradigm through a function that a privileged application (you typically don't want every user process to have a monopoly over the entire framebuffer, but that's not a crucial detail here) can call to signal the graphics card and the rest of the display subsystem that it wants the display refreshed, having already updated the framebuffer by that point. The hardware would then update the display from video RAM once, with the application using polling or an asynchronous callback to learn that the transaction has completed, so that it knows, for example, the earliest point at which it can issue another signal, and so that v-sync problems become a non-issue.
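To make that concrete, here is a minimal sketch in C of the kind of interface I have in mind. Every name in it is invented for illustration, and the "driver" is just a stub that completes immediately, standing in for real scan-out hardware:

/* Hypothetical push-driven display interface; all names are made up. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef void (*present_done_cb)(void *context);

/* Push the current framebuffer contents to the panel exactly once.
 * Returns false if a previous transaction is still in flight; otherwise
 * the callback fires once the panel has latched the frame. A real driver
 * would program the scan-out engine and invoke the callback from a
 * "transfer complete" interrupt; this stub completes immediately. */
static bool display_present(present_done_cb on_done, void *context)
{
    on_done(context);
    return true;
}

/* Example privileged client (e.g. a display manager) driving one update. */
static bool frame_on_screen = true;

static void frame_done(void *context)
{
    (void)context;
    frame_on_screen = true;   /* safe to touch the framebuffer again */
}

static void redraw_and_push(void)
{
    /* ... render into the framebuffer here ... */
    frame_on_screen = false;
    if (!display_present(frame_done, NULL))
        puts("previous transaction still pending, retry later");
}

int main(void)
{
    redraw_and_push();   /* one explicit update, no periodic refresh */
    return 0;
}

The point is simply that the client decides when a frame goes out and is told when it has gone out; there is no free-running refresh to synchronize against.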
This is thus an entirely push-driven paradigm that rests on client-initiated display update transactions, so to speak, eliminating the v-sync problem at the root and effectively giving us a variable refresh rate across the entire rendering pipeline, software and hardware.
Updates would thus always be initiated by the graphics card, on request from the driver and, in the case of a traditional OS kernel, indirectly by the privileged software calling said driver, typically a "display manager" (X.org on Linux, DWM on Windows, etc.). There would be no periodic polling, fetching and updating of the display by the display hardware, as is the case with the display systems we are used to currently.
Perhaps, and this is speculation from someone who isn't too familiar with the electronics behind current systems (LCD, HDMI and DVI technologies), we wouldn't have to waste power indiscriminately updating a display that does not need to be updated, or keep trying to solve the old problem of synchronizing framebuffer updates with the display's refresh cycle.
I hope I am making sense with this. With the advent of e-ink displays for certain kinds of computing applications, I imagine this is not that wild a proposition, if it holds water of course.
Imagine that your application, through the display manager tasked with multiplexing the display for a GUI (desktop environment), could signal when the portion of display contents allocated to it has been updated, whether that is once a second or 60 times a second, with the display manager driving the display based on this information and without useless display refreshes; a toy sketch of what I mean follows below. This seems to require the involvement of the display, the graphics card and the software architecture alike. Is there any merit in doing this? If not at the hardware level, then at least from a software engineering perspective?
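Here is that toy sketch, again in C with every name invented: a client reports only the rectangle it actually changed, and the display manager coalesces the damage and pushes one update whenever it decides to, however rarely that is. (A real compositor would track a proper damage region rather than a single bounding box.)

/* Toy sketch of damage reporting; all names are invented. */
#include <stdbool.h>
#include <stdio.h>

struct rect { int x, y, w, h; };

static struct rect pending_damage;
static bool has_damage = false;

/* Called by a client after it has updated its part of the framebuffer. */
static void damage_report(struct rect r)
{
    if (!has_damage) {
        pending_damage = r;
        has_damage = true;
        return;
    }
    /* Grow the pending rectangle to cover the new damage. */
    int x2  = pending_damage.x + pending_damage.w;
    int y2  = pending_damage.y + pending_damage.h;
    int rx2 = r.x + r.w, ry2 = r.y + r.h;
    if (r.x < pending_damage.x) pending_damage.x = r.x;
    if (r.y < pending_damage.y) pending_damage.y = r.y;
    pending_damage.w = (rx2 > x2 ? rx2 : x2) - pending_damage.x;
    pending_damage.h = (ry2 > y2 ? ry2 : y2) - pending_damage.y;
}

/* Called by the display manager whenever it decides to push: once a
 * second for a clock, 60 times a second for a game, never when idle. */
static void flush_to_display(void)
{
    if (!has_damage)
        return;   /* nothing changed, so no refresh at all */
    printf("push %dx%d region at (%d,%d) to the display\n",
           pending_damage.w, pending_damage.h,
           pending_damage.x, pending_damage.y);
    has_damage = false;
}

int main(void)
{
    damage_report((struct rect){ 10, 10, 200, 40 });   /* e.g. a text edit */
    damage_report((struct rect){ 10, 60, 200, 40 });   /* another edit */
    flush_to_display();   /* one explicit update covering both edits */
    flush_to_display();   /* no new damage, nothing is pushed */
    return 0;
}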