Gigasoft wrote:
KemyLand wrote:
A single, simple question beats all this paragraph: where does the game run? On your OS, of course! If the OS doesn't refresh the screen at 60Hz, how could the game (maybe DOS-style VGA direct writes
Any application can update any portion of its window at any time just fine. Other windows are unaffected by this.
Let's debug these two programs:
Program 1:
- Drawing something at 60Hz... The PIT comes in! Going to Kernel!
- Finishes drawing.
Program 2:
- Drawing something at 60Hz... The PIT comes in! Going to Kernel!
- Finishes drawing.
Kernel (on PIT interrupt):
- ...
- Going to next process! If 1 was running, 2 runs; if 2 was running, 1 runs (a sketch of this switch follows below).
If there were several processes running, this is what the user thinks:
- Let's start a game! (Begins process 1)
- Let's start...
- Let's start a game! (Begins process 2)
- WTF?! Both games are flickering, each one visible for only about 1/10th of a second!
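To make that interleaving concrete, here is a minimal, hypothetical user-space simulation (all names are mine, not from any real kernel): two "games" each try to draw a full frame directly to the shared screen, but a simulated PIT tick preempts whichever one is running before it finishes, so rows from both end up mixed on screen.
Code: Select all
#include <cstdio>

// Hypothetical simulation: two "games" draw straight to one shared screen,
// and a simulated PIT tick preempts whichever one is running mid-frame.
struct Game {
    int id;
    int rowsDrawn;   // how far this game got into its current frame
};

const int kRowsPerFrame     = 8;  // tiny "frame" for the demo
const int kRowsPerTimeslice = 3;  // rows a game manages before the PIT fires

void drawSomeRows(Game &g)
{
    for (int i = 0; i < kRowsPerTimeslice && g.rowsDrawn < kRowsPerFrame; ++i)
        std::printf("screen row %d now shows pixels of game %d\n",
                    g.rowsDrawn++, g.id);
}

int main()
{
    Game games[2] = { {1, 0}, {2, 0} };
    int  current  = 0;

    // Each iteration plays the role of one PIT interrupt: the running game
    // is preempted, even in the middle of a frame, and the other one resumes.
    for (int tick = 0; tick < 6; ++tick) {
        drawSomeRows(games[current]);
        if (games[current].rowsDrawn == kRowsPerFrame)
            games[current].rowsDrawn = 0;     // frame finished, start the next
        current = (current + 1) % 2;          // round-robin: 1 -> 2 -> 1 -> ...
    }
    // The output shows rows belonging to game 1 and game 2 interleaved on the
    // same screen, which is exactly the flicker complained about above.
    return 0;
}
Nothing here depends on 60Hz or on the exact PIT rate; the point is only that a kernel which blindly time-slices two processes writing to the same framebuffer produces exactly this kind of mixed output.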
Gigasoft wrote:
KemyLand wrote:
A fatal error here. Try to move a window or the mouse and you get the same thing Windows does when you drag a window while the system is overloaded: the trace of the previous moment remains on screen. That's because you'll need to do other (unmentioned in your design) unnecessary extra computations in order to get everything right.
In the case of an overloaded system, I prefer the system to remain responsive rather than having to wait for it to redraw everything on the screen before I can do anything at all.
How do you think the OS you were writing this post on does things? The system won't remain any more responsive this way, because graphics are not the only work being done. 60Hz is just enough that the lag is unnoticeable on an overloaded system; obviously, if other things are being done, the lag will be noticed. You do not wait for it to do that! Do you remember that interrupts exist? Anyway, I'm not sure whether you are talking about an RTOS, because you're preferring responsiveness over performance.
Gigasoft wrote:
KemyLand wrote:
This design is theoretically possible, although it requires lots of FPU/MMX/SSE* instructions to calculate all the details, not to mention that you probably must be a genius to be able to design such algorithms. That overhead can easily eat up all of the system's performance.
Obviously, keeping track of where the windows are, what is visible and what must be redrawn is much faster than shoving millions of pixels around in the system repeatedly. There is no floating point arithmetic involved and the structures describing these regions are typically very small. They can basically be thought of as a list of rectangles. This does not require that all windows have to be rectangular. I could easily make windows with rounded corners if I wanted to.
Again, that's dirty and error-prone. You cannot simply do:
Code: Select all
class Window {
    // ...
    bool isThisWindowVisible;
    // ...
};
Because windows can overlap one another (3D appearance?), you can't just have a single variable. If one window overlaps another with a small X/Y misalignment, a visible region made of a vertical and/or horizontal slice appears. You would end up reserving a lot of dynamically allocated floats and doubles just for this. There's no floating-point arithmetic, you say? I don't know whether to laugh or cry. Do you at least remember that floating-point computation is essential for both 2D and 3D graphics? You can't tell me that FP computation is in any way faster than integer computation. I will admit one thing here: you should avoid redrawing minimized windows, but nothing more.
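To show why a single boolean per window is not enough, here is a minimal, hypothetical sketch (not anyone's posted design) of what tracking partial visibility actually involves: a window covered by another ends up with several visible sub-rectangles, each of which has to be remembered and redrawn on its own. Integer pixel coordinates are assumed here; whether that suffices in practice is exactly what is being argued in this thread.
Code: Select all
#include <algorithm>
#include <vector>

// Hypothetical sketch: a window's visible area as a list of rectangles
// instead of a single bool.
struct Rect {
    int x, y, w, h;
};

struct Window {
    Rect              bounds;        // where the window sits on screen
    std::vector<Rect> visibleParts;  // what is actually visible of it
};

// Subtract `cover` from `r`, returning the up-to-four rectangles of `r`
// that remain visible (top, bottom, left and right slices).
std::vector<Rect> subtract(const Rect &r, const Rect &cover)
{
    std::vector<Rect> out;
    int cx1 = std::max(r.x, cover.x);
    int cy1 = std::max(r.y, cover.y);
    int cx2 = std::min(r.x + r.w, cover.x + cover.w);
    int cy2 = std::min(r.y + r.h, cover.y + cover.h);
    if (cx1 >= cx2 || cy1 >= cy2) {        // no overlap: all of r stays visible
        out.push_back(r);
        return out;
    }
    if (cy1 > r.y)       out.push_back({r.x, r.y, r.w, cy1 - r.y});             // top slice
    if (cy2 < r.y + r.h) out.push_back({r.x, cy2, r.w, r.y + r.h - cy2});       // bottom slice
    if (cx1 > r.x)       out.push_back({r.x, cy1, cx1 - r.x, cy2 - cy1});       // left slice
    if (cx2 < r.x + r.w) out.push_back({cx2, cy1, r.x + r.w - cx2, cy2 - cy1}); // right slice
    return out;
}
A window offset a little underneath another one comes out of this with two or three slices still visible; the "vertical and/or horizontal slice" above is precisely one of these, and each of them has to be tracked and redrawn separately.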
Gigasoft wrote:
KemyLand wrote:
Remember that GUIs are not just square, pixelated windows. They involve several graphical effects. Maybe, someday, a graphical effect you add won't work, neither theoretically nor practically, with your design. A simple example? The Desktop Cube effect of Compiz.
For the Desktop Cube effect, one would simply replace all the top level windows with a large transparent window that manages the application windows in its own way. It would keep a separate buffer for each window, and redraw the composited view whenever one or more windows have changed. Then it would wait for a small amount of time before checking again. Transparent windows (and other effects, such as blurring) can be implemented by having the system draw everything that is in the transparent part of the window into a temporary surface which is passed to the application, which then draws its own UI on top.
I should reply here that you said "FP arithmetic isn't needed". What would you tell me for a 3D effect?! These are simple mathematical laws. You need matrices and vectors to do this kind of thing. The Cube effect transforms the graphical properties of the windows: during a cube switch, the windows are multiplied by several matrices to get a transformed version that fits graphically onto what appears to be a rotating 3D cube. You cannot keep separate buffers for each window, because they're transformed in really complex ways. You can't constantly malloc()/free() (goodbye, performance), nor depend on std::vector. Everything is unpredictable in this land!
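For reference, here is a minimal, hypothetical sketch of the per-corner math a cube switch implies. The names and the simple rotate-about-Y-plus-perspective-divide are mine, not Compiz's actual code (which uses full 4x4 matrices, usually on the GPU); the point is only that it is floating-point matrix/vector work applied to every window.
Code: Select all
#include <cmath>

// Hypothetical sketch: rotate a window corner around the Y axis during a
// cube switch, then project it back to 2D screen space.
struct Vec3 {
    float x, y, z;
};

// Rotation about the Y axis by `angle` radians (a 3x3 matrix multiply,
// written out by hand).
Vec3 rotateY(const Vec3 &p, float angle)
{
    float c = std::cos(angle), s = std::sin(angle);
    return {  c * p.x + s * p.z,
              p.y,
             -s * p.x + c * p.z };
}

// Simple perspective projection: the viewer sits at distance `d` in front of
// the screen, so corners that rotate away (larger z) shrink toward the center.
void projectCorner(const Vec3 &corner, float angle, float d,
                   float &screenX, float &screenY)
{
    Vec3  r     = rotateY(corner, angle);
    float scale = d / (d + r.z);
    screenX = r.x * scale;
    screenY = r.y * scale;
}
Every corner of every window goes through something like this on every frame of the animation, which is where the FPU/SSE work mentioned above comes from.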
Gigasoft wrote:
KemyLand wrote:
BTW, did you say you could update the screen asynchronously? Never mind! Some standards specify an exact frequency at which data must be transferred. HDMI, for instance.
That does not hamper the ability to transfer things into the display chip's RAM. If you are talking about avoiding tearing, one can simply delay drawing until the next blanking period.
No, it doesn't harm the ability to do an unnecessary partial copy, but it does harm your model. Here, you are forced to update the screen every X Hz, and that's against your design. Even VGA does this internally!
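As a concrete illustration of that fixed cadence, here is a minimal, hypothetical sketch of the classic VGA approach: poll Input Status Register 1 (port 0x3DA, vertical-retrace bit 3) and only copy the finished frame into video memory during the blanking period. The `inb` port helper and the buffer pointers are assumptions, not part of anyone's posted design.
Code: Select all
#include <cstdint>
#include <cstring>

// Assumed port-I/O helper (inline asm or an OS-specific wrapper); not shown.
uint8_t inb(uint16_t port);

const uint16_t kVgaInputStatus1 = 0x3DA; // bit 3 is set during vertical retrace

// Block until the next vertical retrace, i.e. the next blanking period of
// the display's fixed refresh cycle.
void waitForVerticalRetrace()
{
    while (inb(kVgaInputStatus1) & 0x08) { }    // let any current retrace finish
    while (!(inb(kVgaInputStatus1) & 0x08)) { } // then wait for the next one to begin
}

// Copy the finished back buffer into video memory during blanking, so the
// visible update is tied to the display's own refresh rate.
void presentFrame(const uint8_t *backBuffer, uint8_t *videoMemory, std::size_t size)
{
    waitForVerticalRetrace();
    std::memcpy(videoMemory, backBuffer, size);
}
Whether the copy comes from a compositor or from a single application, the transfer to the display itself still happens at the fixed rate the standard mandates.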