Remember the thread about forcing developers versus making it the easiest approach?

mystran wrote: Random ideas that I just got, but haven't really considered the implications of yet:
It's relatively easy to prevent applications from creating extra event-loops and from drawing outside the repaint handlers, by API design. But can't this be extended to other stuff as well...
...like letting threads have a tasktype attribute, which is set when the thread is created, and then restricting different APIs to different tasktypes. Any single thread can only have a single tasktype, so you can't put stuff that belongs in different threads into the same thread.
Namely, restrict all GUI operations to a tasktype "GUI". Restrict all audio operations to a tasktype "multi-media" (which is also allowed access to streaming video overlays?). Then restrict all blocking IO to a tasktype "IO".
There would actually be two benefits: nobody would be able to do IO from a GUI thread, or audio from an IO thread. But even more importantly, the system scheduler could make more intelligent decisions depending on the type of each thread's functionality... and restricting APIs (forcibly) to certain tasktypes would force application writers to provide this information...
Such an evil plan...
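To make the tasktype idea concrete, here is a minimal C sketch of what such an interface could look like. Every name in it (the tasktype enum, the error code, the gated call) is hypothetical, not any real OS API:

/* sketch.c -- minimal sketch of the "tasktype" idea; nothing here is a
 * real OS interface. */
#include <stdio.h>

typedef enum { TASK_GUI, TASK_MEDIA, TASK_IO } tasktype_t;

#define E_WRONG_TASKTYPE (-1)

/* In a real kernel this would live in the thread control block, set once
 * at thread creation; a thread-local variable stands in for it here. */
static _Thread_local tasktype_t current_tasktype = TASK_GUI;

/* Every restricted API family would start with the same gate. */
static int require_tasktype(tasktype_t needed)
{
    return current_tasktype == needed ? 0 : E_WRONG_TASKTYPE;
}

int io_read_blocking(int fd, void *buf, int len)
{
    if (require_tasktype(TASK_IO) != 0)
        return E_WRONG_TASKTYPE;  /* fail fast: blocking IO from a GUI thread */
    /* ... the actual blocking read would go here ... */
    (void)fd; (void)buf; (void)len;
    return 0;
}

int main(void)
{
    char buf[16];
    /* This thread was created as TASK_GUI, so blocking IO is refused. */
    if (io_read_blocking(0, buf, sizeof buf) == E_WRONG_TASKTYPE)
        printf("blocking IO refused in a GUI thread\n");
    return 0;
}

The hard failure is the point: a call made from the wrong kind of thread surfaces immediately as an error instead of as an occasional freeze.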
Actually...
No... the problem you run into is that you prevent the programmer from easily serializing things happening in their code, such as posting a sound and then showing a picture, in that order. You now need some sort of signalling mechanism between threads.
Designing the strategy at a global scale will fail for all but one case compared to designing at a fine-grained scale. Unless, of course, you can assume your programmers are idiots.
Since you yourself will inevitably be programming your code, I must say that you are probably no more and no less competent when you write your programs than when you write your OS. So try to enable yourself rather than restrict.
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.
- C. A. R. Hoare
Avarok wrote: No... the problem you run into is that you prevent the programmer from easily serializing things happening in their code, such as posting a sound and then showing a picture, in that order. You now need some sort of signalling mechanism between threads.

Playing a sound should not be a synchronous operation anyway. Streaming audio (that is, reading a wave file and dumping it into a buffer) is what needs to be in a separate thread. If you just tell the system to "please play this effect for me", then it's not really an "audio operation" at all.
Anyway, such a "signalling mechanism" is the basis of all proper GUI code: it's called events. Any library which doesn't allow user-defined events, or prevents sending events from one thread to another thread's event queue, is just plain impossible to write responsive GUIs with. The GUI thread can't block (on anything but the event loop), so the only way to notify it that something happened in the rest of the application is to send an event. So the signalling mechanism is already there:
- ask system to play a sound
- when an event arrives telling that the sound was played, display a picture
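In code, that ordering might look something like the sketch below. Every function and event name here is hypothetical, standing in for whatever the GUI library actually provides:

/* Sketch of the event-based ordering; the helpers are only declared,
 * since they stand in for a hypothetical GUI/audio library. */
typedef struct { int type; int window; } event_t;
enum { EV_PAINT, EV_SOUND_DONE };   /* EV_SOUND_DONE is a user-defined event */

event_t wait_event(void);                         /* blocks: the only allowed wait */
void audio_play_async(const char *file, int ev);  /* posts ev when playback ends   */
void show_picture(const char *file);
void repaint(int window);

void gui_thread_main(void)
{
    /* Fire-and-forget: ask the system to play; it posts EV_SOUND_DONE later. */
    audio_play_async("click.wav", EV_SOUND_DONE);

    for (;;) {
        event_t ev = wait_event();
        switch (ev.type) {
        case EV_SOUND_DONE:
            show_picture("result.png");  /* strictly after the sound finishes */
            break;
        case EV_PAINT:
            repaint(ev.window);
            break;
        }
    }
}

The serialization Avarok asked for is still there; it has just moved from "block until done" to "react to the completion event".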
Avarok wrote: Since you yourself will inevitably be programming your code, I must say that you are probably no more and no less competent when you write your programs than when you write your OS. So try to enable yourself rather than restrict.

Well, in this case I actually see some enforcement as enabling: if you get a fatal exception when you try to use a certain API from a certain type of thread, it's easy to notice when you make a mistake. I hate nothing as much as having my code freeze, then discovering that some supposedly non-blocking call can indeed, in some situation, take ages and is therefore in the wrong thread.
The real problem with goto is not with the control transfer, but with environments. Properly tail-recursive closures get both right.
mystran wrote: Every other programmer seems to think that when the application wants to draw, it asks the system for a handle to some graphics context, draws in there, and everything is fine. Which of course is a seriously stupid thing to do in general... The solution, then, involves delaying the drawing in the form of window invalidation.

To some extent I agree with Avarok, but mystran made good points. However, the one big problem I see is when your abstracted GUI device context points to something other than a screen. Like a printer. Printers do not have windows to invalidate (or even events, to a large extent), but you typically do need to draw on them in exactly the way you would on a screen device context. mystran's mechanisms seem well adapted to creating efficient screen drawing, but would involve painful contortions to do printer drawing. It seems a much bigger trick to efficiently handle both, without doing painful contortions for either.
bewing wrote: However, the one big problem I see is when your abstracted GUI device context points to something other than a screen. Like a printer. [...]

That sounds like a conclusion: a printer isn't a screen and vice versa. A screen is a printer of sorts, though; it allows visual output on a limited area designed to contain information. They are still too different to unify: screens commonly have visuals that allow transformations (scroll, mouseover, highlight) and printers do not.
This is like inheritance with a square and a rectangle. They appear related but aren't in the context of your problem.
bewing wrote: Printers do not have windows to invalidate (or even events, to a large extent), but you typically do need to draw on them in exactly the way you would on a screen device context. [...]

I don't see why anyone would want to print GUI elements (which is what windows on screen should represent). Sure, you might want to print a document, but it's not THAT hard to have a document which knows how to render itself; then a window which knows how to put itself on the screen and appear responsive, handling invalidations and such; and then a menu entry which starts a printing thread, which asks the document to render itself just like the window would, but for the printer.
An added benefit is that you get background printing essentially for free (well, if you don't want it, just call the printing methods directly, but that sucks).
Anyway, printing and screen display are different. Don't forget the resolution difference: around ~100 dpi for a screen vs. 600 dpi for an average printer. Yet dumping that many pixels to the printer is just stupid, so you mostly let the printer do the rasterization from vectors, unless you need to print an image that happens to be a bitmap to start with, or have a really, really stupid printer (in which case the drivers will rasterize for you anyway). On the other hand, for the screen you generally need some anti-aliasing measures, which is usually not that great an idea for a printer.
So ... if enforcing proper invalidation processing also prevents people from drawing to screen and printer with the same codepath, I'd say that's just another Good Thing.
Oh, and yeah, I know there are libraries and APIs that let you draw vector data to the screen directly, and handle the printer/screen differences for you, but even then I don't see why you can't put the drawing logic into a separate function that you call from both your window draw-event code and your printing code. Such code is still logically part of the document, not of a window.
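As a rough illustration of that split, here's a sketch with invented names: the drawing logic lives with the document and takes a target context, and both the paint handler and the printing thread call the same function.

/* The document owns the drawing logic; window paint and printing both
 * call it with their own context. All names are hypothetical. */
typedef struct gfx_context gfx_context_t;   /* screen- or printer-backed */
typedef struct document document_t;

/* Declared here; defined wherever the document's drawing code lives. */
void document_render(const document_t *doc, gfx_context_t *gc);
gfx_context_t *printer_open_context(const char *printer_name);
void printer_close_context(gfx_context_t *gc);

/* Window side: runs in the GUI thread, from the paint/invalidate event. */
void on_paint_event(document_t *doc, gfx_context_t *window_gc)
{
    document_render(doc, window_gc);
}

/* Printer side: runs in its own thread, so printing is background work. */
void printing_thread_main(document_t *doc)
{
    gfx_context_t *pgc = printer_open_context("default");
    document_render(doc, pgc);
    printer_close_context(pgc);   /* flush and submit the job */
}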
The real problem with goto is not with the control transfer, but with environments. Properly tail-recursive closures get both right.
This whole render_for_printer / render_for_screen business screams for something nice, round and nifty:
the strategy pattern
stay safe.
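In C the strategy pattern here boils down to picking a function pointer per output target. A toy sketch, names invented:

/* Pick the render strategy once; the rest of the code never branches on
 * the target again. */
typedef struct document document_t;
typedef void (*render_fn)(const document_t *doc);

void render_for_screen(const document_t *doc);   /* raster, anti-aliased   */
void render_for_printer(const document_t *doc);  /* vectors at printer DPI */

void output_document(const document_t *doc, int to_printer)
{
    render_fn render = to_printer ? render_for_printer : render_for_screen;
    render(doc);   /* the strategy hides the screen/printer difference */
}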
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image
In my opinion, the standard method of triggering redraw methods on rectangles of the screen, and having each program define its own redraw, is begging for quirks, bugs, lag, and latency.
Doing this means that you're performing IPC, waiting for a context switch, and calling an event handler, then relying on user-defined code to deal with something that the shell/GUI caused, is most aware of, and should be able to handle itself.
Why should each window owner have to care if another window gets dragged on top?
If each program has a framebuffer of their window and shares that memory with the shell, what exactly is the problem? The shell remaps pages into designated video memory, copies along the edge, and you're done.
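A rough sketch of that shared-buffer scheme, with a made-up interface; the client's drawing side and the shell's compositing loop:

#include <stdint.h>
#include <string.h>

/* Each client draws into a buffer it shares with the shell; the shell
 * composites buffers onto the screen without asking anyone to repaint. */
typedef struct {
    uint32_t *pixels;        /* shared mapping, one per window */
    int       w, h, pitch;   /* pitch in pixels, not bytes     */
} window_buf_t;

/* Client side: draw whenever convenient; overlap is not its problem. */
void client_fill(window_buf_t *wb, uint32_t color)
{
    for (int y = 0; y < wb->h; y++)
        for (int x = 0; x < wb->w; x++)
            wb->pixels[y * wb->pitch + x] = color;
}

/* Shell side: blit back-to-front; dragging a window on top just changes
 * the blit order, and no client ever sees an event for it. */
void compose(uint32_t *screen, int screen_pitch,
             window_buf_t *win[], const int xs[], const int ys[], int n)
{
    for (int i = 0; i < n; i++)
        for (int y = 0; y < win[i]->h; y++)
            memcpy(&screen[(ys[i] + y) * screen_pitch + xs[i]],
                   &win[i]->pixels[y * win[i]->pitch],
                   (size_t)win[i]->w * sizeof(uint32_t));
}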
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.
- C. A. R. Hoare
Candy wrote: That sounds like a conclusion: a printer isn't a screen and vice versa.

One of my points, though, was that if you are going to all this effort to abstract the device context, then isn't the whole point of abstracting something to make it cover MORE possible hardware outputs, not fewer? If you are going to restrict "screen drawing" functions to ONLY work on screens, then you might as well just be creating a HAL, and not an abstracted device context at all.
My other, more amorphous point was that I was trying not to restrict things to a pure printer/screen dichotomy. Not that I can think of any other current devices that have font glyphs drawn on them and that aren't like screens or printers ... which is why I used "a printer" as the example. I was simply saying that if you want to abstract a GUI, I had a feeling that mystran's abstraction was not abstract enough to handle multi-touch 3D holographic smellivision.
Avarok wrote: If each program has a framebuffer of their window and shares that memory with the shell, what exactly is the problem? The shell remaps pages into designated video memory, copies along the edge, and you're done.

That's actually how modern GUIs work (compositing X11, Aero, the OSX thingie), except they add minor complications to the soup by supporting some accelerated fun.
But yeah, the "content lost" invalidations were designed when it would have been prohibitively expensive to keep a backbuffer in memory for each window. That said, every other reason for invalidation is still valid (window resize, content change, having to throw the buffer away because of running out of video memory)...
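Those remaining causes might reach the client as something like this (window_t and the helper functions are invented for the sketch):

/* The invalidation causes that survive per-window backbuffers. */
typedef struct window window_t;

enum invalidate_reason {
    INV_RESIZE,       /* window size changed: old buffer dimensions stale */
    INV_CONTENT,      /* the application changed what it wants to show    */
    INV_BUFFER_LOST   /* compositor dropped the buffer, e.g. low VRAM     */
};

void window_realloc_buffer(window_t *win);   /* get a fresh shared buffer */
void redraw_contents(window_t *win);         /* application's draw code   */

void on_invalidate(window_t *win, enum invalidate_reason why)
{
    if (why == INV_RESIZE)
        window_realloc_buffer(win);   /* dimensions changed, remap first */
    redraw_contents(win);             /* in every case, paint again      */
}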
The real problem with goto is not with the control transfer, but with environments. Properly tail-recursive closures get both right.