Window Manager specifics

jal
Member
Posts: 1385
Joined: Wed Oct 31, 2007 9:09 am

Post by jal »

zaleschiemilgabriel wrote:With 3D hardware support those buffers are in video memory, so it's fast.
Not necessarily. They're textures, and just like textures in games, they can get uploaded to vidmem. When making changes, you must change the texture, whether it's in sysmem or vidmem. Possibly (but I'm no gfx developer, so I may be wrong), it's even better to change it in sysmem (especially when you also have to read memory, since reading vidmem is very slow) and then blit the entire thing to vidmem (letting the card do that via DMA, of course).
That's what I was trying to say. Go with buffers if you plan on using 3d hardware.
Always go with buffers, unless you're short on memory (in embedded systems and the like, but you won't have that many windows in that case anyway). The whole update-rectangle direct drawing thing is as legacy as legacy can get, from the days when Win 3.1 had to run in 2MB of main memory with 128KB of non-accelerated vidmem.
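To make the per-window buffer approach concrete, here is a minimal sketch: the application draws into a system-memory backbuffer, and the whole thing is blitted to video memory in one go. All names here are hypothetical placeholders, not a real driver API:

[code]
#include <stdint.h>

struct window {
    int       width, height;
    uint32_t *backbuffer;    /* system-memory pixel buffer, 32bpp */
};

/* Applications draw into the system-memory backbuffer, so slow
 * vidmem reads are never needed... */
static void draw_pixel(struct window *w, int x, int y, uint32_t color)
{
    if (x >= 0 && x < w->width && y >= 0 && y < w->height)
        w->backbuffer[y * w->width + x] = color;
}

/* ...and the whole buffer is uploaded to video memory in one blit.
 * vidmem_blit() stands in for whatever upload/DMA primitive the
 * video driver actually exposes. */
extern void vidmem_blit(const uint32_t *src, int x, int y, int w, int h);

static void present(const struct window *w, int screen_x, int screen_y)
{
    vidmem_blit(w->backbuffer, screen_x, screen_y, w->width, w->height);
}
[/code]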


JAL
zaleschiemilgabriel
Member
Posts: 232
Joined: Mon Feb 04, 2008 3:58 am

Post by zaleschiemilgabriel »

Changing textures is done in video memory, too. The graphics card provides functions for the usual drawing primitives like rectangles, ellipses and so on, and it supports blitting one texture onto another, so everything you need is done in video memory, by the GPU. All you need to do is send commands to the GPU. In DirectX 6 you had to assemble the GPU commands yourself and store them in a buffer that was then sent to the GPU. Later DirectX versions added higher-level functions to hide the need to write GPU assembly code.
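As a rough illustration of that command-buffer model (the opcodes and layout below are invented for the example; real hardware defines its own format):

[code]
#include <stdint.h>

enum gpu_op { GPU_FILL_RECT = 1, GPU_BLIT = 2 };

static uint32_t cmdbuf[256];   /* CPU-side staging for GPU commands */
static int      cmdlen;

static void emit(uint32_t word) { cmdbuf[cmdlen++] = word; }

/* Queue "blit texture src onto texture dst at (x, y)"; the GPU does
 * the actual copy entirely in video memory. */
static void gpu_blit(uint32_t dst, uint32_t src, int x, int y)
{
    emit(GPU_BLIT);
    emit(dst);
    emit(src);
    emit(((uint32_t)x << 16) | ((uint32_t)y & 0xffffu));
}

/* gpu_submit() stands in for the ring-buffer/doorbell mechanism that
 * hands the commands to the GPU; that part is hardware-specific. */
extern void gpu_submit(const uint32_t *buf, int words);
[/code]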
Keeping a system-memory buffer for use on other devices (e.g. for printing) might sound like a good idea, but that's just a waste, if you ask me... Instead you could just draw the whole thing again when you need to print. Using video memory only for storing textures is guaranteed to be flawless only on newer graphics boards, though. That's why Compiz works well (and faster) only on newer boards; older ones might not support storing textures in video memory.
bewing
Member
Posts: 1401
Joined: Wed Feb 07, 2007 1:45 pm
Location: Eugene, OR, US

Post by bewing »

jal wrote: The whole update-rectangle direct drawing thing is as legacy as legacy can get, from the days when Win 3.1 had to run in 2MB of main memory with 128KB of non-accelerated vidmem.
Yes, but:
It works.
It was fast enough to be usable, even on those ancient, dog-slow machines.
It is guaranteed compatible with everything.

When DirectX and accelerated video hardware have finally stabilized and been around for a decade, it will be a good time to abandon these legacy fallback methods. But not yet.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Post by Brendan »

Hi,
bewing wrote:
jal wrote: The whole update-rectangle direct drawing thing is as legacy as legacy can get, from the days when Win 3.1 had to run in 2MB of main memory with 128KB of non-accelerated vidmem.
Yes, but:
It works.
It was fast enough to be usable, even on those ancient, dog-slow machines.
It is guaranteed compatible with everything.
What do you think about DOS?

It works.
It was fast enough to be usable, even on those ancient, dog-slow machines.
It is guaranteed compatible with everything.
bewing wrote:When DirectX and accelerated video hardware have finally stabilized and been around for a decade, it will be a good time to abandon these legacy fallback methods. But not yet.
Um, DirectX is a decade old (DirectX was first released in 1995, 13 years ago). OpenGL is older than DirectX (and its predecessor, IRIS GL, is even older). Accelerated video hardware existed in the 1980s - IBM's 8514 was introduced in 1987 (21 years ago).

Of course it takes time to write an OS - by the time my OS is "finished" DirectX and OpenGL will be at least 2 decades old, and hardware acceleration will be at least 3 decades old.

IMHO it's a more fundamental problem - the interface/API between the video driver and other software should be designed to allow all hardware acceleration to be used without the other software needing to know if any hardware acceleration is being used. If applications are manually drawing everything in their bitmap, then most hardware acceleration can't be used, and the interface/API between the video driver and other software has severe design flaws.
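One way to realize that idea (an invented interface, just to show the shape of it): the driver exports a table of drawing entry points, and whether each one is backed by hardware or by CPU loops is the driver's private business.

[code]
#include <stdint.h>

struct video_driver {
    void (*fill_rect)(int x, int y, int w, int h, uint32_t color);
    void (*copy_rect)(int sx, int sy, int dx, int dy, int w, int h);
    void (*draw_text)(int x, int y, const char *s, uint32_t color);
};

/* An accelerated driver points these at functions that program the GPU;
 * a fallback driver points them at CPU loops over the framebuffer.
 * Callers go through the table either way and never know which. */
extern const struct video_driver *active_driver;

static void draw_titlebar(int x, int y, int w)
{
    active_driver->fill_rect(x, y, w, 20, 0x004080FF);
}
[/code]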


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
bewing
Member
Posts: 1401
Joined: Wed Feb 07, 2007 1:45 pm
Location: Eugene, OR, US

Post by bewing »

1. What do you think about DOS?
It works, it's fast, and it's not compatible with any Windows code, or any other GUI code. Major incompatibility problems. I take your point, of course. It is also very limited in "abilities", because it wasn't designed to have any.

But the only point of accelerated graphics hardware is that it is supposed to be significantly faster than manually drawing rectangles. It has no extra abilities. It simply asks you to rewrite the entire frontend and backend of your GDI, with the promise of a notable drop in CPU time used.

2. Um, DirectX is a decade old ...
DirectX 10.1 is not a decade old. As a standard, it is not stable, and that is the second-worst thing that can possibly be said about a standard: a rapidly moving target is almost as bad as no standard at all.

3. Accelerated video hardware existed in the 1980's ...
Without any standard at all, and again, not the tiniest bit stable.


4. The interface/API between the video driver and other software should be designed to allow all hardware acceleration to be used without the other software needing to know if any hardware acceleration is being used.


Yes, I agree completely. That is precisely the point I was attempting to make in my above post. The app should not manually draw in its own bitmap. The drawing functions need to be abstracted one level. The app issues "acceleratable commands" to a GDI function. The GDI function looks to see if the hardware is accelerated or not. If not, it manually draws in the bitmap (Windows 3.1 style with rectangles) -- except for windows that can benefit from being buffered. If the hardware is accelerated, the GDI function passes on the command string to the hardware. Of course, it is much more complicated than that in reality.
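A condensed sketch of that dispatch (the capability flags and helper functions below are made up for the example):

[code]
#include <stdint.h>

#define CAP_FILL_RECT 0x0001

extern uint32_t  hw_caps;                    /* set by the video driver */
extern void      hw_fill_rect(int x, int y, int w, int h, uint32_t color);
extern uint32_t *window_bitmap(int win, int *pitch);

/* The app issues an "acceleratable command"; the GDI decides per
 * capability whether to forward it to hardware or draw it itself. */
void gdi_fill_rect(int win, int x, int y, int w, int h, uint32_t color)
{
    if (hw_caps & CAP_FILL_RECT) {
        hw_fill_rect(x, y, w, h, color);     /* accelerated path */
        return;
    }
    int pitch;                               /* software fallback:     */
    uint32_t *bmp = window_bitmap(win, &pitch); /* draw into the bitmap */
    for (int row = y; row < y + h; row++)
        for (int col = x; col < x + w; col++)
            bmp[row * pitch + col] = color;
}
[/code]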

But we, as OS creators, are writing the GDI, not the user app. So we get to write all that fun rectangle stuff. We cannot just ignore it as legacy BS, like jal is suggesting.
jal
Member
Posts: 1385
Joined: Wed Oct 31, 2007 9:09 am

Post by jal »

bewing wrote:But we, as OS creators, are writing the GDI, not the user app. So we get to write all that fun rectangle stuff. We cannot just ignore it as legacy BS, like jal is suggesting.
It depends on your aim. If you are, say, designing a legacy-free OS, running only on x64 in long mode etc. etc., I don't see the point of supporting anything but accelerated gfx. If you want your OS to run on a 386/486, or on some embedded hardware, supporting the old method makes sense. But most coders here aim for the newest hardware, so in that case I really would advise against implementing the old "update rectangle single buffer" method employed by Win3.


JAL
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Post by Brendan »

Hi,
jal wrote:If you want your OS to run on a 386/486, or on some embedded hardware, supporting the old method makes sense. But most coders here aim for the newest hardware, so in that case I really would advise against implementing the old "update rectangle single buffer" method employed by Win3.
I'm not convinced. The code to draw "stuff" needs to be somewhere - either in every application (where you get "code bloat"), in a shared library used by every application (where you get "dependency hell"), or in the video driver (where you can add hardware acceleration later if you like). Even for old hardware or embedded systems without any hardware acceleration, I can't see any real benefit in putting the code to draw "stuff" outside the video driver.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
bewing
Member
Posts: 1401
Joined: Wed Feb 07, 2007 1:45 pm
Location: Eugene, OR, US

Post by bewing »

I'm not convinced either. Even on the newest of new hardware, you still need a fallback video driver, in case the user's video hardware is not a model that you've written a driver for yet. Or when you are still in the initial stages of OS development, and you haven't written any board-specific video drivers yet. When you design a fallback driver, the whole point is that it will not be as fast as a specially coded accelerated video driver. So, because a legacy-type driver is universal, it makes sense -- especially since it doesn't use up much in the way of system resources (except CPU cycles).
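Continuing the hypothetical video_driver table from the earlier sketch, such a universal fallback only has to assume a linear framebuffer (e.g. one set up via VBE) and fill the table with plain CPU loops - slow, but it runs on anything:

[code]
#include <stdint.h>

struct video_driver {    /* same invented interface as above */
    void (*fill_rect)(int x, int y, int w, int h, uint32_t color);
    void (*copy_rect)(int sx, int sy, int dx, int dy, int w, int h);
};

extern uint32_t *framebuffer;   /* hypothetical linear 32bpp framebuffer */
extern int       fb_pitch;      /* pixels per scanline */

static void sw_fill_rect(int x, int y, int w, int h, uint32_t color)
{
    for (int row = y; row < y + h; row++)
        for (int col = x; col < x + w; col++)
            framebuffer[row * fb_pitch + col] = color;
}

/* Naive copy; a real driver would also handle overlapping regions. */
static void sw_copy_rect(int sx, int sy, int dx, int dy, int w, int h)
{
    for (int row = 0; row < h; row++)
        for (int col = 0; col < w; col++)
            framebuffer[(dy + row) * fb_pitch + (dx + col)] =
                framebuffer[(sy + row) * fb_pitch + (sx + col)];
}

/* Exports the same interface as an accelerated driver, so the rest
 * of the OS can't tell the difference. */
const struct video_driver fallback_driver = {
    .fill_rect = sw_fill_rect,
    .copy_rect = sw_copy_rect,
};
[/code]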
jal
Member
Posts: 1385
Joined: Wed Oct 31, 2007 9:09 am

Post by jal »

There must be some misunderstanding - I never implied one wouldn't need some fallback, non-accelerated driver. What I was denouncing was zaleschiemilgabriel's insistence against using a buffer for each (main) window, instead using the old Windows 3 method of update rectangles and the like. The latter made sense in the old days, when memory was scarce, but on modern machines I see no use for this mechanism. That doesn't mean that the buffer is fully application-controlled, only to be blitted by the driver.


JAL
zaleschiemilgabriel
Member
Posts: 232
Joined: Mon Feb 04, 2008 3:58 am

Post by zaleschiemilgabriel »

Sorry, I only based my answers on the fact that Windows XP seems to be more responsive than Ubuntu and Vista, even with accelerated graphics enabled. And no, I do not have a weak video board.

Cheers,
Gabriel
inx
Member
Posts: 142
Joined: Wed Mar 05, 2008 12:52 am

Post by inx »

Ubuntu is definitely snappier than XP. You're referring to X11, which is slow wherever it's shoved. That's not an Ubuntu/Linux flaw; it exists on *BSD, Linux, OS X running Apple's X11, SCO, IRIX, System V, Cygwin with X, etc.
Crazed123
Member
Posts: 248
Joined: Thu Oct 21, 2004 11:00 pm

Post by Crazed123 »

I find OS X's Quartz to be the most responsive window system I've used.

Is it actually true that compositing window systems basically just render each window to a texture and then composite the textures? Dang.
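In essence, yes: each window renders into its own offscreen buffer (a texture, when the GPU is involved), and the compositor just draws those buffers back-to-front every frame. A CPU-only caricature of the idea, ignoring transparency and reusing the hypothetical vidmem_blit() from the earlier sketch:

[code]
#include <stdint.h>

struct surface {
    int             x, y, w, h;
    const uint32_t *pixels;    /* the window's rendered contents */
};

extern void vidmem_blit(const uint32_t *src, int x, int y, int w, int h);

/* Composite back-to-front; windows[0] is the bottom of the stack. */
static void compose(const struct surface *windows, int count)
{
    for (int i = 0; i < count; i++)
        vidmem_blit(windows[i].pixels, windows[i].x, windows[i].y,
                    windows[i].w, windows[i].h);
}
[/code]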
zaleschiemilgabriel
Member
Posts: 232
Joined: Mon Feb 04, 2008 3:58 am

Post by zaleschiemilgabriel »

Why there must be so many window managers for Linux is beyond me. An operating system, by definition, is something that interfaces with the user through a GUI. You'd think that since most GUIs these days use similar window-based interfaces, there would be a standard for them. The window manager, IMO, is an unneeded concept. An operating system should offer only one type of "window manager", one that is highly extensible and customizable, not multiple parallel, incompatible, or version-based ones. If the model most commonly used is that of a window, a simple square on the screen, why are there so many different managers out there? :?
The problem with all of them is that each offers only a minimal set of customization options to the user; to be fully customizable, you'd have to install all of the existing window managers on top of a kernel and then switch between them. That is the dumbest idea anyone ever had.
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance

Post by Combuster »

Uhm, I run GNOME apps in the KDE window manager, KDE apps in the GNOME WM, and everything in Beryl.

Incompatibility, where? :roll:
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
Brynet-Inc
Member
Posts: 2426
Joined: Tue Oct 17, 2006 9:29 pm
Libera.chat IRC: brynet
Location: Canada

Post by Brynet-Inc »

zaleschiemilgabriel wrote:Why there must be so many window managers for Linux is beyond me.
You mean X. There are many window managers for X... and that's a good thing: some people don't like the design of, say, twm, so they use fluxbox instead.

As for compatibility between them, standards exist. Google EWMH and ICCCM.

It's clear you can't even distinguish between X and Linux; why are you attacking something you don't comprehend? :roll:
Twitter: @canadianbryan. Award by smcerm, I stole it. Original was larger.