Redesigning video system

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
rdos
Member
Posts: 3276
Joined: Wed Oct 01, 2008 1:55 pm

Re: Redesigning video system

Post by rdos »

Combuster wrote:
rdos wrote:We have graphics accelerators because C is not adequate.
And we program graphics accelerators in... C :mrgreen:
Not the ones that are any good. Those are programmed in hardware with silicon, or in assembly.
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance
Contact:

Re: Redesigning video system

Post by Combuster »

Oh my, you really are stuck in the previous millennium.

Seriously, GTFO. Both Quake and Unreal work much better in hardware-accelerated mode, even though the software versions were written by the world's most epic assembly developers. Do you dare to bet your life in a coding contest against Carmack?
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
rdos
Member
Posts: 3276
Joined: Wed Oct 01, 2008 1:55 pm

Re: Redesigning video system

Post by rdos »

The only problem with the LFB (in combination with optimized assembly code) is that the newer generations of graphics cards impose a huge penalty for reading from the LFB. An LFB solution therefore needs to be combined with buffering in order to avoid LFB reads. This is easily observable in my guidemo app, which uses random combine codes: it runs extremely slowly on modern graphics cards, while code that doesn't use combine codes runs very well on them. So what I will do in my design is use the buffer a little more intelligently in order to avoid reading the LFB. This will affect older hardware only marginally, but will provide great boosts on modern hardware.
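The buffering described above amounts to keeping a shadow copy of the framebuffer in system RAM: every read-modify-write combine operation reads the shadow, and the LFB itself is only ever written. A minimal C sketch of the idea (the types and names here are illustrative, not the actual RDOS code):

```c
#include <stdint.h>
#include <string.h>

/* Shadow-buffer scheme: read-modify-write happens in system RAM,
 * and the (slow-to-read) LFB is treated as write-only. */
typedef struct {
    uint32_t *lfb;     /* mapped linear framebuffer: write-only in practice */
    uint32_t *shadow;  /* system-RAM copy: cheap to read */
    int       width, height;
} Surface;

/* A combine operation (here: approximate 50/50 blend) reads the shadow,
 * never the LFB. */
static void blend_pixel(Surface *s, int x, int y, uint32_t src)
{
    uint32_t dst   = s->shadow[y * s->width + x];   /* cheap RAM read */
    uint32_t mixed = ((dst >> 1) & 0x7F7F7F7F) +
                     ((src >> 1) & 0x7F7F7F7F);     /* per-channel average */
    s->shadow[y * s->width + x] = mixed;
}

/* Flush a dirty rectangle to the LFB with pure writes. */
static void flush_rect(Surface *s, int x, int y, int w, int h)
{
    for (int row = 0; row < h; row++)
        memcpy(&s->lfb[(y + row) * s->width + x],
               &s->shadow[(y + row) * s->width + x],
               (size_t)w * sizeof(uint32_t));
}
```

The point of the split is that the expensive operations (anything that must read the destination) only ever touch normal RAM; the write-combined LFB sees nothing but sequential stores.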
rdos
Member
Posts: 3276
Joined: Wed Oct 01, 2008 1:55 pm

Re: Redesigning video system

Post by rdos »

Combuster wrote:Oh my, you really are stuck in the previous millennium.
Some truths are valid across millennia. :lol:
Combuster wrote:Seriously, GTFO. Both Quake and Unreal work much better in hardware-accelerated mode, even though the software versions were written by the world's most epic assembly developers. Do you dare to bet your life in a coding contest against Carmack?
I don't program games, and never will. Game performance and general GUI performance are two entirely different things. I optimize for general GUI performance, not game performance.
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany
Contact:

Re: Redesigning video system

Post by Solar »

rdos wrote:Why else do we have graphic accelerators if C was adequate for graphics?
Because we want our CPUs to do stuff other than graphics?

Because some applications (like, games?) required graphics that desktop CPUs could no longer deliver - not even at the hands of the best ASM coders of the time?
Every good solution is obvious once you've found it.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Redesigning video system

Post by Brendan »

Hi,
rdos wrote:I don't program games, and never will. Game performance and general GUI performance are two entirely different things. I optimize for general GUI performance, not game performance.
For modern GUIs, you need to use hardware acceleration for things like fast drawing, fast blitting, fast alpha blending and transparency, smooth mouse pointers, smooth animations, etc. Then there's a whole pile of "extra", like doing MPEG/video decoding in hardware and compositing effects.

Basically, if you've got full support for the video card's capabilities just sitting there it's easy to use it and have a fast and impressive GUI; and if you've only got a simple framebuffer you're not going to have enough spare CPU time to do much at all, and your GUI will be limited to "not very impressive" and have various performance problems (tearing/shearing, latency, etc) *especially* at high resolutions (e.g. a set of 4 monitors all at 1920*1200 is going to kill your poor little CPU).
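A quick back-of-the-envelope calculation makes the four-monitor point concrete: at 32 bpp and 60 Hz, merely copying every pixel once per frame already costs gigabytes per second of memory traffic, before a single pixel has actually been computed. A small sketch (just the arithmetic, no real driver code):

```c
#include <stdint.h>

/* Bandwidth estimate for the "4 monitors at 1920x1200" example:
 * pure pixel pushing at 32 bpp, no overdraw, no compositing. */
static uint64_t frame_bytes(uint64_t monitors, uint64_t w, uint64_t h,
                            uint64_t bytes_per_pixel)
{
    return monitors * w * h * bytes_per_pixel;
}

static uint64_t sustained_bytes_per_sec(uint64_t per_frame, uint64_t hz)
{
    return per_frame * hz;
}

/* 4 x 1920 x 1200 x 4 B = 36,864,000 bytes (~36.9 MB) per frame;
 * at 60 Hz that is ~2.2 GB/s of sustained memcpy-style traffic. */
```

That figure is for a single full repaint per frame; any blending, overdraw, or effects multiplies it, which is why a pure software path struggles at these resolutions.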


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
rdos
Member
Posts: 3276
Joined: Wed Oct 01, 2008 1:55 pm

Re: Redesigning video system

Post by rdos »

Solar wrote:Because we want our CPUs to do stuff other than graphics?
We have multicore for that. :mrgreen:
Solar wrote:Because some applications (like, games?) required graphics that desktop CPUs could no longer deliver - not even at the hands of the best ASM coders of the time?
Yes, but is it pure coincidence that CPUs stopped being able to handle graphics at the same time as C started to prevail in OSes? I don't think so. I think bloated C designs are the major culprit in inadequate graphics performance, along with bloated GUIs that have no low-level interface. After all, DirectX was probably invented because the bloated GUI interface was no good for game developers on Windows, so M$ had to invent it in order to stay competitive as a gaming platform.
rdos
Member
Posts: 3276
Joined: Wed Oct 01, 2008 1:55 pm

Re: Redesigning video system

Post by rdos »

Brendan wrote:Basically, if you've got full support for the video card's capabilities just sitting there it's easy to use it and have a fast and impressive GUI; and if you've only got a simple framebuffer you're not going to have enough spare CPU time to do much at all, and your GUI will be limited to "not very impressive" and have various performance problems (tearing/shearing, latency, etc) *especially* at high resolutions (e.g. a set of 4 monitors all at 1920*1200 is going to kill your poor little CPU).
It will at any rate, because there is no common standard for graphics acceleration, and a small OS will never support more than a handful of these accelerated cards. Therefore, the most sensible thing for a small, embedded OS is to optimize for LFB speed, because it will mostly run with LFB-only support anyway.
rdos
Member
Posts: 3276
Joined: Wed Oct 01, 2008 1:55 pm

Re: Redesigning video system

Post by rdos »

berkus wrote:JFYI, Core Graphics almost entirely runs C code for graphics accelerators, on the graphics accelerator. This code is highly optimized for GUI work - e.g. compositing, memory management for the UI components (having the entire web page in video memory is much faster than trying to re-render it on each scroll update) and many other things that graphics accelerators do better than general purpose CPU whilst having much faster access to the memory they use.
That is rather irrelevant unless you work with Windows or Linux. Until graphics acceleration is as standardized as the LFB and VBE, there is no reason for a one-man OS project to bother with it. There are other, far more important device drivers to write before video hardware accelerators, which practically require one driver per video card. As long as graphics performance is adequate for the applications I write, I will not write hardware-accelerated device drivers.

Besides, a new series of AMD CPUs seems to have graphics integrated into the processor. That looks like a promising approach, both for fast LFB access and for graphics acceleration.
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany
Contact:

Re: Redesigning video system

Post by Solar »

rdos wrote:Yes, but is it pure coincidence that CPUs stopped being able to handle graphics at the same time as C started to prevail in OSes?
LOL...

I don't know where you've been in the last three decades (which is about the time span that I can claim to have experienced in front of computers), or what you are smoking to come to your funny conclusions, but computers were never capable of "handling graphics" in the way human minds could picture.

I was dreaming up 3D real-time adventures while playing The Bard's Tale on my C64. I was dreaming of real windows while using my MagicFormel extension plug-in.

I am sure Carmack could imagine more than the pixelfest that was Doom, and it sure wasn't for lack of ASM skills that the textures weren't more detailed or the levels more complex. I am sure the two Steves could picture a better GUI than the Apple II offered, but it couldn't be done at the time. And don't you dare claim that Woz didn't know his ASM...
rdos wrote:I don't think so. I think bloated C designs are the major culprit in inadequate graphics performance, along with bloated GUIs that have no low-level interface.
Goes to prove that even you don't know everything.
Last edited by Solar on Mon Dec 05, 2011 8:47 am, edited 1 time in total.
Every good solution is obvious once you've found it.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Redesigning video system

Post by Rusky »

Recent developments in Mesa include a software renderer called llvmpipe. It generates CPU-specific, multi-processor code that takes advantage of whatever SIMD instructions are available. Performance is sometimes competitive with (though never equal to) low-end to mid-range graphics cards, but it takes up most of the processing time on all cores.

Can a hand-coded assembly software renderer take 0% CPU time while producing a seamless GUI, and have enough leftover power even on something like Intel graphics to run fancy Compiz plugins like wobbly windows and spinning desktop cubes?
rdos
Member
Posts: 3276
Joined: Wed Oct 01, 2008 1:55 pm

Re: Redesigning video system

Post by rdos »

berkus wrote:
rdos wrote:That is rather irrelevant unless you work with Windows or Linux.
I'm sorry, your ignorance is far beyond offensive. Or you're a very successful troll.

Never knew that Apple's Core Graphics was even partly implemented on Windows or Linux.

End of conversation from my side.
OK, then add Apple's OS as well. Any OS backed by a large company can of course afford to write multiple device drivers for video acceleration. Some, like Microsoft, can even count on many companies writing these drivers themselves, while withholding the relevant documentation from others in order to lock their chips to particular OSes.

Nothing you say above invalidates the statement that one-man projects should optimize for the LFB, unless you aim to provide support for only a single accelerated video card.
rdos
Member
Posts: 3276
Joined: Wed Oct 01, 2008 1:55 pm

Re: Redesigning video system

Post by rdos »

Rusky wrote:Recent developments in Mesa include a software renderer called llvmpipe. It generates CPU-specific, multi-processor code that takes advantage of whatever SIMD instructions are available. Performance is sometimes competitive with (though never equal to) low-end to mid-range graphics cards, but it takes up most of the processing time on all cores.
It is not necessary to use floating point in a GUI. Integers are quite adequate, unless you are writing rendering software.
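The integer-only claim is the usual fixed-point argument: an 8-bit alpha blend, for instance, needs nothing but integer multiplies, adds, and shifts. A minimal C sketch (illustrative, not RDOS code), using the classic shift-based approximation of division by 255:

```c
#include <stdint.h>

/* Integer-only alpha blend of one 8-bit channel:
 * result = (src*alpha + dst*(255-alpha)) / 255, with the division by 255
 * replaced by the well-known approximation (x + 1 + (x >> 8)) >> 8,
 * which is exact enough for 16-bit intermediate values in a GUI. */
static uint8_t blend_channel(uint8_t src, uint8_t dst, uint8_t alpha)
{
    uint32_t x = (uint32_t)src * alpha + (uint32_t)dst * (255u - alpha);
    return (uint8_t)((x + 1 + (x >> 8)) >> 8);   /* ~x/255 without a divide */
}
```

Applied to each of the four channels of a 32-bpp pixel, this is the entire inner loop of a software compositor, with no floating point anywhere.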
Rusky wrote:Can a hand-coded assembly software renderer take 0% CPU time while producing a seamless GUI, and have enough leftover power even on something like Intel graphics to run fancy Compiz plugins like wobbly windows and spinning desktop cubes?
I have a multithreaded planet-motion demo that runs pretty well on top of an image/desktop on a 500 MHz AMD Geode with LFB-only support. It can handle 25-30 planets in the animation on that hardware. :mrgreen:

Additionally, we have JPEG/PNG-based animations in our terminal that are 300 x 320 pixels. These run just fine with the LFB on a 500 MHz AMD Geode. We do not do the JPEG/PNG decoding in real time; instead, a loader thread does the decoding, and the animation is then carried out with a blit operation. These animations do not consume all the CPU time, typically only a small fraction of it.
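The loader-thread scheme described here is a standard producer/consumer split: one thread decodes frames into a ring of buffers ahead of time, and the UI thread only waits for a ready frame and blits it. A condensed POSIX-threads sketch, with the decode and blit steps stubbed out (all names are illustrative; RDOS itself would use its own threading primitives):

```c
#include <pthread.h>
#include <stdint.h>

#define N_FRAMES     8
#define FRAME_PIXELS (300 * 320)   /* frame size from the post above */

/* Ring of pre-decoded frames: the loader thread fills it, the UI thread
 * only blits from it, so no decoding ever happens on the UI path. */
static uint32_t frames[N_FRAMES][FRAME_PIXELS];
static int head, tail;             /* frames produced / consumed */
static pthread_mutex_t lock   = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  filled = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  space  = PTHREAD_COND_INITIALIZER;

static void decode_frame(uint32_t *dst) { (void)dst; /* JPEG/PNG decode stub */ }
static void blit(const uint32_t *src)   { (void)src; /* write-only LFB blit stub */ }

static void *loader_thread(void *arg)   /* producer: decodes ahead of time */
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head - tail == N_FRAMES)         /* ring full: wait for blits */
            pthread_cond_wait(&space, &lock);
        pthread_mutex_unlock(&lock);

        decode_frame(frames[head % N_FRAMES]);  /* slow work, off the UI thread */

        pthread_mutex_lock(&lock);
        head++;
        pthread_cond_signal(&filled);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void animate_once(void)          /* consumer: runs on the UI thread */
{
    pthread_mutex_lock(&lock);
    while (tail == head)                        /* wait for a decoded frame */
        pthread_cond_wait(&filled, &lock);
    pthread_mutex_unlock(&lock);

    blit(frames[tail % N_FRAMES]);              /* cheap: pure LFB writes */

    pthread_mutex_lock(&lock);
    tail++;                                     /* slot is free again */
    pthread_cond_signal(&space);
    pthread_mutex_unlock(&lock);
}
```

The UI thread's cost per frame is just one blit, which is why this kind of animation can stay cheap even on a 500 MHz machine with an LFB-only driver.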
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Redesigning video system

Post by Rusky »

rdos wrote:It is not necessary to use floating point in a GUI. Integers are quite adequate, unless you are writing rendering software.
Where did floating-point come from?
rdos
Member
Posts: 3276
Joined: Wed Oct 01, 2008 1:55 pm

Re: Redesigning video system

Post by rdos »

berkus wrote:The only little difference being that Compiz doesn't do raster-only graphics.... but I guess that's beyond your imagination.
Of course I do non-raster graphics. #-o