Re: Redesigning video system
Posted: Mon Dec 05, 2011 6:44 am
by rdos
Combuster wrote:
rdos wrote:We have graphics accelerators because C is not adequate.
And we program graphics accelerators in... C
Not the ones that are any good. Those are programmed in hardware with silicon, or in assembly.
Re: Redesigning video system
Posted: Mon Dec 05, 2011 6:45 am
by Combuster
Oh my, you really are stuck in the previous millennium.
Seriously, GTFO. Both Quake and Unreal work much better in hardware-accelerated mode, even though the software versions were still written by the world's most epic assembly developers. Do you dare to bet your life in a coding contest against Carmack?
Re: Redesigning video system
Posted: Mon Dec 05, 2011 6:51 am
by rdos
The only problem with the LFB (in combination with optimized assembly code) is that the new generation of graphics cards has a huge penalty for reading from the LFB. Therefore, an LFB solution needs to be combined with buffering in order to avoid reading the LFB. This is easily observable in my guidemo app, which uses random combine codes: it runs extremely slowly on modern graphics cards, while code that doesn't use combine codes runs very well on them. So what I will do in my design is use the buffer a little more intelligently in order to avoid reading the LFB. This will affect older hardware only marginally, but will provide great boosts on modern hardware.
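A minimal sketch of that buffering approach, assuming a 32bpp mode: all read-modify-write (combine) operations touch only a shadow buffer in system RAM, and the LFB is only ever written when a dirty rectangle is flushed. The names, the fixed resolution and the blend formula here are illustrative, not actual RDOS code.
Code:
/* Shadow-buffer sketch: all combine operations read and write system RAM;
 * the mapped LFB only ever receives sequential writes. */
#include <stdint.h>
#include <string.h>

#define WIDTH  1024
#define HEIGHT 768

static uint32_t shadow[WIDTH * HEIGHT];       /* copy of the screen in system RAM */
static volatile uint32_t *lfb;                /* mapped linear framebuffer        */

/* Alpha-blend one pixel into the shadow buffer (reads shadow, never the LFB).
 * Uses the usual >>8 approximation of /255. */
static void blend_pixel(int x, int y, uint32_t src, uint32_t alpha)
{
    uint32_t dst = shadow[y * WIDTH + x];
    uint32_t rb  = ((src & 0x00FF00FF) * alpha +
                    (dst & 0x00FF00FF) * (255 - alpha)) >> 8;
    uint32_t g   = ((src & 0x0000FF00) * alpha +
                    (dst & 0x0000FF00) * (255 - alpha)) >> 8;
    shadow[y * WIDTH + x] = (rb & 0x00FF00FF) | (g & 0x0000FF00);
}

/* Flush a dirty rectangle to the card: write-only access to the LFB. */
static void flush_rect(int x, int y, int w, int h)
{
    for (int row = y; row < y + h; row++)
        memcpy((void *)(lfb + row * WIDTH + x),
               &shadow[row * WIDTH + x], w * sizeof(uint32_t));
}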
Re: Redesigning video system
Posted: Mon Dec 05, 2011 6:59 am
by rdos
Combuster wrote:Oh my, you really are stuck in the previous millennium.
Some truths are valid across millennia.
Combuster wrote:Seriously, GTFO. Both Quake and Unreal work much better in hardware-accelerated mode, even though the software versions were still written by the world's most epic assembly developers. Do you dare to bet your life in a coding contest against Carmack?
I don't program games, and never will. Game performance and general GUI performance are two entirely different things. I optimize for general GUI performance, not game performance.
Re: Redesigning video system
Posted: Mon Dec 05, 2011 7:52 am
by Solar
rdos wrote:Why else do we have graphic accelerators if C was adequate for graphics?
Because we want our CPUs to do stuff other than graphics?
Because some applications (like, games?) required graphics that desktop CPUs could no longer deliver - not even at the hands of the best ASM coders of the time?
Re: Redesigning video system
Posted: Mon Dec 05, 2011 8:00 am
by Brendan
Hi,
rdos wrote:I don't program games, and never will. Game performance and general GUI performance are two entirely different things. I optimize for general GUI performance, not game performance.
For modern GUIs, you need to use hardware acceleration for things like fast drawing, fast blitting, fast alpha blending and transparency, smooth mouse pointers, smooth animations, etc. Then there's a whole pile of extras, like doing MPEG/video decoding in hardware and compositing effects.
Basically, if you've got full support for the video card's capabilities just sitting there, it's easy to use it and have a fast and impressive GUI; if you've only got a simple framebuffer, you're not going to have enough spare CPU time to do much at all, and your GUI will be limited to "not very impressive" and have various performance problems (tearing/shearing, latency, etc.), *especially* at high resolutions (e.g. a set of 4 monitors all at 1920*1200 is going to kill your poor little CPU).
Cheers,
Brendan
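To put rough numbers on that four-monitor example (my arithmetic, not from the post): 4 × 1920 × 1200 is about 9.2 million pixels, roughly 35 MB per frame at 32 bpp, and repainting that 60 times a second comes to on the order of 2 GB/s of framebuffer writes before any read-modify-write or compositing work. That is why a pure software renderer struggles at those resolutions.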
Re: Redesigning video system
Posted: Mon Dec 05, 2011 8:12 am
by rdos
Solar wrote:Because we want our CPUs to do stuff other than graphics?
We have multicore for that.
Solar wrote:Because some applications (like, games?) required graphics that desktop CPUs could no longer deliver - not even at the hands of the best ASM coders of the time?
Yes, but is it pure coincidence that CPUs stopped being able to handle graphics at the same time as C started to prevail in OSes? I don't think so. I think bloated C designs are the major player in inadequate graphics performance, along with bloated GUIs that have no low-level interface. After all, DirectX was probably invented because the bloated GUI interface was no good for game developers on Windows, so M$ had to invent it in order to stay competitive as a gaming platform.
Re: Redesigning video system
Posted: Mon Dec 05, 2011 8:18 am
by rdos
Brendan wrote:Basically, if you've got full support for the video card's capabilities just sitting there, it's easy to use it and have a fast and impressive GUI; if you've only got a simple framebuffer, you're not going to have enough spare CPU time to do much at all, and your GUI will be limited to "not very impressive" and have various performance problems (tearing/shearing, latency, etc.), *especially* at high resolutions (e.g. a set of 4 monitors all at 1920*1200 is going to kill your poor little CPU).
It will at any rate, because there is no common standard for graphics acceleration, and a small OS will never have support for many of these accelerated cards. Therefore, the most sensible thing for a small, embedded OS is to optimize for LFB speed, because it will mostly run with LFB-only support anyway.
Re: Redesigning video system
Posted: Mon Dec 05, 2011 8:38 am
by rdos
berkus wrote:JFYI, Core Graphics almost entirely runs C code for graphics accelerators, on the graphics accelerator. This code is highly optimized for GUI work - e.g. compositing, memory management for the UI components (having the entire web page in video memory is much faster than trying to re-render it on each scroll update) and many other things that graphics accelerators do better than general purpose CPU whilst having much faster access to the memory they use.
That is rather irrelevant unless you work with Windows or Linux. Until graphics acceleration is as standardized as the LFB and VBE, there is no reason for a one-man OS project to bother with it. There are other, far more important, device drivers to write before hardware-accelerated video drivers, which are practically per-video-card. As long as graphics performance is adequate for the applications I write, I will not write hardware-accelerated device drivers.
Besides, a new series of AMD CPUs seems to have graphics integrated into the processor. That seems to be a promising approach, both for fast LFB access and for graphics acceleration.
Re: Redesigning video system
Posted: Mon Dec 05, 2011 8:43 am
by Solar
rdos wrote:Yes, but is it pure coincidence that CPUs stopped being able to handle graphics at the same time as C started to prevail in OSes?
LOL...
I don't know where you've been in the last three decades (which is about the time span that I can claim to have experienced in front of computers), or what you are smoking to come to your funny conclusions, but computers were never capable of "handling graphics" in the way human minds could picture.
I was dreaming up 3D real-time adventures while playing The Bard's Tale on my C64. I was dreaming of real windows while using my MagicFormel extension plug-in. I am sure Carmack could imagine more than the pixelfest that was Doom, and it sure wasn't due to lack of ASM skills that the textures weren't more detailed or the levels more complex. I am sure the two Steves could picture a better GUI than the Apple II brought, but it couldn't be done at the time. And don't you dare to claim that Woz didn't know his ASM...
rdos wrote:I don't think so. I think bloated C designs are the major player in inadequate graphics performance, along with bloated GUIs that have no low-level interface.
Goes to prove that even you don't know everything.
Re: Redesigning video system
Posted: Mon Dec 05, 2011 8:46 am
by Rusky
Recent developments in Mesa include a software renderer called llvmpipe. It generates CPU-specific, multi-processor code that takes advantage of whatever SIMD instructions are available. Performance is sometimes competitive with (though never equal to) low- to mid-range graphics cards, but it takes up most of the processing time on all cores.
Can a hand-coded assembly software renderer take 0% CPU time while producing a seamless GUI, and have enough leftover power even on something like Intel graphics to run fancy Compiz plugins like wobbly windows and spinning desktop cubes?
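For illustration, here is the kind of SIMD inner loop that llvmpipe-style code generation emits automatically, and that a hand-coded software renderer would otherwise have to write by hand: an SSE2 sketch that fills a 32bpp span four pixels per store. The function name and the scalar tail handling are made up for the example.
Code:
/* Fill a 32bpp span four pixels per store with SSE2. */
#include <emmintrin.h>
#include <stdint.h>

static void fill_span_sse2(uint32_t *dst, uint32_t color, int count)
{
    __m128i c = _mm_set1_epi32((int)color);   /* replicate the colour 4 times */

    while (count >= 4) {
        _mm_storeu_si128((__m128i *)dst, c);  /* one 128-bit store = 4 pixels */
        dst   += 4;
        count -= 4;
    }
    while (count-- > 0)                       /* scalar tail for the remainder */
        *dst++ = color;
}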
Re: Redesigning video system
Posted: Mon Dec 05, 2011 8:48 am
by rdos
berkus wrote:
rdos wrote:That is rather irrelevant unless you work with Windows or Linux.
I'm sorry, your ignorance is far beyond offensive. Or you're a very successful troll.
Never knew that Apple's Core Graphics was even partly implemented on Windows or Linux.
End of conversation from my side.
OK, add Apple's OS as well then. Any OS that is supported by a large company can of course afford to write multiple device drivers for video acceleration. Some, like Microsoft, can even count on many companies writing these drivers themselves, while not providing the relevant documentation to others in order to lock their chips to particular OSes.
Nothing you say above invalidates the statement that one-man projects should optimize for LFB, unless you aim to provide support for only a single accelerated video card.
Re: Redesigning video system
Posted: Mon Dec 05, 2011 8:55 am
by rdos
Rusky wrote:Recent developments in Mesa include a software renderer called llvmpipe. It generates CPU-specific, multi-processor code that takes advantage of whatever SIMD instructions are available. Performance is sometimes competitive (never equal) with low to middle-end graphics cards, but it takes up most of the processing time on all cores.
It is not necessary to use floating point in a GUI. Integers are quite adequate, unless you are writing rendering software.
Rusky wrote:Can a hand-coded assembly software renderer take 0% CPU time while producing a seamless GUI, and have enough leftover power even on something like Intel graphics to run fancy Compiz plugins like wobbly windows and spinning desktop cubes?
I have a multithreaded planet-motion demo that runs pretty well on top of an image desktop on a 500 MHz AMD Geode with LFB-only support. It can handle 25-30 planets in the animation on that hardware.
Additionally, we have JPEG/PNG-based animations in our terminal that are 300 x 320 pixels. These run just fine with the LFB on a 500 MHz AMD Geode. We do not do the JPEG/PNG decoding in real time; instead, a loader thread does the decoding, and the animation is then carried out with a blit operation. These animations do not consume all the CPU time, typically only a small fraction.
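A rough sketch of that decode-then-blit split, using POSIX threads purely for illustration (the real RDOS threading API is different): decode_frame(), blit() and sleep_ms() are hypothetical stand-ins for the actual decoder, the driver's blit operation and a delay call, and the flag-based hand-off is deliberately simplistic.
Code:
/* Loader thread decodes every frame once; the animation loop only blits. */
#include <pthread.h>
#include <stdint.h>

#define FRAMES  16
#define FRAME_W 300
#define FRAME_H 320

static uint32_t frames[FRAMES][FRAME_W * FRAME_H];  /* pre-decoded pixel data */
static volatile int frames_ready;                   /* crude progress flag    */

/* Hypothetical stand-ins for the real decoder, blit and delay primitives. */
extern void decode_frame(int index, uint32_t *dst);
extern void blit(const uint32_t *src, int x, int y, int w, int h);
extern void sleep_ms(int ms);

static void *loader_thread(void *arg)
{
    (void)arg;
    for (int i = 0; i < FRAMES; i++) {
        decode_frame(i, frames[i]);     /* expensive JPEG/PNG work happens here */
        frames_ready = i + 1;           /* publish progress to the animator     */
    }
    return NULL;
}

void run_animation(int x, int y)
{
    pthread_t tid;
    pthread_create(&tid, NULL, loader_thread, NULL);

    for (int i = 0; ; i = (i + 1) % FRAMES) {
        while (frames_ready <= i)       /* frame not decoded yet, wait a bit */
            sleep_ms(1);
        blit(frames[i], x, y, FRAME_W, FRAME_H);
        sleep_ms(40);                   /* ~25 frames per second */
    }
}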
Re: Redesigning video system
Posted: Mon Dec 05, 2011 11:07 am
by Rusky
rdos wrote:It is not necessary to use floating point in a GUI. Integers are quite adequate, unless you are writing rendering software.
Where did floating-point come from?
Re: Redesigning video system
Posted: Mon Dec 05, 2011 12:02 pm
by rdos
berkus wrote:The only little difference being that Compiz doesn't do raster-only graphics.... but I guess that's beyond your imagination.
Of course I do non-raster graphics.