rdos wrote: We have graphics accelerators because C is not adequate.
Combuster wrote: And we program graphics accelerators in... C
Not the ones that are any good. Those are programmed in hardware with silicon, or in assembly.
Re: Redesigning video system
Oh my, you really are stuck in the previous millennium.
Seriously, GTFO. Both Quake and Unreal work much better in hardware-accelerated mode, even though the software versions were still written by the world's most epic assembly developers. Do you dare to bet your life in a coding contest against Carmack?
Re: Redesigning video system
The only problem with the LFB (in combination with optimized assembly code) is that the newer generations of graphics cards have a huge penalty for reading the LFB. Therefore, an LFB solution needs to be combined with buffering in order to avoid reading the LFB. This is easily observable in my guidemo app, which uses random combine-codes and runs extremely slowly on modern graphics cards, while code that doesn't use combine-codes runs very well on them. So what I will do in my design is use the buffer a little more intelligently in order to avoid reading the LFB. This will only affect older hardware marginally, but will provide great boosts on modern hardware.
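A minimal sketch of that kind of shadow-buffer scheme, assuming a 32-bpp mode; the names and layout are illustrative, not RDOS code. All reads and read-modify-write operations (such as blending) go to a system-memory copy, and only finished pixels are written sequentially to the LFB:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative globals - a real driver fills these in at mode set. */
static uint32_t *lfb;     /* mapped linear framebuffer (write-mostly) */
static uint32_t *shadow;  /* system-memory copy of the screen         */
static int       pitch;   /* pixels per scanline                      */

/* Alpha-blend a pixel into the shadow buffer: the read hits system RAM,
   never the LFB, so the slow LFB read path is avoided entirely. */
static void blend_pixel(int x, int y, uint32_t src, uint8_t alpha)
{
    uint32_t dst = shadow[y * pitch + x];            /* cheap read */
    uint32_t r = (((src >> 16) & 0xFF) * alpha + ((dst >> 16) & 0xFF) * (255 - alpha)) / 255;
    uint32_t g = (((src >>  8) & 0xFF) * alpha + ((dst >>  8) & 0xFF) * (255 - alpha)) / 255;
    uint32_t b = (( src        & 0xFF) * alpha + ( dst        & 0xFF) * (255 - alpha)) / 255;
    shadow[y * pitch + x] = (r << 16) | (g << 8) | b;
}

/* Push a finished rectangle to the screen with write-only, sequential
   accesses, which modern cards still handle well. */
static void flush_rect(int x, int y, int w, int h)
{
    for (int row = 0; row < h; row++)
        memcpy(&lfb[(y + row) * pitch + x],
               &shadow[(y + row) * pitch + x],
               (size_t)w * sizeof(uint32_t));
}
```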
Re: Redesigning video system
Combuster wrote: Oh my, you really are stuck in the previous millennium.
Some truths are valid across millennia.
Combuster wrote: Seriously, GTFO. Both Quake and Unreal work much better in hardware-accelerated mode, even though the software versions were still written by the world's most epic assembly developers. Do you dare to bet your life in a coding contest against Carmack?
I don't program games, and never will. Game performance and general GUI performance are two entirely different things. I optimize for general GUI performance, not game performance.
Re: Redesigning video system
rdos wrote: Why else do we have graphics accelerators if C was adequate for graphics?
Because we want our CPUs to do stuff other than graphics?
Because some applications (like, games?) required graphics that desktop CPUs could no longer deliver - not even at the hands of the best ASM coders of the time?
Every good solution is obvious once you've found it.
Re: Redesigning video system
Hi,
rdos wrote: I don't program games, and never will. Game performance and general GUI performance are two entirely different things. I optimize for general GUI performance, not game performance.
For modern GUIs, you need to use hardware acceleration for things like fast drawing, fast blitting, fast alpha blending and transparency, smooth mouse pointers, smooth animations, etc. Then there's a whole pile of "extras", like doing MPEG/video decoding in hardware and compositing effects.
Basically, if you've got full support for the video card's capabilities just sitting there it's easy to use it and have a fast and impressive GUI; and if you've only got a simple framebuffer you're not going to have enough spare CPU time to do much at all, and your GUI will be limited to "not very impressive" and have various performance problems (tearing/shearing, latency, etc) *especially* at high resolutions (e.g. a set of 4 monitors all at 1920*1200 is going to kill your poor little CPU).
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
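To put the four-monitor example above in numbers, here is a small back-of-the-envelope calculation; 32 bpp and a 60 Hz update rate are assumptions, and it counts only the writes for full-screen updates, before any reads, blending or composition:

```c
#include <stdio.h>

int main(void)
{
    /* Assumed figures: four 1920x1200 monitors, 32 bpp, 60 Hz. */
    const long long monitors = 4, width = 1920, height = 1200;
    const long long bytes_per_pixel = 4, refresh_hz = 60;

    long long frame_bytes = monitors * width * height * bytes_per_pixel;  /* ~35 MiB  */
    long long per_second  = frame_bytes * refresh_hz;                     /* ~2 GiB/s */

    printf("One full update: %lld bytes (~%.1f MiB)\n",
           frame_bytes, frame_bytes / (1024.0 * 1024.0));
    printf("At %lld Hz: ~%.2f GiB/s of framebuffer writes alone\n",
           refresh_hz, per_second / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```

That is bandwidth a software-only path has to find on top of whatever the applications themselves need.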
Re: Redesigning video system
Solar wrote: Because we want our CPUs to do stuff other than graphics?
We have multicore for that.
Solar wrote: Because some applications (like, games?) required graphics that desktop CPUs could no longer deliver - not even at the hands of the best ASM coders of the time?
Yes, but is it pure coincidence that CPUs stopped being able to handle graphics at the same time as C started to prevail in OSes? I don't think so. I think bloated C designs are the major culprit in inadequate graphics performance, along with bloated GUIs that have no low-level interface. After all, DirectX was probably invented because the bloated GUI interface was no good for game developers on Windows, so M$ had to invent it in order to stay competitive as a gaming platform.
Re: Redesigning video system
Brendan wrote: Basically, if you've got full support for the video card's capabilities just sitting there it's easy to use it and have a fast and impressive GUI; and if you've only got a simple framebuffer you're not going to have enough spare CPU time to do much at all, and your GUI will be limited to "not very impressive" and have various performance problems (tearing/shearing, latency, etc) *especially* at high resolutions (e.g. a set of 4 monitors all at 1920*1200 is going to kill your poor little CPU).
It will at any rate, because there is no common standard for graphics acceleration, and a small OS will never have support for many of these accelerated cards. Therefore, the most sensible thing for a small, embedded OS is to optimize for LFB speed, because it will mostly run with LFB-only support anyway.
Re: Redesigning video system
berkus wrote: JFYI, Core Graphics almost entirely runs C code for graphics accelerators, on the graphics accelerator. This code is highly optimized for GUI work - e.g. compositing, memory management for the UI components (having the entire web page in video memory is much faster than trying to re-render it on each scroll update) and many other things that graphics accelerators do better than a general-purpose CPU while having much faster access to the memory they use.
That is rather irrelevant unless you work with Windows or Linux. Until graphics acceleration is as standardized as the LFB and VBE, there is no reason for a one-man OS project to bother with it. There are other, far more important device drivers to write before video accelerator drivers, which are needed almost one per video card. As long as graphics performance is adequate for the applications I write, I will not write hardware-accelerated device drivers.
Besides, a new series of AMD CPUs seems to have graphics integrated into the processor. That seems to be a promising approach, both for a fast LFB and for graphics acceleration.
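For reference, the LFB route really is that standardized: the sketch below shows the relevant part of the VBE 2.0 mode-information block (field offsets follow the VBE specification), with the calling sequence noted in the comment; it assumes the OS can issue real-mode int 10h somehow, e.g. from a V86 task or before entering protected mode.

```c
#include <stdint.h>

/* Subset of the VBE 2.0 ModeInfoBlock (offsets per the VBE specification). */
struct vbe_mode_info {
    uint16_t mode_attributes;    /* 0x00: bit 7 set = LFB available   */
    uint8_t  reserved0[14];
    uint16_t bytes_per_scanline; /* 0x10                              */
    uint16_t x_resolution;       /* 0x12                              */
    uint16_t y_resolution;       /* 0x14                              */
    uint8_t  reserved1[3];
    uint8_t  bits_per_pixel;     /* 0x19                              */
    uint8_t  reserved2[14];
    uint32_t phys_base_ptr;      /* 0x28: physical address of the LFB */
    uint8_t  reserved3[212];
} __attribute__((packed));

/* Calling sequence (int 10h, issued from real or V86 mode):
 *   AX = 0x4F01, CX = mode, ES:DI -> struct vbe_mode_info   ; get mode info
 *   AX = 0x4F02, BX = mode | 0x4000                         ; set mode, use LFB
 * Afterwards phys_base_ptr is mapped into the address space and used
 * directly as the linear framebuffer.
 */
```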
Re: Redesigning video system
rdos wrote: Yes, but is it pure coincidence that CPUs stopped being able to handle graphics at the same time as C started to prevail in OSes?
LOL...
I don't know where you've been in the last three decades (which is about the time span that I can claim to have experienced in front of computers), or what you are smoking to come to your funny conclusions, but computers were never capable of "handling graphics" in the way human minds could picture.
I was dreaming up 3D real-time adventures while playing The Bard's Tale on my C64. I was dreaming of real windows while using my MagicFormel extension plug-in.
I am sure Carmack could imagine more than the pixelfest that was Doom, and it sure wasn't due to a lack of ASM skills that the textures weren't more detailed or the levels more complex. I am sure the two Steves could picture a better GUI than the Apple II brought, but it couldn't be done at the time. And don't you dare claim that Woz didn't know his ASM...
rdos wrote: I don't think so. I think bloated C designs are the major culprit in inadequate graphics performance, along with bloated GUIs that have no low-level interface.
Goes to prove that even you don't know everything.
Every good solution is obvious once you've found it.
Re: Redesigning video system
Recent developments in Mesa include a software renderer called llvmpipe. It generates CPU-specific, multi-processor code that takes advantage of whatever SIMD instructions are available. Performance is sometimes competitive (never equal) with low to middle-end graphics cards, but it takes up most of the processing time on all cores.
Can a hand-coded assembly software renderer take 0% CPU time while producing a seamless GUI, and have enough leftover power even on something like Intel graphics to run fancy Compiz plugins like wobbly windows and spinning desktop cubes?
Re: Redesigning video system
rdos wrote: That is rather irrelevant unless you work with Windows or Linux.
berkus wrote: I'm sorry, your ignorance is far beyond offensive. Or you're a very successful troll. Never knew that Apple's Core Graphics was even partly implemented on Windows or Linux. End of conversation from my side.
OK, add Apple's OS as well then. Any OS that is backed by a large company can of course afford to write multiple device drivers for video acceleration. Some, like Microsoft, can even count on many companies writing these drivers themselves, while not providing the relevant documentation to others in order to lock their chips to particular OSes.
Nothing you say above makes the statement that one-man projects should optimize for the LFB invalid. Unless you aim to provide support only for a single accelerated video card.
Re: Redesigning video system
Rusky wrote: Recent developments in Mesa include a software renderer called llvmpipe. It generates CPU-specific, multi-processor code that takes advantage of whatever SIMD instructions are available. Performance is sometimes competitive (never equal) with low- to middle-end graphics cards, but it takes up most of the processing time on all cores.
It is not necessary to use floating-point in a GUI. Integers are quite adequate, unless you do rendering software.
Rusky wrote: Can a hand-coded assembly software renderer take 0% CPU time while producing a seamless GUI, and have enough leftover power even on something like Intel graphics to run fancy Compiz plugins like wobbly windows and spinning desktop cubes?
I have a multithreaded planet-motion demo that runs pretty well on top of an image/desktop on a 500 MHz AMD Geode with LFB-only support. It can handle 25-30 planets in the animation on that hardware.
Additionally, we have JPEG/PNG-based animations in our terminal that are 300 x 320 pixels. These run just fine with the LFB on a 500 MHz AMD Geode. We do not do the JPEG/PNG decoding in real time; rather, a loader thread does the decoding, and the animation is then carried out with a blit operation. These animations do not consume all the CPU time, typically only a small fraction.
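A minimal sketch of the split described above, with POSIX threads standing in for whatever threading API the OS provides; decode_next_frame and blit are stubs for the real decoder and blitter, and the 16-frame cache is an assumption:

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

#define FRAME_COUNT 16

/* One pre-decoded 300x320 frame, ready to be blitted as-is. */
struct frame {
    uint32_t pixels[320][300];
    bool     ready;
};

static struct frame    frames[FRAME_COUNT];
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Stubs standing in for the real JPEG/PNG decoder and the blitter. */
static void decode_next_frame(struct frame *f) { (void)f; /* decode into f->pixels */ }
static void blit(const uint32_t *src, int w, int h) { (void)src; (void)w; (void)h; }

/* Loader thread: all the expensive decoding happens here, ahead of time. */
static void *loader_thread(void *arg)
{
    (void)arg;
    for (int i = 0; i < FRAME_COUNT; i++) {
        decode_next_frame(&frames[i]);
        pthread_mutex_lock(&lock);
        frames[i].ready = true;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Animation tick: only a blit of an already-decoded frame, so the
   per-frame CPU cost stays a small fraction of the total. */
static void animation_tick(int i)
{
    struct frame *f = &frames[i % FRAME_COUNT];
    pthread_mutex_lock(&lock);
    bool ready = f->ready;
    pthread_mutex_unlock(&lock);
    if (ready)
        blit(&f->pixels[0][0], 300, 320);
}
```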
Re: Redesigning video system
rdos wrote: It is not necessary to use floating-point in a GUI. Integers are quite adequate, unless you do rendering software.
Where did floating-point come from?
Re: Redesigning video system
berkus wrote: The only little difference being that Compiz doesn't do raster-only graphics... but I guess that's beyond your imagination.
Of course I do non-raster graphics.