GPU based?
I'm no expert in OS development at all, but I was wondering if there is any interest in the use of graphics cards in operating systems - e.g., an OS boots up, loads its kernel into the video card's memory, and runs mainly from the GPU, with the help of the CPU. It would be an interesting test. I know the Folding@home team have created ways to tap into this power. Ideas and thoughts? Good, Bad or Ugly?
- Steve the Pirate
- Member
- Posts: 152
- Joined: Fri Dec 15, 2006 7:01 am
- Location: Brisbane, Australia
- Contact:
Running things on the GPU is a good idea. In fact I'm trying to include just that in SharpOS. But you can't run an operating system on the GPU just yet. The problem is, the GPU is good at running a single math algorithm upon a large data set. Example:
You get two arrays in and you need to multiply the values of both arrays at a specific index and add a third offset. The GPU is fast in this case.
The GPU is really slow at random memory access, and especially slow for non-math algorithms, as branching and even calling subroutines are not strengths there.
So you'll have problems running general purpose algorithms there.
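To make that concrete, a minimal CUDA sketch of the pattern described above might look like this (my own illustration - the kernel and names are made up, nothing SharpOS-specific):

[code]
#include <cuda_runtime.h>

// out[i] = a[i] * b[i] + offset -- every element is independent, so
// each GPU thread computes one index, with essentially no branching.
__global__ void mulAddKernel(const float *a, const float *b,
                             float *out, float offset, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a[i] * b[i] + offset;
}

// Host-side launch: one thread per element, 256 threads per block.
// a, b and out are assumed to be device pointers (cudaMalloc'd).
void mulAdd(const float *a, const float *b, float *out,
            float offset, int n)
{
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    mulAddKernel<<<blocks, threads>>>(a, b, out, offset, n);
}
[/code]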
- piranha
- Member
- Posts: 1391
- Joined: Thu Dec 21, 2006 7:42 pm
- Location: Unknown. Momentum is pretty certain, however.
- Contact:
You'd have to run on the CPU first to load drivers for the GPU; then, as said before, I would run math algorithms on the GPU.
But then what do you use for speedy graphics?
-JL
SeaOS: Adding VT-x, networking, and ARM support
dbittman on IRC, @danielbittman on twitter
https://dbittman.github.io
Considering that current graphics cards have 128 or more stream processors, you'll still be able to draw your fancy graphics.
Additionally, in the coming years more and more PCs will have two or more GPUs (one onboard, with the secondary free for advanced calculations/3D graphics) - so a design which allows for scheduled GPU use, like a classic OS thread scheduler, is destined to be useful.
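As a sketch of what "scheduled GPU use" could look like with the CUDA runtime API - the round-robin policy, the task kernel, and all names here are illustrative assumptions, not a real scheduler:

[code]
#include <cuda_runtime.h>

__global__ void task(float *data, int n)      // stand-in for real work
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= 2.0f;
}

// Naive round-robin "GPU scheduler": hand each task to the next GPU,
// the way an OS scheduler hands threads to the next free CPU core.
// Assumes buffers[t] was allocated on device t % deviceCount.
void scheduleTasks(float **buffers, int nTasks, int n)
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    for (int t = 0; t < nTasks; ++t) {
        cudaSetDevice(t % deviceCount);       // pick the next GPU
        int threads = 256;
        int blocks  = (n + threads - 1) / threads;
        task<<<blocks, threads>>>(buffers[t], n);
    }
}
[/code]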
- karloathian
- Posts: 22
- Joined: Fri Mar 28, 2008 12:09 am
I read somewhere that NVIDIA and ATI are working a lot on making programs that use that great floating point power. I also heard that Folding@home is going to start using the GPU too. I just looked it up on Wikipedia and found [img]http://upload.wikimedia.org/wikipedia/commons/thumb/9/98/F%40H_FLOPS_per_client.svg/350px-F%40H_FLOPS_per_client.svg.png[/img]
os dev is so metal
Just wanted to clarify a few points:
GPUs are great for doing a similar function on a massive number of data sets. So if you have a very large data set that needs some function performed, this may indeed be useful - provided the time to send the data and receive the answer doesn't outweigh the speed benefit of using the GPU in the first place. On the other hand, GPUs suck at branching, and most code we use relies heavily on branching. I just don't think the information processing done in an OS is very conducive to a GPU.

Now, does that mean that other applications won't be able to benefit? No, as long as the application has large data sets with a similar process and limited branching. Having a bunch of threads running that all require different things to be processed would be useless to try to port to a GPU, as would a kernel that doesn't do much except handle the hardware abstraction and IPC...
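To illustrate the branching point, here's a hypothetical CUDA kernel where threads in the same warp take different paths. The hardware executes both sides of the branch for every warp, masking off the inactive threads, so divergent code wastes a large share of the GPU's throughput:

[code]
// Threads in a 32-thread warp execute in lockstep. Here even and odd
// threads take different branches, so the hardware runs BOTH branches
// for every warp, masking out the inactive half each time -- the warp
// does roughly twice the work of a branch-free version.
__global__ void divergent(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n)
        return;

    if (i % 2 == 0)
        data[i] = data[i] * data[i];   // even-numbered threads
    else
        data[i] = data[i] + 1.0f;      // odd-numbered threads
}
[/code]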
- Combuster
- Member
- Posts: 9301
- Joined: Wed Oct 18, 2006 3:45 am
- Libera.chat IRC: [com]buster
- Location: On the balcony, where I can actually keep 1½m distance
- Contact:
That does not mean that the GPU isn't useful, though - it is a separate processor, and even though most programs would get sucky performance, it is an extra computing resource that can handle some of the system's load.
Things like an MP3 player wouldn't be too bad really to run on it. The audio decoding is math-intensive (inverse discrete cosine transform), and the front end is a bit of graphics that would go to the video card anyway. Set up a DMA transfer directly to video memory and you have one cycle eater off the main CPU.
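For a feel of why that decoding step maps well to a GPU, here's a naive CUDA sketch of a DCT-style synthesis stage (illustrative only - a real MP3 IMDCT adds windowing, overlap-add, and fixed block sizes that are glossed over here):

[code]
// Naive inverse-DCT-style synthesis: each output sample is an
// independent sum over all input coefficients, so one thread can
// compute one sample -- exactly the math-heavy, data-parallel shape
// that suits a GPU. N would be the decoder's block size.
__global__ void idctKernel(const float *coef, float *pcm, int N)
{
    int n = blockIdx.x * blockDim.x + threadIdx.x;
    if (n >= N)
        return;

    const float PI = 3.14159265f;
    float sum = 0.0f;
    for (int k = 0; k < N; ++k)
        sum += coef[k] * cosf(PI / N * (n + 0.5f) * (k + 0.5f));
    pcm[n] = sum;
}
[/code]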
Hi,
Combuster wrote:Things like an MP3 player wouldn't be too bad really to run on it. The audio decoding is math-intensive (inverse discrete cosine transform), and the front end is a bit of graphics that would go to the video card anyway. Set up a DMA transfer directly to video memory and you have one cycle eater off the main CPU.

I'd assume data would go from disk to RAM to video/GPU to RAM to sound card (four transfers over the PCI bus), instead of from disk to RAM to CPU to RAM to sound card (two transfers over the PCI bus). I'd also assume that the PCI bus is used for other things at the same time - e.g. doubling the PCI bus bandwidth used by MP3 decoding reduces the performance of networking, file I/O, etc.
So, for an MP3 player, would overall system performance improve due to having more free CPU time, or would overall system performance be worse due to PCI bus bandwidth limitations?
Of course the fastest way would probably be to store the pre-processed MP3 data on disk (e.g. in a cache of some sort), so that you could send the pre-processed data from disk to RAM to sound card without doing any processing at all.
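As a rough back-of-envelope, assuming a 128 kbit/s MP3 decoding to 44.1 kHz 16-bit stereo PCM (my numbers, host-side code for illustration):

[code]
#include <cstdio>

int main()
{
    // One second of audio, in bytes/s (assumed rates, not measurements).
    const double mp3Rate = 128e3 / 8;        // 128 kbit/s MP3   ~ 16 KB/s
    const double pcmRate = 44100.0 * 2 * 2;  // 16-bit stereo   ~ 176 KB/s

    // CPU decode: disk->RAM (MP3) + RAM->sound card (PCM).
    const double cpuPath = mp3Rate + pcmRate;

    // GPU decode adds RAM->GPU (MP3) and GPU->RAM (PCM) transfers.
    const double gpuPath = 2 * mp3Rate + 2 * pcmRate;

    printf("CPU path: %.0f KB/s, GPU path: %.0f KB/s\n",
           cpuPath / 1000, gpuPath / 1000);  // ~192 vs ~385 KB/s
    return 0;
}
[/code]

Tiny next to a ~133 MB/s PCI bus for a single stream, but it roughly doubles the decoder's bus traffic, and it competes with every other DMA user on the same bus.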
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.