GPU based?

TheSkorm
Posts: 2
Joined: Wed Apr 30, 2008 4:41 am


Post by TheSkorm »

I'm no expert in OS development at all, but I was wondering if there was any interest in the use of graphics cards in operating systems - e.g. an OS boots up, loads its kernel into the video card's memory, and runs mainly on the GPU with the help of the CPU. It would be an interesting test. I know the Folding@home team has created ways to tap into this power. Ideas and thoughts? Good, bad or ugly?
Steve the Pirate
Member
Posts: 152
Joined: Fri Dec 15, 2006 7:01 am
Location: Brisbane, Australia

Post by Steve the Pirate »

I don't know if that's possible, but it sounds unlikely.

If it were, I think the biggest problem would be supporting more than one or two cards - they aren't really standardised like regular CPUs, are they?
My Site | My Blog
Symmetry - My operating system.
TheSkorm
Posts: 2
Joined: Wed Apr 30, 2008 4:41 am

Post by TheSkorm »

If we were able to focus on the main groups of cards (ATI, NVIDIA and possibly Intel), which keep their chips fairly similar, we could harness the power of both the CPU and GPU at the same time. Very, very loose idea.
grover
Posts: 17
Joined: Wed Apr 30, 2008 7:20 am

Post by grover »

Running things on the GPU is a good idea. In fact, I'm trying to include just that in SharpOS. But you can't run an operating system on the GPU just yet. The problem is, the GPU is good at running a single math algorithm over a large data set. For example:

You get two arrays in, and at each index you need to multiply the two values and add a third offset. The GPU is fast in this case.
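
A rough CUDA-style sketch of that pattern (the kernel name and launch sizes are invented for illustration):

Code: Select all

// One GPU thread per array element: every thread runs the same
// multiply-add, just on a different index.
__global__ void madd(const float *a, const float *b, float offset,
                     float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a[i] * b[i] + offset;
}

// Launch with enough 256-thread blocks to cover all n elements:
//   madd<<<(n + 255) / 256, 256>>>(dev_a, dev_b, 3.0f, dev_out, n);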

The GPU is really slow at random memory access and especially slow for non-math algorithms, since branching and even calling subroutines are not strengths there.

So you'll have problems running general-purpose algorithms there.
piranha
Member
Posts: 1391
Joined: Thu Dec 21, 2006 7:42 pm
Location: Unknown. Momentum is pretty certain, however.

Post by piranha »

You'd have to run on the CPU first to load drivers for the GPU; then, as said before, I would run math algorithms on the GPU.

But then what do you use for speedy graphics?

-JL
SeaOS: Adding VT-x, networking, and ARM support
dbittman on IRC, @danielbittman on twitter
https://dbittman.github.io
grover
Posts: 17
Joined: Wed Apr 30, 2008 7:20 am

Post by grover »

Considering that current graphics cards have 128 or more stream processors, you'll still be able to draw your fancy graphics.

Additionally, in the coming years more and more PCs will have two or more GPUs (one onboard, with the second free for advanced calculations/3D graphics) - so a design which allows for scheduled GPU use, like a classic OS thread scheduler, is destined to be useful.
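
As a very loose sketch, that kind of scheduled GPU use could be layered on CUDA streams (all names here are invented for illustration):

Code: Select all

#include <cuda_runtime.h>

#define NSTREAMS 4              /* arbitrary number of GPU "time slices" */

static cudaStream_t streams[NSTREAMS];
static int next_stream = 0;

/* Create one command queue per slot - roughly analogous to creating
 * kernel threads in a classic scheduler. */
void gpu_sched_init(void)
{
    for (int i = 0; i < NSTREAMS; i++)
        cudaStreamCreate(&streams[i]);
}

/* Hand each submitted task a stream in round-robin order; work queued
 * in different streams can be processed independently by the GPU. */
cudaStream_t gpu_sched_pick(void)
{
    cudaStream_t s = streams[next_stream];
    next_stream = (next_stream + 1) % NSTREAMS;
    return s;
}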
karloathian
Posts: 22
Joined: Fri Mar 28, 2008 12:09 am

Post by karloathian »

Ideally you should run independent parts of the kernel on the GPU, running asynchronously from the CPU.

I think you could use the CUDA architecture, which lets you write generic code instead of 3D-related code, but then again your kernel would have to use a CUDA driver.
grover
Posts: 17
Joined: Wed Apr 30, 2008 7:20 am

Post by grover »

Even with CUDA, the limitation to mathematical algorithms remains. Using a GPU for anything but mass math is worthless - it'll take longer to prepare the command stream and copy the data into GPU memory than to run the work directly on the main processor.
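
To make that concrete, a rough host-side sketch (reusing the hypothetical madd kernel from my earlier post) - for a small n, the three cudaMemcpy calls alone can take longer than the whole loop would on the CPU:

Code: Select all

#include <cuda_runtime.h>

__global__ void madd(const float *a, const float *b, float offset,
                     float *out, int n);   /* kernel sketched earlier */

void madd_on_gpu(const float *host_a, const float *host_b, float offset,
                 float *host_out, int n)
{
    size_t bytes = n * sizeof(float);
    float *a, *b, *out;
    cudaMalloc((void **)&a, bytes);
    cudaMalloc((void **)&b, bytes);
    cudaMalloc((void **)&out, bytes);

    /* Preparing the command stream and copying the data in... */
    cudaMemcpy(a, host_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(b, host_b, bytes, cudaMemcpyHostToDevice);

    madd<<<(n + 255) / 256, 256>>>(a, b, offset, out, n);

    /* ...and copying the result back can dominate the total time. */
    cudaMemcpy(host_out, out, bytes, cudaMemcpyDeviceToHost);

    cudaFree(a);
    cudaFree(b);
    cudaFree(out);
}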
lukem95
Member
Posts: 536
Joined: Fri Aug 03, 2007 6:03 am
Location: Cambridge, UK

Post by lukem95 »

I think it would be more beneficial to have a shared library or something for your maths (and graphics, obviously) APIs, and then have the functions in that call the GPU for code that will be faster.

It might take a lot of testing to see which functions would actually have a speed benefit, though.
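
A very loose sketch of what such a wrapper could look like (the threshold number is made up - finding the real cut-over point is exactly the testing part):

Code: Select all

/* Arbitrary cut-over point: below this, copying data to and from the
 * GPU costs more than the math saves.  Real values need benchmarking. */
#define GPU_THRESHOLD 100000

void lib_madd(const float *a, const float *b, float offset,
              float *out, int n)
{
    if (n >= GPU_THRESHOLD) {
        madd_on_gpu(a, b, offset, out, n);  /* GPU helper, as sketched
                                               earlier in the thread */
    } else {
        for (int i = 0; i < n; i++)         /* small job: CPU loop wins */
            out[i] = a[i] * b[i] + offset;
    }
}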
~ Lukem95 [ Cake ]
Release: 0.08b
insanet
Posts: 4
Joined: Sun May 04, 2008 5:08 pm

Post by insanet »

I read somewhere that NVIDIA and ATI are working a lot on making programs that use that great floating-point power. I also heard that Folding@home is going to start using the GPU too. I just looked it up on Wikipedia and found this chart of FLOPS per client: http://upload.wikimedia.org/wikipedia/commons/thumb/9/98/F%40H_FLOPS_per_client.svg/350px-F%40H_FLOPS_per_client.svg.png
os dev is so metal
Ready4Dis
Member
Posts: 571
Joined: Sat Nov 18, 2006 9:11 am

Post by Ready4Dis »

Just wanted to clarify a few points:

GPUs are great at performing a similar function on a massive number of data sets. So if you have a very large data set that needs some function applied, this may indeed be useful - if the time to send the data and receive the answer doesn't outweigh the speed benefit of using the GPU to begin with.

Now, GPUs suck at branching, and most of the code we use relies heavily on branching. I just don't think the information processing done in an OS is very conducive to a GPU. Does that mean that other applications won't be able to benefit? No - as long as the application has large data sets with a similar process and limited branching. Having a bunch of threads running that all require different things to be processed would be useless to try to port to a GPU, as would a kernel that doesn't do much except handle the hardware abstraction and IPC...
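
A contrived sketch of the branching problem: threads in the same SIMD group that take different sides of an if are serialized, so the hardware ends up paying for both paths (all names invented for illustration):

Code: Select all

__device__ int work_even(int x)   /* stand-ins for two different */
{                                 /* data-dependent computations  */
    for (int k = 0; k < 1000; k++) x += k;
    return x;
}

__device__ int work_odd(int x)
{
    for (int k = 0; k < 1000; k++) x -= k;
    return x;
}

__global__ void divergent(const int *in, int *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n)
        return;
    /* Neighbouring threads alternate between the branches, so each
     * SIMD group executes BOTH bodies, one after the other. */
    if (in[i] % 2 == 0)
        out[i] = work_even(in[i]);
    else
        out[i] = work_odd(in[i]);
}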
grover
Posts: 17
Joined: Wed Apr 30, 2008 7:20 am

Post by grover »

Ready4Dis: That's what I was saying all along and seemingly no one has read the reply...
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance

Post by Combuster »

That does not mean that the GPU isn't useful - it is a separate processor, and even though most programs will get sucky performance, it is an extra computing resource that can handle some of the system's load.

Things like an MP3 player wouldn't be too bad really to run on it. The audio decoding is math-intensive (inverse discrete cosine transform), and the front end is a bit of graphics that would go to the video card anyway. Set up a DMA transfer directly to video memory and you have one cycle eater off the main CPU :)
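
As a rough illustration of why that maps well (real MP3 decoding uses an IMDCT plus a filterbank - this naive inverse DCT with made-up sizes is just a sketch): every output sample is the same branch-free sum, and thousands of frames can be transformed at once.

Code: Select all

#define FRAME 32                  /* samples per frame, arbitrary here */
#define PI 3.14159265358979f

/* One block per audio frame, one thread per output sample.  Pure math,
 * no data-dependent branches - exactly what the GPU is good at. */
__global__ void idct_frames(const float *in, float *out, int nframes)
{
    int frame = blockIdx.x;
    int n = threadIdx.x;
    if (frame >= nframes || n >= FRAME)
        return;

    const float *x = in + frame * FRAME;
    float sum = 0.5f * x[0];
    for (int k = 1; k < FRAME; k++)
        sum += x[k] * cosf(PI / FRAME * (n + 0.5f) * k);
    out[frame * FRAME + n] = sum;
}

/* Launch: idct_frames<<<nframes, FRAME>>>(dev_in, dev_out, nframes); */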
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
Jeko
Member
Posts: 500
Joined: Fri Mar 17, 2006 12:00 am
Location: Napoli, Italy

Post by Jeko »

How can we implement drivers for NVIDIA's CUDA and AMD's Close-to-the-Metal architectures? Are their sources open to the public? If not, how can we do it?
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Post by Brendan »

Hi,
Combuster wrote: Things like an MP3 player wouldn't be too bad really to run on it. The audio decoding is math-intensive (inverse discrete cosine transform), and the front end is a bit of graphics that would go to the video card anyway. Set up a DMA transfer directly to video memory and you have one cycle eater off the main CPU :)
I'd assume data would go from disk to RAM to video/GPU to RAM to sound card (four transfers over the PCI bus), instead of from disk to RAM to CPU to RAM to sound card (two transfers over the PCI bus). I'd also assume that the PCI bus is used for other things at the same time - e.g. doubling the PCI bus bandwidth used by MP3 decoding reduces the performance of networking, file I/O, etc.

So, for an MP3 player, would overall system performance improve due to having more free CPU time, or would overall system performance be worse due to PCI bus bandwidth limitations?

Of course the fastest way would probably be to store the pre-processed MP3 data on disk (e.g. in a cache of some sort), so that you could send the pre-processed data from disk to RAM to sound card without doing any processing at all.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.