should this forum dedicate one section for GPU programming?
-
- Member
- Posts: 396
- Joined: Wed Nov 18, 2015 3:04 pm
- Location: San Jose San Francisco Bay Area
- Contact:
should this forum dedicate one section for GPU programming?
The reasons why I think it is necessary to cover GPU programming on osdev.org:
- I have been reading several substantial books, doing training, and recently worked through the code examples in the "CUDA by Example" book. It is obvious GPU programming is becoming easier and more accessible to programmers. Many operating system concepts are tied to GPU programming: threads, memory management, memory-related concepts (pinned, host, device, etc.), streams, and so on (see the sketch after this list).
- GPU programming is strongly tied to CPU programming; most or all of it involves interaction between CPU and GPU: data transfer, task allocation, and assignment.
- By incorporating a GPU programming section, this forum becomes a more forward-looking place, embracing future technology trends, of which GPU programming is certainly one.
- nvidia and GPU programming almost refer to the same thing. The nvidia DevTalk forum has a dedicated GPU programming section, but from what I have experienced, user participation there is rare compared to this forum.
- CUDA being a newer technology, someone learning GPU programming does not find many online resources available for technical discussion.
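(For concreteness, a minimal CUDA sketch of the concepts mentioned in the first bullet - pinned host memory, device memory, a stream, and host/device transfer. This is an illustrative example in the spirit of "CUDA by Example", not code from the book, and error checking is omitted for brevity.)

#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel: one GPU thread per array element.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *host;                           // pinned (page-locked) host memory:
    cudaMallocHost((void **)&host, bytes); // required for truly async copies
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev;                            // device (GPU) memory
    cudaMalloc((void **)&dev, bytes);

    cudaStream_t stream;                   // a stream is an ordered work queue
    cudaStreamCreate(&stream);

    // Queue copy-in, kernel, copy-out on one stream; they run in order,
    // asynchronously with respect to the CPU.
    cudaMemcpyAsync(dev, host, bytes, cudaMemcpyHostToDevice, stream);
    scale<<<(n + 255) / 256, 256, 0, stream>>>(dev, n, 2.0f);
    cudaMemcpyAsync(host, dev, bytes, cudaMemcpyDeviceToHost, stream);

    cudaStreamSynchronize(stream);         // block until the queue drains
    printf("host[0] = %f\n", host[0]);     // expect 2.0

    cudaStreamDestroy(stream);
    cudaFree(dev);
    cudaFreeHost(host);
    return 0;
}

Compile with nvcc (e.g. "nvcc -o scale scale.cu") on a machine with an nVidia GPU. Nearly every line above maps onto an OS concept: page-locked memory, DMA-style transfers, and a scheduler-like work queue.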
key takeaway after spending years in the sw industry: a big issue is small because everyone jumps on it and fixes it; a small issue is big since everyone ignores it and it causes catastrophe later. #devilisinthedetails
Re: should this forum dedicate one section for GPU programmi
No. This is a forum for OS development and not for general programming. The driver side of GPU programming is well covered by the OS development subforum. The application side of GPU programming (CUDA or whatever) is better discussed elsewhere.
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].
Re: should this forum dedicate one section for GPU programmi
I agree. This site serves its purpose better by remaining focussed on OS topics. Otherwise, where do you draw the line?
-
- Member
- Posts: 396
- Joined: Wed Nov 18, 2015 3:04 pm
- Location: San Jose San Francisco Bay Area
- Contact:
Re: should this forum dedicate one section for GPU programmi
I am almost posting in desperation. The Nvidia DevTalk forum is mostly deserted; posts rarely get support there.
Not sure if there is any other good forum out there.
key takeaway after spending years in the sw industry: a big issue is small because everyone jumps on it and fixes it; a small issue is big since everyone ignores it and it causes catastrophe later. #devilisinthedetails
Re: should this forum dedicate one section for GPU programmi
It looks fairly busy to me - https://devtalk.nvidia.com/default/boar ... rformance/
Certainly busier than I would expect a section of this forum to be if it was dedicated to that topic.
- Schol-R-LEA
- Member
- Posts: 1925
- Joined: Fri Oct 27, 2006 9:42 am
- Location: Athens, GA, USA
Re: should this forum dedicate one section for GPU programmi
For general nVidia graphics programming using the official drivers, the best places to look will probably be game-dev forums and the communities of the more advanced rendering projects such as Blender or POV-Ray.
For using CUDA for general-purpose programming... well, the rendering communities could help, as can those using GPGPUs for HPC tasks such as protein folding, but the real experts are likely to be... cryptocurrency miners. Yeah, I know, but they do have a strong motivation to get it right...
Note that the reason 'CUDA programming' is synonymous with 'nVidia' is that CUDA is nVidia's implementation of GPGPU operations; AMD calls theirs 'GPU-Compute', and Intel pretty much ignored the topic for their iGPUs until recently. There is a common standard called OpenCL that all of them (including Intel) support for programming their GPUs, but most have some additional features that can't be accessed through OpenCL alone. OpenCL isn't specific to GPGPU operations, or even to graphics in general, being a standard for heterogeneous platform programming of all sorts, but GPGPU is what it is most often used for; since the current plans are to merge it with Vulkan, that connection to GPU programming is bound to grow stronger.
For nVidia driver-level programming? There really aren't any resources at all, or if there are, they are on a private intranet inside nVidia's own corporate network. They have only released incomplete information about their GPUs' internals (and that only in the past five years), and in the past have threatened legal action against those who distributed unofficial drivers or any information based on reverse engineering of their hardware (e.g., the Nouveau project).
They have backed off a bit on the legal threats in the past several years, at least regarding Nouveau, but they are still very guarded about most of the details. There are some indications that they incorporate extra, undocumented changes into each new generation of hardware specifically to break compatibility with Nouveau and slow down its attempts to support newer GPUs - a move which is legal, as it serves to protect their IP, but one which tends to draw the ire of the FOSS community (it used to be a near-universal tactic among hardware manufacturers, and is still used by a number of other companies, such as Qualcomm, even now).
The best idea for writing your own nVidia driver is to go over the Nouveau source and docs, and maybe their mailing lists. Or just stick to AMD (who have published most of the details needed for a complete driver since 2014, though some claim they keep a few details under wraps to ensure that people preferentially use their official drivers) or Intel (who have been publishing their iGPU details for over a decade, though until this year they showed little interest in improving the iGPUs themselves, relegating them to very basic functionality, figuring that anyone who needs more would go with a discrete GPU anyway). It isn't an ideal solution, admittedly, but there really aren't many alternatives right now.
Mind you, if you are talking about an SoC such as the Raspberry Pi, then that's different anyway; while some maker-grade ARM systems use Tegra (which is used in nVidia's own SoC designs such as the Shield), most ARM-based SoCs use a GPU based on the Mali core, or something like the Broadcom Videocore, while MIPS systems usually use Videocore or Adreno, I think.
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.
Re: should this forum dedicate one section for GPU programmi
Schol-R-LEA wrote:Mind you, if you are talking about an SoC such as the Raspberry Pi, then that's different anyway; while some maker-grade ARM systems use Tegra (which is used in nVidia's own SoC designs such as the Shield), most ARM-based SoCs use a GPU based on the Mali core, or something like the Broadcom Videocore, while MIPS systems usually use Videocore or Adreno, I think.
You forgot Imagination PowerVR GPUs. They are second in the SBC market after ARM's Mali; everything else is almost absent, because Qualcomm's pebbles are too expensive to get into an SBC market targeting poor nerds, and RPi, yeah, is a special case with their dedication to Broadcom. There is also the Vivante vendor; my Samsung tablet uses a GPU from it. Let's face it, the RPi is the suckiest SBC that ever existed.
But the situation with documentation openness is as with nVidia - there is zero of it. Or subzero. And, of course, the GPUs on SBCs are only 3D engines/accelerators; 2D, display driving, image processing, video codecs, etc. are other fully closed modules from other vendors. *sigh*
For statistics, here are the boards I have:
Board, SoC, GPU:
Mips Creator CI20, Ingenic jz4780, PowerVR SGX540
Beagle Bone Black, TI Sitara am3358, PowerVR SGX530
CSA90, Rockchip rk3368, PowerVR Rogue G6110
Cubieboard 2, Allwinner a20, Mali400
Pine64+, Allwinner a64, Mali400MP2
Banana Pi M2 Ultra, Allwinner r40, Mali400MP2
- Schol-R-LEA
- Member
- Posts: 1925
- Joined: Fri Oct 27, 2006 9:42 am
- Location: Athens, GA, USA
Re: should this forum dedicate one section for GPU programmi
Urk, how did I overlook PowerVR? Me r Teh dumazzz.
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.
-
- Member
- Posts: 396
- Joined: Wed Nov 18, 2015 3:04 pm
- Location: San Jose San Francisco Bay Area
- Contact:
Re: should this forum dedicate one section for GPU programmi
Good discussion and insights, very informative.
key takeaway after spending years in the sw industry: a big issue is small because everyone jumps on it and fixes it; a small issue is big since everyone ignores it and it causes catastrophe later. #devilisinthedetails
Re: should this forum dedicate one section for GPU programmi
zaval wrote:Let's face it, the RPi is the suckiest SBC that ever existed.
Here is a fresh analysis confirming this statement on their latest offering, from dudes that do more with it than just babbling.
- Schol-R-LEA
- Member
- Posts: 1925
- Joined: Fri Oct 27, 2006 9:42 am
- Location: Athens, GA, USA
Re: should this forum dedicate one section for GPU programmi
zaval wrote:Here is a fresh analysis confirming this statement on their latest offering, from dudes that do more with it than just babbling. Let's face it, the RPi is the suckiest SBC that ever existed.
I would hardly consider them to be an unbiased source of information on their competitors. Taking what the manufacturers of the Libre boards say about the RPi 3 B+ at face value is like accepting something which Intel says about an AMD processor (or vice versa), or which AMD says about an nVidia GPU (ditto).
Or, to stay on topic, something which the Raspberry Pi Foundation says about the LeMaker Guitar, the ODROID XU4, or the Banana Pi M2.
I agree that the Libre Le Potato and Renegade are superior boards to the RPi in some respects, especially in terms of having open documentation for pretty much the whole system (though I have no idea whether the quality of said documentation is comparable). If you exclude the open-source aspect (WRT some of the subsystems), the same is also true of the Imagination Creator boards (especially if you want a MIPS CPU, as I do), and it is a shame that those don't appear to be made any more (it is possible that Tallwood, the new owners of the MIPS IP, will make a new reference SBC line for the MIPS architecture, but I haven't seen any sign of it so far). The Tinker Board was technically superior to the RPis of two years ago as well, and is still a match for the 3 B+ (though a lot more locked down, IIUC). There are several others one could say the same about.
However, most of them are more expensive (only slightly more for the Le Potato and Renegade, considerably more for the Tinker and the Creator), none have the massive user community the RPi does, and frankly, none are as useful for the primary purpose of the RPi: education.
Large swathes of the maker community may be in love with the RPi (often in a dysfunctional, co-dependent way), but the main purpose of the RPi was, and remains, to be something cheap enough for schools to buy in volume, and simple enough to set up that primary school students wouldn't be scared off by it. They are intended for kids who have already mastered LEGO Mindstorms and littleBits, but aren't quite ready for Arduinos.
Yes, the Zero and the CM are clearly meant for maker and industrial applications, and the Raspberry Pi Foundation fall all over themselves to assist people using the main-line RPis for those things as well (I've been led to understand that their documentation is fantastic, at least for the things that aren't under an IP lockdown), but the real focus is still on teaching kids electronics.
Besides, if you want to talk disappointments, take a look at the Onion Omega 2+, an inexpensive MIPS-based IoT setup often matched against the RPi Zero. I looked into using one as an OS-dev target, but the people on their forum basically said "don't bother", and for good reason, it turns out. The documentation is terrible, and the wifi subsystem on the MediaTek MT7688 SoC they use - the primary communication channel to the device, though since you need to connect the core module to an expansion dock just to power the thing, you could get around that - is a locked-down proprietary system, which basically means that bare-metal programming isn't a realistic option at all. Mind you, it really is meant to compete more with Arduinos and Seeeds anyway, but still, that didn't leave me feeling very good about it.
(Using DiscWhores for their forum software didn't win any points with me, either, but that's my negative experiences with Jeff Atwood and Co. on the Daily WTF forums talking.)
Last edited by Schol-R-LEA on Wed May 02, 2018 7:55 am, edited 1 time in total.
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.
Re: should this forum dedicate one section for GPU programmi
zaval wrote:Here is a fresh analysis confirming this statement on their latest offering, from dudes that do more with it than just babbling. Let's face it, the RPi is the suckiest SBC that ever existed.
I will use the thread departure and just ask - can anyone explain why the memcpy and memset results ("Memory Throughput") are so asymmetrical? Is it an ARM feature missing in the RPi SoCs?
Re: should this forum dedicate one section for GPU programmi
simeonz wrote:I will use the thread departure and just ask - can anyone explain why the memcpy and memset results ("Memory Throughput") are so asymmetrical? Is it an ARM feature missing in the RPi SoCs?
There is no such feature. Maybe it's just some memset optimization magic they forgot to switch off on the rk3328 and s905x boards.
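(A note on measuring this: below is a minimal sketch, assuming a POSIX system with clock_gettime, of how such memcpy/memset throughput comparisons are typically made. Keep in mind that memcpy reads and writes every byte while memset only writes, so some asymmetry is expected even without any special hardware feature; the exact ratio depends on the cache and the memory controller.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Run one operation repeatedly over the buffer and return GB/s. */
static double rate(void (*op)(char *, char *, size_t),
                   char *dst, char *src, size_t len, int reps) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < reps; ++i) op(dst, src, len);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    return (double)len * reps / secs / 1e9;
}

static void do_memcpy(char *d, char *s, size_t n) { memcpy(d, s, n); }
static void do_memset(char *d, char *s, size_t n) { (void)s; memset(d, 0, n); }

int main(void) {
    size_t len = 64 * 1024 * 1024;  /* well beyond any cache size */
    char *src = (char *)malloc(len);
    char *dst = (char *)malloc(len);
    memset(src, 1, len);            /* fault the pages in first, so */
    memset(dst, 1, len);            /* allocation cost isn't timed  */

    printf("memcpy: %.2f GB/s\n", rate(do_memcpy, dst, src, len, 10));
    printf("memset: %.2f GB/s\n", rate(do_memset, dst, src, len, 10));

    free(src);
    free(dst);
    return 0;
}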
Re: should this forum dedicate one section for GPU programmi
I, personally, think that a sub-section could be a reasonable thing, but not an entire, separate new section. GPU programming is, at least in a sense, a part of operating system development, so it makes no sense to segregate them into two separate sections. But again, a sub-section could be reasonable, not a whole new section, at least in my mind.