
Cuda



WILL
15-04-2011, 07:11 PM
Has anyone done much research or development with NVIDIA's new CUDA technology?

Read: What is CUDA? (http://www.nvidia.com/object/what_is_cuda_new.html)

I'm wondering what could be done with the technology and where it can be applied in games...

code_glitch
15-04-2011, 08:05 PM
Well, according to NVIDIA we could make some pretty outstanding stuff: better graphics (maybe 128x anti-aliasing...), bloom, physics, etc., all without a CPU. So, are you sure you want a CPU on your board? Nah, just a GPU...

Frankly though, I find CUDA a little niche and obscure to code in, since you have to assume that every client has the right OS, card and motherboard for it to work. :( Shame.

WILL
16-04-2011, 12:28 AM
Well, the (proper! let's not get into that again... ;)) graphics drivers would add support for the CUDA features of the card, no?

And how does one have a computer run without a processor? You don't happen to have a USB plug sticking out of the back of your head do you? :P

I for one would like to see some of our very talented folks (Luuk, Sascha, Nitro or the NECROubers) give a quick demo a try. I'm sure I'd be amazed at what they manage to do.

MuteClown
16-04-2011, 10:25 AM
CUDA is like OpenCL?

Bad Sector
02-05-2011, 05:06 AM
OpenCL has a very similar design to CUDA's "driver API". CUDA also has a "CUDA C" compiler (nvcc), which is basically a preprocessor: it takes an enriched C, generates code that uses the "driver API", and uses a real C compiler (GCC or MSVC) to compile the result.
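For the curious, here is a minimal sketch of that "enriched C" (illustrative only, not code from my test): the __global__ qualifier and the <<< >>> launch syntax are exactly what nvcc translates before handing the rest over to GCC/MSVC.

/* kernel: each thread adds one element of the two input arrays */
__global__ void add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  /* global thread index */
    if (i < n)
        c[i] = a[i] + b[i];
}

/* host side: launch enough 256-thread blocks to cover all n elements */
/* add<<<(n + 255) / 256, 256>>>(dev_a, dev_b, dev_c, n); */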

Personally, I tried CUDA back when my GTX280 was brand new, and it was nice. For parallelism-friendly algorithms, such as raytracing, the speedup can be huge (I made a test which ran at 2-3 fps on the CPU using C and at 560+ fps in CUDA).

I also tried to use it with Lazarus:

http://dl.dropbox.com/u/5698454/tlazcudatest.png

This is basically the same test, slightly modified. The raytracer itself is written in "CUDA C" and linked with the Lazarus application, which does the presentation. (It is slower because I download the image I get from CUDA to the CPU and upload it back to the GPU using Lazarus' OpenGL control, while the C version didn't do that part. I don't remember why I did it that way in Lazarus, though... it has been about two years since I wrote that.)
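For anyone wanting to try a similar setup, a hypothetical sketch of the bridge (the names and the stand-in kernel are mine, not the code in the screenshot): the CUDA side exports a plain C entry point, and Lazarus imports it as an external cdecl procedure.

#include <cuda_runtime.h>

/* stand-in kernel: fills the frame with a gradient instead of raytracing */
__global__ void render_kernel(unsigned char *pixels, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;
    int i = (y * width + x) * 4;                      /* RGBA8 pixel offset */
    pixels[i + 0] = (unsigned char)(255 * x / width);
    pixels[i + 1] = (unsigned char)(255 * y / height);
    pixels[i + 2] = 0;
    pixels[i + 3] = 255;
}

/* C linkage so FPC can bind it, e.g.:
   procedure raytrace_frame(pixels: PByte; w, h: LongInt); cdecl; external; */
extern "C" void raytrace_frame(unsigned char *host_pixels, int width, int height)
{
    unsigned char *dev_pixels;
    size_t size = (size_t)width * height * 4;
    cudaMalloc((void **)&dev_pixels, size);
    dim3 block(16, 16);
    dim3 grid((width + 15) / 16, (height + 15) / 16);
    render_kernel<<<grid, block>>>(dev_pixels, width, height);
    /* the round trip described above: download the image back to the CPU */
    cudaMemcpy(host_pixels, dev_pixels, size, cudaMemcpyDeviceToHost);
    cudaFree(dev_pixels);
}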

However, personally, today I would use OpenCL instead. It is a proper open standard, more widely supported than CUDA, and some configurations (like those from AMD and Apple, I think) can use both the CPU and the GPU at the same time.
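As a rough illustration of that flexibility (a minimal sketch, assuming an OpenCL SDK is installed; error checking omitted), enumerating the devices on the first platform will list CPUs and GPUs alike:

#include <CL/cl.h>   /* OpenCL/opencl.h on Mac OS X */
#include <stdio.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id devices[8];
    cl_uint count = 0;
    char name[256];

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 8, devices, &count);
    for (cl_uint i = 0; i < count; ++i) {
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        printf("device %u: %s\n", i, name);  /* e.g. one CPU and one GPU */
    }
    return 0;
}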

Ingemar
03-10-2011, 08:22 AM
OpenCL and CUDA are both similar and different. I will be responsible for the GPU part of a course later this fall, where we primarily teach CUDA. Why CUDA and not OpenCL? Because CUDA is a million times easier to get started with. CUDA integrates CPU and GPU code very neatly, while an OpenCL program looks more like an OpenGL/GLSL program.

But OpenCL is more open. CUDA can only be used through NVIDIA's own compiler (which uses GCC for the CPU part). OpenCL can be used from anything, FPC included. (I have working examples using FPC.) The kernels are quite similar between CUDA and OpenCL, but must, sadly, be written in C syntax.
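A sketch of what I mean (error handling omitted; the same calls work from FPC through any OpenCL header translation): like a GLSL shader, the kernel travels as a source string and is compiled at runtime.

#include <CL/cl.h>

/* the kernel is just a string, compiled when the program runs */
static const char *src =
    "__kernel void add(__global const float *a,"
    "                  __global const float *b,"
    "                  __global float *c)"
    "{ int i = get_global_id(0); c[i] = a[i] + b[i]; }";

cl_kernel build_add_kernel(cl_context ctx, cl_device_id dev)
{
    cl_int err;
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL); /* runtime compile */
    return clCreateKernel(prog, "add", &err);        /* look up entry point */
}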

LP
03-10-2011, 02:45 PM
I've used CUDA for one piece of research work a while back. However, I think the language itself is in its infancy; I'd say it is much less evolved than HLSL, for instance. In some cases, you are better off working with actual shaders (say, SM4) for GPGPU than with CUDA itself.

The speedup is subjective, in the sense that GPGPU lets you do very simple things very fast.

If you need to do complex tasks, I'd recommend multi-threading on an actual CPU. In our latest publication, which will appear in November this year at a conference on optimization and computer software, we have performance benchmarks of an illumination approach developed both on the GPU and on the CPU with multi-threading, showing that CPUs such as the Core i7 Q840 are extremely powerful when using multiple threads on a 64-bit platform. We don't even use the CPU to its full potential; SSE2+ could be used to speed things up even more.

I personally find CUDA a very low-level language, not far from raw x86/x64 assembly, except that it uses C syntax instead of actual instructions. Throughout the entire code you need to think about the hardware architecture and how to adapt your problem to the hardware, not the other way around. It is nice for video editing, massive image processing and other such tasks, but even in this area, from a developer's point of view, you'll save a lot of time and headaches if you develop for the CPU instead.
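(Illustrative only, not the benchmark code from our paper: the CPU-side multi-threading can be as little as one OpenMP pragma, compiled with -fopenmp, with SSE2+ vectorization available as a further layer on top.)

/* each thread processes whole rows; the loop body stands in for the
   real per-pixel illumination work */
void shade_rows(float *image, int width, int height)
{
    #pragma omp parallel for
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            image[y * width + x] *= 0.5f;
}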

Ingemar
03-10-2011, 03:13 PM
In some cases, you are better off working with actual shaders (say, SM4) for GPGPU than with CUDA itself. [...] you'll save a lot of time and headaches if you develop for the CPU instead.
You are quite right that shaders can do just as well. The shader languages are mature, very fast and available without extra installers. I always insist on "don't count out shaders" when I teach GPU computing.

However, shaders don't give you the optimization options that CUDA does. Yes, you have to tweak the algorithm to fit the hardware, but that is where CUDA is at its best, and it often outperforms CPUs by a large margin.
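One concrete example of such an option (a minimal sketch, assuming a launch with 256 threads per block): CUDA exposes on-chip shared memory and explicit barriers, neither of which classic pixel shaders offer.

__global__ void block_sum(const float *in, float *out)
{
    __shared__ float buf[256];              /* fast on-chip memory, per block */
    int tid = threadIdx.x;
    buf[tid] = in[blockIdx.x * blockDim.x + tid];
    __syncthreads();                        /* explicit barrier, CUDA-only */
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s)
            buf[tid] += buf[tid + s];       /* tree reduction in shared memory */
        __syncthreads();
    }
    if (tid == 0)
        out[blockIdx.x] = buf[0];           /* one partial sum per block */
}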

chronozphere
08-10-2011, 09:01 AM
I've looked into CUDA and I would love to do more with it in the future. I'm considering picking this as the topic for my Bachelor's thesis. :)

From what I have seen, CUDA provides a programming model that enables developers to use the power of the GPU for things other than 3D graphics. You could do interesting things like analyzing big images, financial computations, biological computations, cryptography/hacking and much more. I don't think CUDA has a lot to offer when it comes to graphics for games, because we already have the pipelines and shader languages for that. However, it could be interesting to use the GPU for physics simulations in games, though you'd need to do some research on how to split the computation units inside the GPU architecture so that they can handle traditional shaders plus your own physics.

Also, note that CUDA is aimed at NVIDIA's GPUs. OpenCL would be a more attractive alternative when you want to make things work on AMD's cards as well. However, this is a research project on its own, not something to "just try" when you aim to make a game. ;)

Ingemar
29-12-2011, 11:23 PM
Recently, I was responsible for teaching CUDA as well as OpenCL in a new course here at our university. I have run a few similar courses before, for graduate students, but this was the first time for undergraduates.

Quite enjoyable, and I feel that I deepened my own knowledge of CUDA at the same time.

But do you know what annoys me, a lot? That GLSL, CUDA and OpenCL all lock you into that damn C syntax (a 40-year-old hack with a pile of obvious mistakes that nobody bothered to fix). But if I could do something about it, if I could make "CUDA for Pascal programmers", would anyone bother?

LP
30-12-2011, 05:54 AM
But if I could do something about it, if I could make "CUDA for Pascal programmers", would anyone bother?
Not sure about CUDA and GLSL, but previously you could, in theory, write a Pascal compiler for shader code that generated HLSL assembly and then use fxc to compile that assembly. However, in the latest versions they have deprecated assembly. On the other hand, if you managed to compile Pascal shader code directly, that could be quite interesting.

I wouldn't say that GLSL and HLSL are strictly C, because they have facilities for vector math and other operations, and the code is usually pretty basic, so there is not much you can improve by using Pascal syntax. However, if you are up for the task, it would be great if your compiler had a framework similar to the defunct/dying Microsoft Effect (*.fx) framework; it might not be that popular among game developers, but for scientific applications it really helps.

Ingemar
30-12-2011, 08:11 AM
[...] it would be great if your compiler had a framework similar to the defunct/dying Microsoft Effect (*.fx) framework.

Yes, shaders could be written in assembly in the past, but that is deprecated, so I don't know whether all the new features can be accessed that way. I think it is easier to just convert Pascal syntax to C syntax; that should be pretty straightforward.
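A sketch of the kind of mapping I have in mind (the Pascal-style kernel syntax below is hypothetical, purely to show how direct the conversion would be):

/* hypothetical Pascal input:

   procedure Add(a, b, c: PSingle; n: Integer); kernel;
   var i: Integer;
   begin
     i := blockIdx.x * blockDim.x + threadIdx.x;
     if i < n then c[i] := a[i] + b[i];
   end;
*/

/* the CUDA C a converter could emit for it: */
__global__ void Add(float *a, float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}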

With more and more programming moving to shaders as well as CUDA and OpenCL, I think it is vital for the Pascal language (and related languages like Ada) to have that support, so programmers aren't pushed towards the C syntax - again. I don't mind jumping between two different syntaxes and languages, but I know people who can't.