
Thread: CUDA

  1. #1
    I used CUDA for a research project a while back. However, I think the language itself is still in its infancy; I'd say it is much less evolved than HLSL, for instance. In some cases you are better off working with actual shaders (say, SM4) for GPGPU than with CUDA itself.

    The speedup is subjective: with GPGPU you can do very simple things very fast.

    If you need to do complex tasks, I'd recommend multi-threading on an actual CPU. In our latest publication, which will appear in November this year at the conference on optimization and computer software, we present performance benchmarks of an illumination approach developed both on the GPU and on the CPU with multi-threading. They show that CPUs such as the Core i7 Q840 are extremely powerful when running multiple threads on a 64-bit platform, and we don't even use the CPU to its full potential; SSE2+ could be used to speed things up even more.

    I personally find CUDA a very low-level language, not far from raw x86/x64 assembly, except that it uses C syntax instead of actual instructions. Throughout the entire code you need to think about the hardware architecture and adapt your problem to the hardware, not the other way around. It is nice for video editing, massive image processing and other similar tasks, but even in this area, from a developer's point of view, you'll save a lot of time and headaches if you develop for the CPU instead.
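
    To illustrate the point, here is a minimal sketch (the names and sizes are just picked for illustration): even adding two arrays makes you deal with blocks, threads and explicit memory transfers yourself.

    Code:
    #include <cuda_runtime.h>

    // The kernel itself: one thread computes one element.
    __global__ void addArrays(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // map thread -> element
        if (i < n)                                      // guard the last, partial block
            c[i] = a[i] + b[i];
    }

    // Host side: you allocate device memory, copy the data and pick the
    // launch geometry yourself.
    void addOnGpu(const float *a, const float *b, float *c, int n)
    {
        float *dA, *dB, *dC;
        size_t bytes = n * sizeof(float);
        cudaMalloc(&dA, bytes);
        cudaMalloc(&dB, bytes);
        cudaMalloc(&dC, bytes);
        cudaMemcpy(dA, a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dB, b, bytes, cudaMemcpyHostToDevice);

        int threads = 256;                         // block size tuned to the hardware
        int blocks  = (n + threads - 1) / threads; // enough blocks to cover n
        addArrays<<<blocks, threads>>>(dA, dB, dC, n);

        cudaMemcpy(c, dC, bytes, cudaMemcpyDeviceToHost);
        cudaFree(dA); cudaFree(dB); cudaFree(dC);
    }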
    Last edited by LP; 03-10-2011 at 02:54 PM.

  2. #2
    Quote Originally Posted by Lifepower View Post
    I used CUDA for a research project a while back. However, I think the language itself is still in its infancy; I'd say it is much less evolved than HLSL, for instance. In some cases you are better off working with actual shaders (say, SM4) for GPGPU than with CUDA itself.

    The speedup is subjective: with GPGPU you can do very simple things very fast.

    If you need to do complex tasks, I'd recommend multi-threading on an actual CPU. In our latest publication, which will appear in November this year at the conference on optimization and computer software, we present performance benchmarks of an illumination approach developed both on the GPU and on the CPU with multi-threading. They show that CPUs such as the Core i7 Q840 are extremely powerful when running multiple threads on a 64-bit platform, and we don't even use the CPU to its full potential; SSE2+ could be used to speed things up even more.

    I personally find CUDA a very low-level language, not far from raw x86/x64 assembly, except that it uses C syntax instead of actual instructions. Throughout the entire code you need to think about the hardware architecture and adapt your problem to the hardware, not the other way around. It is nice for video editing, massive image processing and other similar tasks, but even in this area, from a developer's point of view, you'll save a lot of time and headaches if you develop for the CPU instead.
    You are quite right that shaders can work just as well. The shader languages are mature, very fast, and available without extra installers. I always insist on "don't count out the shaders" when I teach GPU computing.

    However, shaders don't give you the optimization options that CUDA does. Yes, you have to tweak the algorithm to fit the hardware, but that is where CUDA is at its best, and then it often outperforms CPUs by a large margin.
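
    As a small example of what I mean, here is a sketch of a block-wide sum (assuming a launch with 256 threads per block; the names are just illustrative). CUDA gives you explicit control over the on-chip shared memory, which the current shader languages don't expose.

    Code:
    // Sketch: a block-wide sum in __shared__ memory (assumes a launch
    // with 256 threads per block).
    __global__ void blockSum(const float *in, float *out, int n)
    {
        __shared__ float cache[256];          // fast on-chip memory, one copy per block
        int tid = threadIdx.x;
        int i   = blockIdx.x * blockDim.x + tid;

        cache[tid] = (i < n) ? in[i] : 0.0f;  // load, padding the last block with zeros
        __syncthreads();                      // wait until every thread has loaded

        // Tree reduction: halve the number of active threads each step.
        for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
            if (tid < stride)
                cache[tid] += cache[tid + stride];
            __syncthreads();
        }

        if (tid == 0)
            out[blockIdx.x] = cache[0];       // one partial sum per block
    }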

  3. #3
    I've looked into CUDA and would love to do more with it in the future. I'm considering picking it as the topic for my Bachelor's thesis.

    From what I have seen, CUDA provides a programming model that enables developers to use the power of the GPU for things other than 3D graphics. You could do interesting things like analyzing big images, financial computations, biological computations, cryptography/hacking and much more. I don't think CUDA has a lot to offer when it comes to graphics for games, because we already have those pipelines and shader languages. However, it could be interesting to use the GPU for physics simulations in games, though you'd need to do some research on how to divide the computation units inside the GPU so that they can handle the traditional shaders plus your own physics.
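
    For illustration, a minimal sketch of what the physics side could look like (the names and the simple Euler integration are just assumptions for the example), with one thread per particle:

    Code:
    // Illustrative only: one explicit Euler step per particle, one thread each.
    struct Particle { float3 pos; float3 vel; };

    __global__ void integrate(Particle *p, int n, float dt, float3 gravity)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        p[i].vel.x += gravity.x * dt;   // accumulate the (constant) acceleration
        p[i].vel.y += gravity.y * dt;
        p[i].vel.z += gravity.z * dt;

        p[i].pos.x += p[i].vel.x * dt;  // advance the position
        p[i].pos.y += p[i].vel.y * dt;
        p[i].pos.z += p[i].vel.z * dt;
    }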

    Also, note that CUDA is aimed at NVIDIA's GPUs. OpenCL would be a more attractive alternative if you want to make things work on AMD's cards as well. However, that is a research project on its own, not something to "just try" when your aim is to make a game.
    Last edited by chronozphere; 08-10-2011 at 09:10 AM.
    Coders rule nr 1: Face ur bugz.. dont cage them with code, kill'em with ur cursor.

  4. #4
    Recently, I was responsible for teaching CUDA as well as OpenCL in a new course here at our university. I have run a few similar courses before, for graduate students, but this was the first time for undergraduates.

    Quite enjoyable, and I feel that I deepened my own CUDA knowledge at the same time.

    But do you know what annoys me, a lot? That GLSL, CUDA and OpenCL all lock you into that damn C syntax (a 40-year-old hack with a pile of obvious mistakes that nobody bothered to fix). But if I could do something about that, if I could make "CUDA for Pascal programmers", would anyone bother?

  5. #5
    Quote Originally Posted by Ingemar View Post
    But do you know what annoys me, a lot? That GLSL, CUDA and OpenCL all lock you into that damn C syntax (a 40-year-old hack with a pile of obvious mistakes that nobody bothered to fix). But if I could do something about that, if I could make "CUDA for Pascal programmers", would anyone bother?
    Not sure about CUDA and GLSL, but previously you could, in theory, write a Pascal compiler for shader code that generates HLSL assembly and then use fxc to compile that assembly. However, in the latest versions assembly has been deprecated. On the other hand, if you manage to compile Pascal shader code directly, that could be quite interesting.

    I wouldn't say that GLSL and HLSL are strictly C, because they have facilities for vector math and other operations, and shader code is usually pretty basic, so there is not much you can improve with Pascal syntax alone. However, if you are up for the task, it would be great if your compiler came with a framework similar to the defunct/dying Microsoft Effect (*.fx) framework; it might not be that popular among game developers, but for scientific applications it really helps.

  6. #6
    Quote Originally Posted by Lifepower View Post
    Not sure about CUDA and GLSL, but previously you could, in theory, write a Pascal compiler for shader code that generates HLSL assembly and then use fxc to compile that assembly. However, in the latest versions assembly has been deprecated. On the other hand, if you manage to compile Pascal shader code directly, that could be quite interesting.

    I wouldn't say that GLSL and HLSL are strictly C, because they have facilities for vector math and other operations, and shader code is usually pretty basic, so there is not much you can improve with Pascal syntax alone. However, if you are up for the task, it would be great if your compiler came with a framework similar to the defunct/dying Microsoft Effect (*.fx) framework; it might not be that popular among game developers, but for scientific applications it really helps.
    Yes, shaders could be written in assembly in the past, but that is deprecated, so I don't know whether all the new features can still be accessed that way. But I think it is easier to just convert Pascal syntax to C syntax. That should be pretty straightforward.
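
    For example (a hypothetical mapping, just to show how direct it could be), a Pascal-style kernel could translate almost one-to-one into CUDA C:

    Code:
    // Hypothetical Pascal-style source, just for illustration:
    //   procedure Saxpy(n: Integer; a: Single; x, y: PSingle);
    //   begin
    //     i := blockIdx.x * blockDim.x + threadIdx.x;
    //     if i < n then y[i] := a * x[i] + y[i];
    //   end;
    // ...and the CUDA C it could translate to, almost token for token:
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n)
            y[i] = a * x[i] + y[i];
    }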

    With more and more programming moving to shaders as well as CUDA and OpenCL, I think it is vital for the Pascal language (and related languages like Ada) to have that support, so programmers aren't pushed towards C syntax yet again. I don't mind jumping between two different syntaxes and languages, but I know people who can't.
