
Thread: Cuda

  1. #1
    Co-Founder / PGD Elder WILL's Avatar
    Join Date
    Apr 2003
    Location
    Canada
    Posts
    6,107
    Blog Entries
    25

    Cuda

    Has anyone done much research or development with nVidia's new CUDA technology?

    Read: What is CUDA?

    I'm wondering what could be done with the technology and where it can be applied in games...
    Jason McMillen
    Pascal Game Development
    Co-Founder





  2. #2
    PGD Staff code_glitch's Avatar
    Join Date
    Oct 2009
    Location
    UK (England, the bigger bit)
    Posts
    933
    Blog Entries
    45
    Well, according to nVidia we could make some pretty outstanding stuff: better graphics (maybe 128x anti-aliasing...), bloom, physics and so on... all without a CPU. So, are you sure you want a CPU on your board? Nah, just a GPU...

    Frankly though, I find CUDA a little niche and obscure to code in, since you have to assume that every client has the right OS, card and motherboard for it to work. Shame.
    I once tried to change the world. But they wouldn't give me the source code. Damned evil cunning.

  3. #3
    Co-Founder / PGD Elder WILL's Avatar
    Join Date
    Apr 2003
    Location
    Canada
    Posts
    6,107
    Blog Entries
    25
    Well, the (proper! let's not get into that again... ) graphics drivers would add in support for the CUDA features of the card, no?

    And how does one have a computer run without a processor? You don't happen to have a USB plug sticking out of the back of your head do you?

    I for one would like to see some of our very talented folks (Luuk, Sascha, Nitro or the NECROubers) give a quick demo a try. I'm sure I'd be amazed at what they manage to do.
    Jason McMillen
    Pascal Game Development
    Co-Founder





  4. #4

  5. #5
    OpenCL has a very similar design to CUDA's "driver API". CUDA also has a "CUDA C" compiler (nvcc), which is basically a preprocessor: it takes an enriched C, generates code that uses the "driver API", and then uses a real C compiler (GCC or MSVC) to compile the result.
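
    For anyone who hasn't tried it, the "enriched C" is mostly a few extra keywords plus the <<<...>>> launch syntax. Here is a minimal sketch of the kind of file nvcc consumes (my own toy example, not from NVIDIA's SDK):

    Code:
    // saxpy.cu -- compile with: nvcc saxpy.cu -o saxpy
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // __global__ marks a kernel: a function that runs on the GPU
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);
        float *hx = (float*)malloc(bytes), *hy = (float*)malloc(bytes);
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        float *dx, *dy;
        cudaMalloc(&dx, bytes);
        cudaMalloc(&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        // the <<<blocks, threads>>> launch below is the "enriched" part that
        // nvcc rewrites into ordinary API calls before handing off to GCC/MSVC
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", hy[0]);  // expect 4.0

        cudaFree(dx); cudaFree(dy); free(hx); free(hy);
        return 0;
    }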

    Personally, I tried to use CUDA back when my GTX280 was brand new, and it was nice. For parallelism-friendly algorithms, such as raytracing, the speedup can be huge (I made a test which ran at 2-3 fps on the CPU using C and at 560+ fps in CUDA).

    I also tried to use it with Lazarus:

    [embedded screenshot/video no longer available]

    This is basically the same test, slightly modified. The raytracer itself is written in "CUDA C" and linked with the Lazarus program, which does the presentation. (It is slower because I download the image I get from CUDA to the CPU and then upload it back to the GPU through Lazarus' OpenGL control, while the C version didn't do that part - I don't remember why I did it that way in Lazarus though... it has been about two years since I wrote that.)
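
    The interface between the two sides boils down to exporting plain C functions from the .cu file, so that FPC can import them with cdecl/external declarations. Roughly like this (a sketch from memory, not the actual code - the names are invented):

    Code:
    // render_iface.cu -- hypothetical sketch of a CUDA-side interface that a
    // Lazarus/FPC program can call; build a shared library with:
    //   nvcc -shared -Xcompiler -fPIC render_iface.cu -o librender.so
    #include <cuda_runtime.h>

    __global__ void trace(unsigned int *pixels, int w, int h)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= w || y >= h) return;
        // ... the actual per-pixel ray tracing would go here ...
        pixels[y * w + x] = 0xFF000000u | (x & 255) | ((y & 255) << 8);
    }

    static unsigned int *d_pixels = 0;  // assumes a fixed frame size across calls

    // extern "C" disables C++ name mangling, so the Pascal side can simply
    // declare: procedure render_frame(pixels: PCardinal; w, h: LongInt); cdecl; external 'render';
    extern "C" void render_frame(unsigned int *host_pixels, int w, int h)
    {
        const size_t bytes = (size_t)w * h * sizeof(unsigned int);
        if (!d_pixels) cudaMalloc(&d_pixels, bytes);

        dim3 block(16, 16);
        dim3 grid((w + 15) / 16, (h + 15) / 16);
        trace<<<grid, block>>>(d_pixels, w, h);

        // this is the download-to-CPU step mentioned above; the Lazarus side
        // then re-uploads the buffer as an OpenGL texture for display
        cudaMemcpy(host_pixels, d_pixels, bytes, cudaMemcpyDeviceToHost);
    }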

    However, personally, today I would use OpenCL instead. It is a proper open standard, more widely supported than CUDA, and some configurations (like those from AMD and Apple, I think) can use both the CPU and the GPU at the same time.

  6. #6
    OpenCL and CUDA are both similar and different. I will be responsible for the GPU part of a course later this fall, where we primarily teach CUDA. Why CUDA and not OpenCL? Because CUDA is a million times easier to get started with. CUDA has a very neat integration between CPU and GPU code, while OpenCL looks more like an OpenGL/GLSL program.

    But OpenCL is more open. CUDA can only be used from NVidia's own compiler (which uses GCC for the CPU part). OpenCL can be used from anything, FPC included. (I have working examples using FPC.) The kernels are quite similar between CUDA and OpenCL, but must, sadly, be written in C syntax.
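
    To show how close the kernels are, here is the same trivial kernel in both dialects (just a sketch; note these are two separate files):

    Code:
    // add.cu -- CUDA version, compiled offline by nvcc:
    __global__ void add(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    // add.cl -- OpenCL version, usually shipped as source and compiled
    // at run time by the driver (via clBuildProgram):
    __kernel void add(__global const float *a, __global const float *b,
                      __global float *c, int n)
    {
        int i = get_global_id(0);
        if (i < n) c[i] = a[i] + b[i];
    }

    The bodies are nearly identical C; only the qualifiers and the way you obtain the thread index differ.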

  7. #7
    I've used CUDA for one research work a while back. However, I think the language itself is still in its infancy; I'd say it is much less evolved than HLSL, for instance. In some cases, you are better off working with actual shaders (say, SM4) for GPGPU than with CUDA itself.

    The speedup is relative: GPGPU lets you do very simple things very fast.

    If you need to do complex tasks, I'd recommend using multi-threading on an actual CPU. In our latest publication, which will appear in November this year at a conference on optimization and computer software, we have performance benchmarks of an illumination approach developed both on the GPU and on the CPU using multi-threading, showing that CPUs such as the Core i7 Q840, when using multiple threads on a 64-bit platform, are extremely powerful. We don't even use the CPU to its full potential; SSE2+ could be used to speed things up even more.

    I personally find CUDA a very low-level language, not far from raw x86/x64 assembly, except that it uses C syntax instead of actual instructions. Throughout the entire code you need to think of the hardware architecture and adapt your problem to the hardware, not the other way around. It is nice for video editing, massive image processing and other tasks like that, but even in this area, from a developer's point of view, you'll save a lot of time and headaches if you develop for the CPU instead.
    Last edited by LP; 03-10-2011 at 02:54 PM.

  8. #8
    Quote Originally Posted by Lifepower View Post
    I've used CUDA for one research work a while back. [...] I personally find CUDA a very low-level language, not far from raw x86/x64 assembly. [...] you'll save a lot of time and headaches if you develop for the CPU instead.
    You are quite right that shaders can work just as well. The shader languages are mature, very fast and available without extra installers. I always insist on "don't count out shaders" when I teach GPU computing.

    However, shaders don't give you the optimization options that CUDA gives. Yes, you have to tweak your algorithm to fit the hardware, but that is where CUDA is at its best. And it often outperforms CPUs by a large margin.
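
    Explicit on-chip shared memory is one of those options. A minimal sketch (assuming 256 threads per block) of a per-block sum, which a pixel shader simply cannot express:

    Code:
    // reduce.cu -- per-block reduction using explicit shared memory,
    // the kind of hardware-level control shaders don't expose.
    // Launch with 256 threads per block, e.g. block_sum<<<blocks, 256>>>(...)
    __global__ void block_sum(const float *in, float *out, int n)
    {
        __shared__ float buf[256];      // fast on-chip memory, shared by the block
        int tid = threadIdx.x;
        int i   = blockIdx.x * blockDim.x + tid;

        buf[tid] = (i < n) ? in[i] : 0.0f;
        __syncthreads();                // barrier: the whole block waits here

        // tree reduction within the block
        for (int s = blockDim.x / 2; s > 0; s >>= 1) {
            if (tid < s) buf[tid] += buf[tid + s];
            __syncthreads();
        }
        if (tid == 0) out[blockIdx.x] = buf[0];  // one partial sum per block
    }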

  9. #9
    I've looked into CUDA and I would love to do more with it in the future. I'm considering picking this as the topic for my Bachelor's thesis.

    From what I have seen, CUDA provides a programming model that enables developers to use the power of the GPU for things other than 3D graphics. You could do interesting things like analyzing big images, financial computations, biological computations, cryptography/hacking and way more. I don't think CUDA has a lot to offer when it comes to graphics for games, because we already have those pipelines and shader languages. However, it could be interesting to use the GPU for physics simulations in games, though you'd need to do some research on how to divide the computation units inside the GPU architecture so that they can handle traditional shaders plus your own physics.
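
    To give an idea, here is a toy sketch (entirely hypothetical, not from a real project) of what a one-thread-per-particle physics kernel might look like:

    Code:
    // particles.cu -- toy physics step on the GPU: one thread per particle,
    // simple Euler integration under gravity with a ground-plane bounce.
    struct Particle { float x, y, z, vx, vy, vz; };

    __global__ void step(Particle *p, int n, float dt)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        p[i].vy -= 9.81f * dt;       // gravity
        p[i].x  += p[i].vx * dt;     // integrate position
        p[i].y  += p[i].vy * dt;
        p[i].z  += p[i].vz * dt;

        if (p[i].y < 0.0f) {         // bounce on the ground plane
            p[i].y  = 0.0f;
            p[i].vy = -0.5f * p[i].vy;
        }
    }
    // called once per frame, e.g. step<<<(n + 255) / 256, 256>>>(d_particles, n, dt);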

    Also, note that CUDA is aimed at NVidia's GPUs. OpenCL would be a more attractive alternative when you want to make stuff work on AMD's cards as well. However, this is a research project on its own, not something to "just try" when you aim to make a game.
    Last edited by chronozphere; 08-10-2011 at 09:10 AM.
    Coders rule nr 1: Face ur bugz.. dont cage them with code, kill'em with ur cursor.

  10. #10
    Recently, I was responsible for teaching CUDA as well as OpenCL in a new course here at our university. I have run a few similar courses before, for graduate students, but this was the first time for undergraduates.

    Quite enjoyable, and I feel that I deepened my own understanding of CUDA at the same time.

    But do you know what annoys me, a lot? That GLSL, CUDA and OpenCL all lock you into that damn C syntax (a 40-year-old hack with a pile of obvious mistakes that nobody ever bothered to fix). But if I could do something about that, if I could make a "CUDA for Pascal programmers", would anyone bother?

