Wolfram Computation Meets Knowledge

Mathematica and NVIDIA in Action: See Your GPU in a Whole Different Light

Wolfram Research is partnering with NVIDIA to integrate GPU programming into Mathematica.

CUDA is NVIDIA’s parallel computing architecture, which harnesses the potential of modern GPUs. The new partnership means that if you have GPU-equipped hardware, you can transform Mathematica’s computing, modeling, simulation, or visualization performance, boosting speed by factors easily exceeding 100. Now that’s fast!

Afraid of the programming involved? Don’t be. Mathematica’s new CUDA programming capabilities dramatically reduce the complexity of the coding required to take advantage of the GPU’s parallel power, so you can focus on innovating your algorithms rather than spending time on repetitive tasks such as GUI design.
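As a rough illustration of the idea, here is a minimal sketch of what GPU programming from within Mathematica could look like. It assumes an interface along the lines of the CUDALink functions `CUDAQ` and `CUDAFunctionLoad`; since the functionality described in this post was still in development at the time, treat the exact names and argument specifications as illustrative rather than final.

```wolfram
(* Load the GPU programming package and check that a
   CUDA-capable device is present *)
Needs["CUDALink`"]
CUDAQ[]

(* CUDA kernel source, written as an ordinary Mathematica string *)
src = "
  __global__ void addTwo(mint *in, mint *out, mint n) {
      int i = threadIdx.x + blockIdx.x * blockDim.x;
      if (i < n) out[i] = in[i] + 2;
  }";

(* Compile the kernel and wrap it as an ordinary Mathematica
   function; memory transfers between CPU and GPU and the launch
   configuration are handled for you *)
addTwo = CUDAFunctionLoad[src, "addTwo",
   {{_Integer, "Input"}, {_Integer, "Output"}, _Integer}, 256];

(* Apply the GPU kernel to a Mathematica list *)
addTwo[Range[10], ConstantArray[0, 10], 10]
```

The point of the sketch is the division of labor: you write only the kernel itself, while the boilerplate a C programmer would normally handle by hand (device memory allocation, host-to-device copies, grid and block sizing) stays out of sight.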

As Wolfram Research Senior Kernel Developer Ulises Cervantes-Pimentel reveals in this video, your GPU isn’t just for video games anymore.

If you find yourself near San Jose, California, the week of September 20, stop by the GPU Technology Conference 2010. Ulises will give a talk about GPU programming in Mathematica on the 21st, and you can visit us at booth #31 throughout the conference.

Comments


12 comments

  1. While I am in the Bay Area I do not have $350 to spare to attend the exhibits. Will any of the detail be available on wolfram.com? And how soon? Thanks.

    Reply
  2. WOW! That is fantastic news!

    Is that going to be in Mathematica 8.0?

    When will Mathematica 8.0 be released?

    Reply
  3. Wonderful! We’ve been waiting for this for too long. The massive parallelization afforded by the GPU pipeline structure really means quantum leaps in speed for simple arithmetic operations that can run independently of one another. Please rub that in the eyes of the arrogant C++ programmers who consistently love to be wrong with the statement “C++ is much faster than M”. When this is ready, we’ll have the ease of use and familiarity of the M front end and the massive speed of parallel GPUs. CUDA programming is hard (to do it well, I mean; it’s easy to poke around), and CUDA harnessed from M means no CUDA programming effort, which the C++ programmer still has, plus all the other M features, which the C++ programmer doesn’t have. The C++ programmer using CUDA has a huge amount of programming effort and gets limited features (only the features he programmed), whereas the M programmer with CUDA gets the same massive parallelization with many more features on top of it and doesn’t have to program anything in CUDA. Managing GPUs from M … a dream becoming reality.

    Reply
  4. Exactly! The dream is becoming reality. I am wondering how many NVIDIA GPU devices will be supported by the next version of Mathematica.

    Reply
  5. What CUDA Compute Capability version will be required for this?

    Reply
  6. We support all Compute Capabilities, even the next-generation Fermi architecture.

    Reply
  7. What do I have to install on my computer to be able to repeat Ulises’ demonstration?

    Reply
  8. Why CUDA and not OpenCL? I hate single vendor lock-downs…

    (don’t get me wrong, this is great overall – I just despise vendor specific API choices)

    Reply
  9. No worries, we will support both CUDA and OpenCL. In fact, some of the demos in the video are OpenCL!

    Reply
  10. Tried to find more information, but to no avail… Honestly, I don’t expect Mathematica could offer much more than the latest MATLAB version already did, and this is stuff already prototyped by efforts like the work of the AccelerEyes (http://accelereyes.com/) guys: support for declaring arrays on the GPU (to ease transferring data between CPU and GPU memory), and then support for BLAS, FFT, and some LAPACK stuff (also probably support for CURAND from the coming CUDA 3.2 release) over these datatypes. Which is, all in all, going to be great, but having some years of experience with CUDA programming, I certainly find naive the expectations stated in some of the comments above that there will be no place for low-level CUDA programming any more. But MathLink is certainly going to be there, so knowledgeable people will just keep writing the needed CUDA kernels on their own and be able to utilize them from Mathematica, just as has been the case so far…

    Reply
    • We hope that you’ll be pleasantly surprised with the final outcome, but since this functionality is still in development, we are unable to discuss it openly. However, if you are attending GTC, please attend our talk or stop by booth #31. We will also be presenting similar material and conducting training sessions at the Wolfram Technology Conference in October.

      Reply
  11. Well, re. GPU support in MATLAB:
    1. It requires Parallel Computing Toolbox, in addition to MATLAB ($1K or more)
    2. There are pretty big holes in Accelereyes Jacket, though it is still not a bad start. Plus, it is about $1.5K

    If Mathematica includes GPU support as good as (or, as I expect, better than) theirs, it would be amazing! It would be very useful to people like me who do not code in C (and get by just fine), and do not want to…

    Reply