Wolfram Blog

Mathematica and NVIDIA in Action: See Your GPU in a Whole Different Light

September 14, 2010 — Angela Sims, Partnership Support Specialist

Wolfram Research is partnering with NVIDIA to integrate GPU programming into Mathematica.

CUDA is NVIDIA’s parallel computing architecture, which harnesses the potential of modern GPUs. The new partnership means that if you have GPU-equipped hardware, you can transform Mathematica’s computing, modeling, simulation, or visualization performance, boosting speed by factors easily exceeding 100. Now that’s fast!

Afraid of the programming involved? Don’t be. Mathematica’s new CUDA programming capabilities dramatically reduce the complexity of the coding required to take advantage of the GPU’s parallel power, so you can focus on innovating your algorithms rather than spending time on repetitive tasks such as GUI design.
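To give a flavor of how little boilerplate this can involve, here is a minimal sketch of loading and running a custom kernel from within Mathematica. Since the functionality described in this post was still in development at the time, the `CUDALink`` ` package and `CUDAFunctionLoad` usage below are based on the interface that eventually shipped, and are shown for illustration only:

```mathematica
(* Load the GPU programming package *)
Needs["CUDALink`"]

(* A small CUDA kernel, supplied as a string: adds 2 to each element *)
code = "
  __global__ void addTwo(mint *arr, mint n) {
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    if (i < n) arr[i] += 2;
  }";

(* Compile the kernel and expose it as an ordinary Mathematica function,
   with a block size of 256 threads *)
addTwo = CUDAFunctionLoad[code, "addTwo", {{_Integer}, _Integer}, 256];

(* Call it like any other function; host-device memory transfers
   are handled automatically *)
addTwo[Range[10], 10]
```

Compare this with the explicit allocation, host-to-device copy, kernel launch, copy-back, and cleanup steps that the raw CUDA C API requires for the same computation.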

As Wolfram Research Senior Kernel Developer Ulises Cervantes-Pimentel reveals in this video, your GPU isn’t just for video games anymore.

If you find yourself near San Jose, California, the week of September 20, stop by the GPU Technology Conference 2010. Ulises will give a talk about GPU programming in Mathematica on the 21st, and you can visit us at booth #31 during the conference.

Comments


Erik Berg

While I am in the Bay Area, I do not have $350 to spare to attend the exhibits. Will any of the details be available on wolfram.com? And how soon? Thanks.

Posted by Erik Berg    September 14, 2010 at 5:32 pm

WOW! That is fantastic news!

Is that going to be in Mathematica 8.0?

When will Mathematica 8.0 be released?

Posted by muser    September 14, 2010 at 5:56 pm

Wonderful! We’ve been waiting for this for too long. The massive parallelization afforded by the GPU pipeline structure really means quantum leaps in speed for simple arithmetic operations that can run independently of one another. Please rub that in the eyes of the arrogant C++ programmers who consistently love to be wrong with the statement “C++ is much faster than M”.

When this is ready, we’ll have the ease of use and familiarity of the M front end and the massive speed of parallel GPUs. CUDA programming is hard (to do it well, I mean; it’s easy to poke around), and CUDA harnessed from M means no CUDA programming effort, which the C++ programmer still has, plus all the other M features, which the C++ programmer doesn’t have. The C++ programmer using CUDA puts in a huge amount of programming effort and gets limited features (only the ones he programmed), whereas the M programmer with CUDA gets the same massive parallelization with many more features on top of it and doesn’t have to program anything in CUDA. Managing GPUs from M … a dream becoming reality.

Posted by Mooniac    September 14, 2010 at 6:57 pm
Juraj Durzo

Exactly! The dream is becoming reality. I am wondering how many NVIDIA GPU devices will be supported by the next version of Mathematica.

Posted by Juraj Durzo    September 15, 2010 at 9:43 am

What CUDA Compute Capability version will be required for this?

Posted by SWB    September 16, 2010 at 11:35 am
Abdul Dakkak

We support all Compute Capabilities, even the next-generation Fermi architecture.

Posted by Abdul Dakkak    September 16, 2010 at 7:12 pm
Bruno Autin

What do I have to implement on my computer to be able to repeat Ulises’ demonstration?

Posted by Bruno Autin    September 20, 2010 at 4:36 am

Why CUDA and not OpenCL? I hate single-vendor lock-in…

(Don’t get me wrong, this is great overall – I just despise vendor-specific API choices.)

Posted by Maccara    September 20, 2010 at 5:48 am
Ulises Cervantes

No worries: we will support both CUDA and OpenCL. In fact, some of the demos in the video are OpenCL!

Posted by Ulises Cervantes    September 20, 2010 at 9:16 am

Tried to find more information, but to no avail… Honestly, I don’t expect Mathematica could offer much more than the latest MATLAB version already does, and this is stuff already prototyped by efforts like the work of the AccelerEyes (http://accelereyes.com/) guys: support for declaring arrays on the GPU (to ease transferring data between CPU and GPU memory), and then support for BLAS, FFT, and some LAPACK routines (also probably the CURAND functionality from the coming CUDA 3.2 release) over these datatypes. Which is, all in all, going to be great. But having some years of experience with CUDA programming, I certainly find naive the expectation, stated in some of the comments above, that there will be no place for low-level CUDA programming anymore. Still, MathLink is certainly going to be there, so knowledgeable people will just keep writing the CUDA kernels they need on their own and be able to utilize them from Mathematica, just as has been the case so far…

Posted by Crni    September 20, 2010 at 1:07 pm
    Wolfram Blog Team

    We hope that you’ll be pleasantly surprised with the final outcome, but since this functionality is still in development, we are unable to discuss it openly. However, if you are attending GTC, please attend our talk or stop by booth #31. We will also be presenting similar material and conducting training sessions at the Wolfram Technology Conference in October.

    Posted by Wolfram Blog Team    September 21, 2010 at 12:54 pm

Well, regarding GPU support in MATLAB:
1. It requires the Parallel Computing Toolbox, in addition to MATLAB ($1K or more).
2. There are pretty big holes in AccelerEyes Jacket, though it is still not a bad start. Plus, it is about $1.5K.

If Mathematica includes GPU support as good as (or better than, as I expect) theirs, it would be amazing! It would be very useful to people like me who do not code in C (and get by just fine) and do not want to…

Posted by muser    September 20, 2010 at 7:20 pm
