Interesting news. If this works out, Python could be used more widely for performance-intensive work that leverages GPUs, which, from what I have been reading lately, is already becoming a trend.
Nvidia, Continuum team up to sling Python at GPU coprocessors • The Register
NVIDIA and Continuum Analytics Announce NumbaPro, A Python CUDA Compiler
Update: The AnandTech post linked above also has a comment by Travis Oliphant, founder of Continuum Analytics, about the pros and cons of this initiative vs. other existing options.
GPU-Accelerated Computing Reaches Next Generation Of Programmers With Python Support Of NVIDIA CUDA.
Excerpt from above Nvidia link:
[ Continuum Analytics' Python development environment uses LLVM and the NVIDIA CUDA compiler software development kit to deliver GPU-accelerated application capabilities to Python programmers.
The modularity of LLVM makes it easy for language and library designers to add support for GPU acceleration to a wide range of general-purpose languages like Python, as well as to domain-specific programming languages. LLVM's efficient just-in-time compilation capability lets developers compile dynamic languages like Python on the fly for a variety of architectures.
"Our research group typically prototypes and iterates new ideas and algorithms in Python and then rewrites the algorithm in C or C++ once the algorithm is proven effective," said Vijay Pande, professor of Chemistry and of Structural Biology and Computer Science at Stanford University. "CUDA support in Python enables us to write performance code while maintaining the productivity offered by Python." ]
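As a rough illustration of the decorator-based programming model such a compiler enables: the sketch below uses a hypothetical `jit` stand-in that simply runs the function in pure Python (it is not NumbaPro's actual API), so the example runs anywhere; the idea is that a real JIT would compile the numeric inner loop for the GPU instead.

```python
# Hypothetical stand-in for a JIT decorator (NOT NumbaPro's real API):
# here it just calls the pure-Python function unchanged, so the sketch
# is runnable without a GPU or any compiler toolchain.
def jit(func):
    def wrapper(*args):
        return func(*args)
    return wrapper

@jit
def saxpy(a, x, y):
    # The kind of simple numeric loop a JIT compiler could
    # translate to machine code or a GPU kernel.
    return [a * xi + yi for xi, yi in zip(x, y)]

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))
# [12.0, 24.0, 36.0]
```

The appeal described in the quote above is that the prototype and the "performance" version can stay the same Python source, with the decorator swapping in compiled code.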
I had blogged about Continuum Analytics' cloud computing scientific Python product, Wakari, earlier.
- Vasudev Ram - Dancing Bison Enterprises
1 comment:
Not quite so sure about these press releases. They seem to get quite a few things wrong. LLVM can compile dynamic languages like Python to machine code on the fly? Someone had better tell the LLVM and PyPy guys that.
If anything I suspect this will involve _exposing_ gpu operations to Python - e.g. through a numpy-like interface.
Running *pure Python* on a GPU? The contents of a simple inner loop, perhaps...
Colour me sceptical.