I first saw this via Python Weekly, on The Register's UK site, and then confirmed it via a Google search that turned up some other related links (see below).
Interesting news. If it works out, it means Python can be used more widely for performance-intensive work that leverages GPUs, which, from what I've been reading lately, is already becoming a trend.
NVIDIA and Continuum Analytics Announce NumbaPro, A Python CUDA Compiler
Update: The AnandTech post linked above also has a comment by Travis Oliphant, founder of Continuum Analytics, on the pros and cons of this initiative compared to other existing options.
GPU-Accelerated Computing Reaches Next Generation Of Programmers With Python Support Of NVIDIA CUDA.
Excerpt from the above NVIDIA link:
[ Continuum Analytics' Python development environment uses LLVM and the NVIDIA CUDA compiler software development kit to deliver GPU-accelerated application capabilities to Python programmers.
The modularity of LLVM makes it easy for language and library designers to add support for GPU acceleration to a wide range of general-purpose languages like Python, as well as to domain-specific programming languages. LLVM's efficient just-in-time compilation capability lets developers compile dynamic languages like Python on the fly for a variety of architectures.
"Our research group typically prototypes and iterates new ideas and algorithms in Python and then rewrites the algorithm in C or C++ once the algorithm is proven effective," said Vijay Pande, professor of Chemistry and of Structural Biology and Computer Science at Stanford University. "CUDA support in Python enables us to write performance code while maintaining the productivity offered by Python." ]
I had blogged earlier about Wakari, Continuum Analytics' cloud-based scientific Python product.
- Vasudev Ram - Dancing Bison Enterprises