NVIDIA announced yesterday that it is collaborating with Microsoft to pair its upcoming Tesla server GPUs with an AI framework. Inspired by Microsoft Research's strides in deep learning, including reaching human parity in speech recognition, NVIDIA will optimize the enterprise AI framework to run on Tesla GPUs both in the Microsoft Azure cloud and on on-premises servers.
"We're working hard to empower every organization with AI, so that they can make smarter products and solve some of the world's most pressing problems," said Harry Shum, executive vice president of the Artificial Intelligence and Research Group at Microsoft. "By working closely with NVIDIA and harnessing the power of GPU-accelerated systems, we've made Cognitive Toolkit and Microsoft Azure the fastest, most versatile AI platform. AI is now within reach of any business."
The toolkit mentioned is the open-source, commercial-grade Microsoft Cognitive Toolkit used for Skype, Cortana, Bing, and Xbox. It trains and scales deep learning algorithms according to the input data, making GPUs more efficient within a hybrid cloud platform. In fact, according to the press release, the GPU-accelerated toolkit ran over 170 times faster on NVIDIA GPUs than it did on CPUs alone. That isn't surprising given the thousands of cores in a GPU processing in parallel.
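To get an intuition for why data-parallel hardware helps so much here, consider the core operation of neural-network training: applying a weight matrix to a batch of inputs. The sketch below is a loose, CPU-bound analogy in NumPy (not the Cognitive Toolkit itself, and the sizes are made up for illustration); computing the whole batch in one vectorized operation stands in for how a GPU schedules the same work across its thousands of cores, instead of grinding through one input at a time.

```python
import numpy as np

# Hypothetical workload: a 256x256 weight matrix applied to a
# batch of 1,024 input vectors (sizes chosen only for illustration).
rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 256))
batch = rng.standard_normal((256, 1024))

# Serial style: process one input vector at a time.
serial = np.stack(
    [weights @ batch[:, i] for i in range(batch.shape[1])], axis=1
)

# Data-parallel style: the entire batch in one operation, the way
# GPU-accelerated toolkits batch work across many cores at once.
parallel = weights @ batch

# Both approaches compute the same result; the parallel form is
# simply far better suited to hardware with thousands of cores.
assert np.allclose(serial, parallel)
```

The 170x figure from the press release comes from real GPU hardware, of course; the point of the analogy is only that batching work into large parallel operations, rather than looping over items, is what lets all those cores stay busy.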
Microsoft and NVIDIA are both looking to improve enterprise and industry across many fields with these deep learning GPUs. Going one step further, the "world's first supercomputer in a box," built specifically for AI and deep learning, is now available. Thanks to the collaboration, the NVIDIA DGX-1 just might revolutionize data server processing for enterprises.