Bing and Azure benefit from continued advances in artificial intelligence via FPGA chips

Kit McDonald

Doug Burger of Project Catapult discussing the deep neural network

Microsoft continues to innovate to make things better, and reinventing the data center is no exception. For the last five years, the Redmond giant has been experimenting with a reprogrammable computer chip called a field-programmable gate array (FPGA). The FPGA, revealed to us at Microsoft Ignite just last month, has been deployed in Bing and Azure servers, and the results are said to be an industry game changer.

Sitaram Lanka has been working on the FPGA effort, known internally as Project Catapult, since 2011. Along with Derek Chiou, a former team leader, the engineers rerouted incoming work so that it reaches the chip before it ever touches the server. The FPGA sees the information first and determines how to handle it without even involving the processor, passing along only what is essential for the server to handle. As Chiou put it, “What we’ve done now is we’ve made the FPGA the front door.”
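To picture that “front door” arrangement, here is a minimal Python sketch of the flow described above. The FrontDoorFPGA class, its handle_packet method, and the task names are hypothetical stand-ins for illustration, not anything from Project Catapult’s actual code.

```python
# Conceptual sketch only: models the "front door" idea in plain Python.
# All names here are invented for illustration.

def cpu_process(payload):
    """Stand-in for work the server's processor would do."""
    return f"CPU handled: {payload}"

class FrontDoorFPGA:
    """Models an FPGA sitting between the network and the CPU."""

    def __init__(self, offloaded_tasks):
        # Tasks the FPGA has been programmed to handle itself.
        self.offloaded_tasks = offloaded_tasks

    def handle_packet(self, task, payload):
        if task in self.offloaded_tasks:
            # The FPGA services the request without involving the CPU.
            return f"FPGA handled: {payload}"
        # Only traffic the FPGA can't handle reaches the processor.
        return cpu_process(payload)

fpga = FrontDoorFPGA(offloaded_tasks={"rank", "encrypt"})
print(fpga.handle_packet("rank", "search query"))  # served at the front door
print(fpga.handle_packet("log", "diagnostics"))    # passed through to the CPU
```

The point of the arrangement is that the processor only ever sees the traffic the FPGA chooses to let in.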

Applying FPGAs directly to artificial intelligence has given deep neural networks a boost. Because the chip excels at running many computations in parallel, a neural network can execute far faster. That speeds up everyday features such as translation, and it can have a significant impact on research and development by accelerating the underlying computation.
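To see why parallel hardware helps, note that each output of a neural network layer is an independent multiply-accumulate over the same inputs. The sketch below models this in Python, with a thread pool standing in for the FPGA’s parallel arithmetic units; the weights, inputs, and function names are illustrative assumptions, not Microsoft’s implementation.

```python
# Minimal sketch: a layer's output neurons are independent dot products,
# so hardware can compute them all at once. A thread pool stands in for
# the FPGA's parallel multiply-accumulate units.
from concurrent.futures import ThreadPoolExecutor

weights = [[0.2, -0.5, 0.1],   # one row of weights per output neuron
           [0.7, 0.3, -0.2],
           [-0.1, 0.4, 0.6]]
inputs = [1.0, 2.0, 3.0]

def neuron_output(row):
    # Each output is an independent multiply-accumulate over the inputs.
    return sum(w * x for w, x in zip(row, inputs))

# On a CPU these would run one after another; an FPGA can evaluate
# every neuron in the same clock cycles.
with ThreadPoolExecutor() as pool:
    outputs = list(pool.map(neuron_output, weights))
print(outputs)
```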

Now, FPGAs are being placed into every Microsoft data center server. According to Microsoft’s blog post, the deep neural network will be deployed for Bing thanks to the chips. Data centers are expected to see a major increase in performance, and the search engine itself will be upgraded as well.

Because the chips are reprogrammable, engineering teams can write their algorithms directly into the hardware without additional software layers in between. Better still, the chips can be updated almost instantly to keep pace with new developments. That’s a significant leap forward compared to waiting years for the necessary hardware to become available.
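As a rough analogy for that reprogrammability, the toy Python class below swaps its “circuit” at runtime, much as an FPGA can load a new configuration without anyone fabricating new silicon. ReprogrammableChip and its methods are invented for illustration and reflect no real FPGA tooling.

```python
# Toy model of reprogrammability, not real FPGA tooling: the "device"
# keeps running while its logic is swapped out.
class ReprogrammableChip:
    def __init__(self, logic):
        self.logic = logic          # the currently loaded "circuit"

    def reprogram(self, new_logic):
        self.logic = new_logic      # near-instant update, no new hardware

    def run(self, data):
        return self.logic(data)

chip = ReprogrammableChip(lambda d: d.lower())   # version 1 of an algorithm
print(chip.run("HELLO"))                         # -> "hello"
chip.reprogram(lambda d: d[::-1])                # deploy version 2 instantly
print(chip.run("HELLO"))                         # -> "OLLEH"
```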

If you’re interested in learning more about the mechanics of FPGAs and how they’re redefining data center acceleration, take a gander at the research paper.