Microsoft’s HoloLens 2 will bring AI to Mixed Reality

Kareem Anderson

Apple's recent dip into the relatively shallow waters of augmented reality has inadvertently forced longtime industry stalwarts to start highlighting the distinguishing features of their technology when pitching to developers.

Until now, Microsoft’s HoloLens pitch was free-range computational mobility: the headset broke free of the old paradigm of desktop tethering and let users roam their world, combining engaging high-fidelity imagery with real-world environmental awareness.

Now, Microsoft is upping its Mixed Reality offering: the second-generation Holographic Processing Unit (HPU) will include a dedicated AI coprocessor for running Deep Neural Networks (DNNs) on the device itself.

“Today, Harry Shum, executive vice president of our Artificial Intelligence and Research Group, announced in a keynote speech at CVPR 2017, that the second version of the HPU, currently under development, will incorporate an AI coprocessor to natively and flexibly implement DNNs. The chip supports a wide variety of layer types, fully programmable by us. Harry showed an early spin of the second version of the HPU running live code implementing hand segmentation.”
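
Microsoft hasn’t published details of the network Shum demoed, but the general shape of on-device hand segmentation is a small fully convolutional DNN that labels each pixel of a camera or depth frame as hand or background. The sketch below, in PyTorch, is purely illustrative: the architecture, layer sizes, and single-channel depth-frame input are assumptions for the sake of example, not the HPU’s actual design.

```python
# Illustrative sketch only: Microsoft has not published the HPU's actual
# network. This shows the rough shape of a per-pixel hand-segmentation
# DNN -- a tiny fully convolutional encoder/decoder. All layer choices,
# sizes, and the depth-frame input format are assumptions.
import torch
import torch.nn as nn

class TinyHandSegNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the frame while growing channel count.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to input resolution, one logit per pixel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 8, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(8, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One fake single-channel "depth frame" at 224x224; the output is a
# per-pixel hand-vs-background probability map of the same size.
net = TinyHandSegNet().eval()
frame = torch.rand(1, 1, 224, 224)
with torch.no_grad():
    mask = torch.sigmoid(net(frame))
print(mask.shape)  # torch.Size([1, 1, 224, 224])
```

On dedicated silicon like the second-generation HPU, a network along these lines would presumably be compiled down to the chip’s supported layer types and run locally, frame by frame, rather than round-tripping to the cloud.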

Microsoft has recently faced some industry pushback for taking its time delivering an updated HoloLens experience for developers. Some have been asking for refreshed hardware, a wider field of view, and additional APIs for more in-depth experiences, yet the HoloLens has remained relatively unchanged since its public debut.

With the news of a dedicated AI coprocessor, it would appear Microsoft intends to bring much more than a hardware refresh to market with its HoloLens 2. Instead, the company seems to be building new capabilities into the platform so that developers can deliver even more robust augmented and Mixed Reality experiences to users.