Pose in front of your Kinect and create a fully animated 3D game character of yourself

Ron

Wouldn’t it be cool if you could create your very own 3D game character that is a fully animated and recognizable version of yourself? Thanks to the Kinect for Windows v2 sensor and Fuse from Mixamo, you can now pose in front of the sensor and create your own character.

“The magic begins with the Kinect for Windows v2 sensor. You simply pose in front of the Kinect for Windows v2 sensor while its 1080p high-resolution camera captures six images of you: four of your body in static poses, and two of your face. With its enhanced depth sensing – up to three times greater than the original Kinect for Windows sensor – and its improved facial and body tracking, the v2 sensor captures your body in incredible, 3D detail. It tracks 25 joint positions and, with a mesh of 2,000 points, a wealth of facial detail,” Microsoft stated in an official blog post.
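For developers curious about what the sensor actually exposes, here is a minimal sketch of reading those 25 joints with the native C++ API from the Kinect for Windows SDK 2.0. It builds on Windows against Kinect20.lib; error handling is trimmed for brevity.

```cpp
// Minimal sketch: poll the Kinect v2 for tracked bodies and read the
// 25 joints it reports per body, using the C++ API from the Kinect for
// Windows SDK 2.0. Link against Kinect20.lib; error handling trimmed.
#include <Windows.h>
#include <Kinect.h>
#include <cstdio>

int main() {
    IKinectSensor* sensor = nullptr;
    GetDefaultKinectSensor(&sensor);
    sensor->Open();

    IBodyFrameSource* source = nullptr;
    sensor->get_BodyFrameSource(&source);
    IBodyFrameReader* reader = nullptr;
    source->OpenReader(&reader);

    while (true) {
        IBodyFrame* frame = nullptr;
        if (FAILED(reader->AcquireLatestFrame(&frame))) continue; // no new frame yet

        IBody* bodies[BODY_COUNT] = {};            // the sensor tracks up to 6 bodies
        frame->GetAndRefreshBodyData(BODY_COUNT, bodies);

        for (IBody* body : bodies) {
            BOOLEAN tracked = FALSE;
            if (body) body->get_IsTracked(&tracked);
            if (!tracked) continue;

            Joint joints[JointType_Count];         // JointType_Count == 25
            body->GetJoints(JointType_Count, joints);
            const CameraSpacePoint& head = joints[JointType_Head].Position;
            printf("head at (%.2f, %.2f, %.2f) m\n", head.X, head.Y, head.Z);
        }

        for (IBody* body : bodies) if (body) body->Release();
        frame->Release();
    }
}
```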

Once the Kinect captures your images, you upload them to Body Snap or a similar scanning program, which renders them as a 3D model. Import that model into Fuse, where you can sculpt your 3D avatar with custom hairstyles, coloring, and a wide variety of clothing.
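How Body Snap turns those captures into a mesh is its own business, but the raw material any Kinect scanning tool works from is the sensor's depth frame. As a rough sketch (again using the SDK 2.0 C++ API, with error handling omitted), this is how an application can turn one depth frame into a 3D point cloud; it is illustrative only, not Body Snap's actual pipeline.

```cpp
// Sketch: grab one Kinect v2 depth frame and map it into 3D camera-space
// points -- the raw point cloud a scanning tool builds a body mesh from.
// Illustrative only; this is not Body Snap's actual pipeline.
#include <Windows.h>
#include <Kinect.h>
#include <cstdio>
#include <vector>

int main() {
    IKinectSensor* sensor = nullptr;
    GetDefaultKinectSensor(&sensor);
    sensor->Open();

    IDepthFrameSource* source = nullptr;
    sensor->get_DepthFrameSource(&source);
    IDepthFrameReader* reader = nullptr;
    source->OpenReader(&reader);

    ICoordinateMapper* mapper = nullptr;
    sensor->get_CoordinateMapper(&mapper);

    // The v2 depth camera delivers 512 x 424 pixels.
    IFrameDescription* desc = nullptr;
    source->get_FrameDescription(&desc);
    int width = 0, height = 0;
    desc->get_Width(&width);
    desc->get_Height(&height);
    const UINT count = width * height;

    // Wait until the sensor hands us a depth frame.
    IDepthFrame* frame = nullptr;
    while (FAILED(reader->AcquireLatestFrame(&frame))) Sleep(15);

    std::vector<UINT16> depth(count);
    frame->CopyFrameDataToArray(count, depth.data());

    // Convert every depth pixel into an (X, Y, Z) position in meters.
    std::vector<CameraSpacePoint> cloud(count);
    mapper->MapDepthFrameToCameraSpace(count, depth.data(), count, cloud.data());

    printf("captured %u points; center point: (%.2f, %.2f, %.2f)\n",
           count, cloud[count / 2].X, cloud[count / 2].Y, cloud[count / 2].Z);
    frame->Release();
    return 0;
}
```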

“Once you’ve customized your newly scanned image, you export it to Mixamo, where it gets automatically rigged and animated. The process is so simple that it seems almost unreal. Rigging prepares your static 3D model for animation by inserting a 3D skeleton and binding it to the skin of your avatar,” Microsoft explains.
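Mixamo's auto-rigger does all of this server-side, so you never touch the math yourself. Conceptually, though, the standard technique behind "binding a skeleton to the skin" is linear blend skinning: each vertex of the mesh moves as a weighted average of the bones that influence it. The sketch below uses hypothetical types to show the idea; it is not Mixamo's implementation.

```cpp
// Toy illustration of what "binding a skeleton to the skin" means.
// In linear blend skinning, a skinned vertex is a weighted blend of
// where each influencing bone's transform would place it.
// All types here are hypothetical, for illustration only.
#include <array>
#include <vector>

struct Vec3 { float x, y, z; };

// 3x4 bone matrix: rotation plus translation, row-major.
struct BoneTransform {
    std::array<float, 12> m;
    Vec3 apply(const Vec3& v) const {
        return { m[0]*v.x + m[1]*v.y + m[2]*v.z  + m[3],
                 m[4]*v.x + m[5]*v.y + m[6]*v.z  + m[7],
                 m[8]*v.x + m[9]*v.y + m[10]*v.z + m[11] };
    }
};

// Each vertex stores which bones influence it and by how much;
// the weights for one vertex sum to 1.
struct SkinWeight { int bone; float weight; };

Vec3 skinVertex(const Vec3& rest,
                const std::vector<SkinWeight>& weights,
                const std::vector<BoneTransform>& bones) {
    Vec3 out{0, 0, 0};
    for (const SkinWeight& w : weights) {
        Vec3 moved = bones[w.bone].apply(rest);  // vertex as this bone moves it
        out.x += w.weight * moved.x;             // blend by influence weight
        out.y += w.weight * moved.y;
        out.z += w.weight * moved.z;
    }
    return out;  // deformed position for the current animation pose
}
```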

When you're done, you'll have a fully animated 3D character of yourself. It's a fantastic example of what developers can do with the Kinect for Windows sensor and, as Microsoft puts it, just one more way that Kinect for Windows technology and partnerships are enhancing entertainment and creativity.

You can check out the process in the video below or head over to the VIA link to read more on this.