Video is an increasingly important aspect of modern culture, with smartphones representing billions of handheld video recorders and official video recording by tools such as police body cams becoming ever more prevalent. Often, the subjects of these ubiquitous videos need to be masked for privacy purposes, creating a challenge for anyone who needs to be sure their streamed video isn’t running afoul of privacy regulations.
Microsoft is well aware of this challenge, which is why the company has enabled face redaction capabilities in Azure Media Analytics. As Microsoft puts it in the announcement on the Azure blog:
Facial redaction works by detecting faces in every frame of video and tracking the face object both forwards and backwards in time, so that the same individual can be blurred from other angles as well.
Redaction is still a difficult problem for computers to solve, and accuracy is not yet at the level of a real person. False positives and false negatives are to be expected, especially with difficult video such as low-light or high-movement scenes.
Since automated redaction may not be 100% accurate, we provide a couple of ways to modify the final output.
In addition to a fully automatic mode, there is a two-pass workflow that allows found faces to be selected or de-selected via a list of IDs, and arbitrary per-frame adjustments to be made using a metadata file in JSON format. This workflow is split into 'Analyze' and 'Redact' modes, along with a single-pass 'Combined' mode that runs both in one job.
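To give a feel for the two-pass idea, the sketch below filters a hypothetical Analyze-pass output so that only chosen face IDs are passed along to the Redact pass. Note that the field names and JSON shape here are purely illustrative assumptions, not the actual Azure Media Analytics annotation schema; consult the official documentation for the real format.

```python
import json

# Hypothetical output of the 'Analyze' pass: face IDs with per-frame
# bounding boxes. The schema here is invented for illustration only.
analyze_output = json.dumps({
    "version": 1,
    "fragments": [
        {"start": 0, "events": [
            {"id": 1, "x": 0.10, "y": 0.20, "width": 0.05, "height": 0.08},
            {"id": 2, "x": 0.60, "y": 0.25, "width": 0.04, "height": 0.07},
        ]},
    ],
})

def select_faces(annotations_json: str, keep_ids: set) -> str:
    """Keep only the chosen face IDs before handing the file to 'Redact'."""
    data = json.loads(annotations_json)
    for fragment in data["fragments"]:
        fragment["events"] = [
            e for e in fragment["events"] if e["id"] in keep_ids
        ]
    return json.dumps(data)

# De-select face 2 so that only face 1 would be blurred in the second pass.
filtered = json.loads(select_faces(analyze_output, {1}))
print([e["id"] for e in filtered["fragments"][0]["events"]])  # [1]
```

The same pattern extends to per-frame adjustments: edit the individual event entries in the metadata file before submitting the Redact job.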
Head on over to the blog post for before and after video examples, along with some code and other details on how to enable face detection in Azure Media Analytics. You can find out more at the documentation page, and check out the Azure Media Services blog to keep up with all the news. Get in touch with the team at the UserVoice page, and let us know in the comments if you think face redaction is an important addition to Azure Media Services.