Microsoft teams up with Intel, Adobe, BBC, and more to combat misleading digital content

While Microsoft owns and operates a search engine and an advertising platform, the company’s public involvement in the conversation about the proliferation of disinformation has been seemingly minimal.

Now, however, the company is becoming more proactive in combating disinformation, joining forces with Adobe, Arm, the BBC, Intel, and Truepic to form a coalition that tracks digital content as it evolves across the internet.

The Coalition for Content Provenance and Authenticity (C2PA) was formed to create an open standard for certifying the source, history, and provenance of media content, making digital content easier to verify and reducing the spread of disinformation.

The C2PA’s open standard will give platforms a method to preserve and read provenance-based digital content. Because an open standard can be adopted by any online platform, it is critical to scaling trust across the internet. In addition to the inclusion of varied media types at scale, C2PA is driving an end-to-end provenance experience from the capturing device to the information consumer. Collaboration with chipmakers, news organizations, and software and platform companies is critical to facilitate a comprehensive provenance standard and drive broad adoption across the content ecosystem.
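The announcement describes the standard only at a high level, but the core idea it gestures at, binding each step in a piece of media’s history to the previous one so the whole chain can be checked at the point of consumption, can be illustrated with a short sketch. To be clear, the record structure, field names, and helper functions below are invented for illustration and are not the actual C2PA manifest format, which defines its own cryptographically signed structures.

```python
# Illustrative sketch only: a simplified provenance chain in the spirit of
# the capture-to-consumer flow C2PA describes. All names here are hypothetical.
import hashlib
import json
from typing import List, Optional


def _entry_hash(entry: dict) -> str:
    # Hash a canonical JSON form of an entry so the next step can link to it.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()


def make_entry(asset_bytes: bytes, action: str, actor: str,
               prev_entry: Optional[dict]) -> dict:
    """Record one step (capture or edit), bound to the asset and the prior step."""
    return {
        "action": action,                                    # e.g. "captured", "cropped"
        "actor": actor,                                      # device or software that acted
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "prev_entry_sha256": _entry_hash(prev_entry) if prev_entry else None,
    }


def verify_chain(asset_bytes: bytes, chain: List[dict]) -> bool:
    """Check the chain is unbroken and the final step matches the delivered asset."""
    for prev, entry in zip([None] + chain[:-1], chain):
        expected = _entry_hash(prev) if prev else None
        if entry["prev_entry_sha256"] != expected:
            return False  # a step was altered, removed, or reordered
    return chain[-1]["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()


# Example: a photo is captured, then edited; the consumer verifies the final file.
original = b"raw photo bytes"
edited = b"cropped photo bytes"
capture = make_entry(original, "captured", "hypothetical-secure-camera", None)
edit = make_entry(edited, "cropped", "hypothetical-editor", capture)
print(verify_chain(edited, [capture, edit]))  # True
```

In a real deployment the entries would also be digitally signed by the capturing device or editing software, which is where the chipmakers and software companies in the coalition come in; the sketch above only shows the linking, not the signing.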

The formation of the C2PA builds on the earlier efforts of the Content Authenticity Initiative (CAI) and Project Origin. According to the Project Origin website, the stated goal of that effort is to create a “provable source of origin for media, and knowing that it has not been tampered with en route.” The CAI, for its part, echoes many of the same tenets as the C2PA, describing “a system to provide provenance and history for digital media, giving creators a tool to claim authorship and empowering consumers to evaluate whether what they are seeing is trustworthy.”

Recent technologies such as deepfake videos have heightened the need not only to secure media shared across the internet but also to verify and validate the hardware the content was originally captured on or manipulated with, in order to suss out synthetically manipulated content. To help address the very tricky subject of deepfake content, Microsoft’s Eric Horvitz, a technical fellow and chief scientific officer at the company, sought input from leading blockchain security network specialists.

As for other specifics, such as timelines, code releases, and privacy details, the coalition’s announcement was light. Thus far, the CAI has a cool video showcasing its first end-to-end case study: tracing captured media from hardware-secured smartphone technology through its evolution online.

Video: “Adobe: Secure Mode Enabled” from Wild Combination on Vimeo.