The Content Authenticity Initiative (CAI) proposes a new industry-standard attribution model for images, videos and more. But how will it work? And what obstacles will first need to be overcome?

How do we know what we’re looking at hasn’t been manipulated in some way?

This question is being asked more than ever now that we’ve come to appreciate some of the effects of misinformation online. Whether it’s the words we read, the images we view, or the videos we watch, everything we see (and hear) online has the potential to be edited in a way that suits a specific agenda.

This, of course, is not a new issue. Every day, we actively take steps to guard ourselves against what we suspect may not be accurate. We may choose not to read certain newspapers or publications, for example, that have a record of spreading falsehoods. When a story breaks, we may temper our desire for quick answers with the knowledge that inaccurate details typically circulate as a situation unfolds. The ongoing COVID-19 pandemic has illustrated only too well how critical accurate, up-to-date information is as a foundation for decision making.

But as the ways in which we absorb information change over time, our safeguards should evolve accordingly. One of the latest efforts to bring about change comes courtesy of Adobe, Twitter and The New York Times, who have come together to develop the Content Authenticity Initiative (CAI).

What is the Content Authenticity Initiative?

The initial mission of the Content Authenticity Initiative is to “develop the industry standard for content attribution.”

The initiative – which focuses on images first, but may eventually extend to videos, documents and streaming media, among other things – details a set of standards that would allow content creators and publishers to securely attach attribution data to their assets, in a way that would “fit into the existing workflows of each of the target users.”

By doing this, the creator or publisher of an asset can document its origins and what has happened to it at every stage of creation and processing, which in turn helps the consumer judge whether it can be trusted.
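
To make the idea concrete, here is a minimal sketch, in Python, of what a signed attribution record could look like. Everything in it is hypothetical: the field names and the make_claim and verify_claim helpers are illustrations rather than the CAI’s actual design, and an HMAC stands in for the certificate-based asymmetric signature a real scheme would use.

import hashlib
import hmac
import json

# Hypothetical demo key; a real scheme would use a private key tied to a
# certificate, not a shared secret.
SIGNING_KEY = b"demo-signing-key"

def make_claim(asset_bytes, creator, actions):
    """Bind attribution data to the asset by hashing its exact bytes."""
    claim = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "creator": creator,
        "actions": actions,  # e.g. every edit applied, in order
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    # HMAC stands in for the asymmetric signature a real standard would use.
    claim["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return claim

def verify_claim(asset_bytes, claim):
    """Check the signature, then check the asset still matches its hash."""
    unsigned = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(expected, claim["signature"])
            and unsigned["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest())

image = b"raw image bytes would go here"
claim = make_claim(image, "Example Photographer", ["captured", "cropped"])
print(verify_claim(image, claim))         # True: asset untouched
print(verify_claim(image + b"!", claim))  # False: asset was altered

The basic property an attribution standard relies on falls out of this structure: if any byte of the asset changes after the claim is made, the recorded hash no longer matches and verification fails, so the consumer knows the attribution no longer describes what they are looking at.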

The initiative was jointly announced by the three organizations above, which are said to have collaborated with representatives from other commercial organizations, as well as human rights groups, think tanks and advocacy groups. The white paper’s list of authors shows involvement from the likes of the BBC, Microsoft, Truepic and the University of California, Berkeley.

Is it not possible to use