The Content Authenticity Initiative (CAI) describes a new industry-standard attribution model for images, videos and more. But how will it work? And what obstacles would first need to be overcome?

How do we know what we’re looking at hasn’t been manipulated in some way?

This is a question that is now being asked more than ever as we’ve come to appreciate some of the effects of misinformation online. Whether it’s the words we read, the images we view, or the videos we watch, everything we see (and hear) online has the potential to be edited in a way that suits a specific agenda.

This, of course, is not a new issue. Every day, we actively take steps to guard ourselves against what we suspect may not be accurate. We may, for example, choose not to read certain newspapers or publications that have a record of spreading falsehoods. When a story breaks, we may temper our desire to find out information quickly with the knowledge that a number of inaccurate details typically leak out as a situation unfolds. The ongoing COVID-19 pandemic has illustrated only too well just how critical it is to have accurate, up-to-date information as a foundation for decision making.

But as the ways in which we absorb information change over time, our safeguards should evolve accordingly. One of the latest efforts to bring about change comes courtesy of Adobe, Twitter and The New York Times, who have come together to develop the Content Authenticity Initiative (CAI).

What is the Content Authenticity Initiative?

The initial mission of the Content Authenticity Initiative is to “develop the industry standard for content attribution.”

The initiative – which has images as its first focus, but may eventually spread to videos, documents and streaming media among other things – details a set of standards that could allow content creators and publishers to securely attach attribution data to their assets, in a way that would “fit into the existing workflows of each of the target users.”

By doing this, the creator or publisher of the asset can explain its origins and what’s happened to it at every stage of its creation and processing, which in turn would help the consumer to work out whether or not it could be trusted.

 

The initiative was jointly announced by the above three organizations, although they are said to have collaborated with representatives from other commercial organizations, in addition to human rights groups, think tanks and advocacy groups among others. Examining the list of authors of the white paper shows involvement from the likes of the BBC, Microsoft, Truepic and the University of California, Berkeley.

Is it not possible to use existing metadata standards for this?

Photographers, image agencies and others who publish images online already have the option of including metadata to communicate information about the creator of an image. This can be read and understood by various programs online and offline – but this new initiative goes many steps further.

One of the weaknesses of existing metadata standards is that this information can be easily edited or removed, even by those with only limited technical knowledge. The standards proposed by the CAI are not designed as a replacement for these existing standards, but rather as a tamper-evident way of showing attribution, along with the history of any edits, their purpose and the persons responsible for making them.
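
To illustrate just how fragile that existing metadata is, here is a minimal sketch in Python using the Pillow library (an assumed choice of tool, not one the CAI prescribes) that re-saves an image with only its pixel data, silently discarding any EXIF, IPTC or XMP attribution along the way.

# A minimal sketch: re-save an image without its metadata (Pillow assumed).
from PIL import Image

def strip_metadata(src_path, dst_path):
    img = Image.open(src_path)
    clean = Image.new(img.mode, img.size)   # same size and colour mode, no metadata
    clean.putdata(list(img.getdata()))      # copy pixel data only
    clean.save(dst_path)                    # the saved file carries no attribution

# strip_metadata("credited.jpg", "anonymous.jpg")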

This, according to the white paper, would be “built upon XMP, Schema.org and other metadata standards that goes far beyond common uses today,” although it does also acknowledge that this level of transparency would need to be balanced with the privacy considerations of creators, publishers and consumers.

How is this more secure?

Rather than it simply being a case of filling in another few metadata fields, what the CAI describes is a system that combines greater transparency with more robust security.

This would allow every stage of an asset’s creation and processing to be traced by the end user. Changes at every stage are recorded as ‘assertions’, and these are cryptographically hashed for security. Each time a new assertion is made, it’s linked to the previous one, which creates a chain that ends up showing the full history of that asset.
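
To make that chaining idea concrete, here is a minimal Python sketch; the field names are purely illustrative assumptions, not something the white paper spells out.

# Illustrative hash chain of assertions (field names are assumptions, not CAI's).
import hashlib
import json

def make_assertion(action, previous_hash):
    assertion = {"action": action, "previous_hash": previous_hash}
    payload = json.dumps(assertion, sort_keys=True).encode("utf-8")
    assertion["hash"] = hashlib.sha256(payload).hexdigest()
    return assertion

capture = make_assertion({"type": "capture", "creator": "Jane Doe"}, previous_hash="")
edit = make_assertion({"type": "edit", "tool": "crop"}, previous_hash=capture["hash"])

# Altering the capture record after the fact would no longer match the hash
# referenced by the edit record, making the tampering evident.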

The integrity of this information would be ensured through digital signature technology, which would reveal any tampering. A visual indicator of some sort would let the viewer of the asset see that these details are present, so that they can be scrutinized.
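
As a rough sketch of what that tamper-evidence could look like in code, the snippet below signs an illustrative attribution payload and shows verification failing as soon as the bytes change. The Ed25519 key and the third-party 'cryptography' package are assumptions made for the demonstration, not details taken from the white paper.

# Illustrative signing and verification (the 'cryptography' package is assumed).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

claim = b'{"creator": "Jane Doe", "assertions_hash": "..."}'  # illustrative payload
signature = private_key.sign(claim)

public_key.verify(signature, claim)  # passes: the content is unchanged
try:
    public_key.verify(signature, claim + b" ")  # any modification fails
except InvalidSignature:
    print("Tampering detected: the signature no longer matches the content.")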

How is this likely to work in practice?

The white paper describes different workflows using the CAI standards to show how it’s likely to work in practice for different purposes.

Individual creatives, for example, will have a different workflow and different requirements from photojournalists. But the basic idea is that every device or process that’s used in the creation or processing of an image or another asset supports the CAI standards so that the chain remains unbroken.

So, an image would begin life with the initial capture attribution details (such as the photographer’s name) already in place, as the photographer would program these into the camera prior to capture. Other necessary details – the specific camera, time and date, and so on – may already be part of existing EXIF or IPTC metadata, so if these are also required they could potentially be pulled through automatically.
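
As a rough sketch of how those existing fields might be pulled through automatically, the snippet below reads EXIF capture details with the Pillow library (again an assumed tool, not one named by the initiative).

# Illustrative read of existing EXIF fields that could seed the attribution data.
from PIL import Image, ExifTags

img = Image.open("photo.jpg")
exif = img.getexif()

capture_details = {ExifTags.TAGS.get(tag_id, tag_id): value
                   for tag_id, value in exif.items()}

# Fields such as 'Model' or 'DateTime', where present, could be carried forward
# into the initial attribution record alongside the photographer's name.
print(capture_details.get("Model"), capture_details.get("DateTime"))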

Update (10/21): Adobe has recently released more details on how this is likely to work in practice, with a video that shows a prototype of its attribution tool in action. 

Each time the image enters some kind of process, such as image editing, any changes made to the asset can be recorded. These would combine automatically generated information with user-input details, building up a picture of the asset’s journey from its creation to publication.

What obstacles is this likely to face?

While the white paper goes into more detail on what these standards would entail, there are still many details that probably won’t be known for sure until they are released. These would no doubt be influenced by the third parties participating in its development, and would change as the standards evolve over time.

So what might stand in the way of its success? Obviously, it would need to be adopted by a critical mass of organizations for it to gain traction. Naturally, with Adobe, The New York Times and Twitter driving the initiative, these organizations will set an example for other software manufacturers, publishers, social media sites and other relevant parties to follow suit, which should help it to grow into the universal set of standards it aims to be.

Should this be widely adopted, it will be interesting to see whether any visual indicators appear consistently across different platforms, helping people to quickly identify them and get accustomed to them being there, or whether the design and implementation would be left to each individual platform.

It will also be interesting to see how obvious these are. Twitter’s current approach for highlighting content that violates its terms (below), for example, cannot be ignored, and needs to be interacted with before the content can be viewed. However, the indicator for manipulated media (bottom) can easily go unnoticed.

Other issues are more down to the practicalities of such a system. Every asset, for example, requires an unbroken chain of CAI-supporting devices and/or processes, with every individual involved adhering to the process, for it to work. In the workflow example provided for photojournalists, Adobe states that the following would all need to support the standards:

the camera or other device used to take the image
any application used to edit the image
any additional photo-editing application used by the publisher
the content management system to which the content is uploaded
the social media platform on which the content is viewed

Clearly, the greater the number of individuals and processes in that chain, the greater the likelihood that one will not support it, or something else will go wrong to stop a full picture of an asset’s history from being developed. The white paper does acknowledge that when using legacy systems or devices that do not support the standards, it could be a case of a newsroom vouching for the authenticity of a photojournalist’s image or another media format post-capture, using CAI-supporting tools.

Professional photographers and other content creators who are aware of the initiative may take the necessary steps to ensure everything is adhered to. But what about citizen journalists and smartphone users? The white paper does mention smartphone users when it discusses a possible workflow for human rights activists, but this clearly only applies to a fraction of images or videos that may be captured by non-professional content creators using everyday devices.

Another issue is that, while tampering with this information may be evident to those who check it, it appears that either stripping this information completely from the image, or taking a screenshot of it to create a new version of the image, would still be possible. Indeed, the paper addresses this concern:

…  the “analog hole” or “rebroadcast attack,” which are common terms for subverting provenance systems by capturing an image of a photograph or computer screen, are not addressed directly by the model.

This is important, as the complete absence of this information from an image is not in itself an indication that it has been manipulated (just as the presence of CAI information does not necessarily mean an image can be trusted, only that the necessary information exists). If the standards become widely adopted and provenance is this easy to check, those intent on manipulating content would presumably upload the media stripped of any information, rather than publish it in a way that reveals their meddling. That becomes easier when you consider that the many images and other types of content that predate the introduction of the CAI standards would no doubt continue to circulate online.

The white paper does propose possible solutions to image duplication, such as digital watermarks and depth mapping to differentiate screenshots from genuine images. Nevertheless, this does still highlight the main problem with easily downloadable content, and thus the importance of being able to control it at all times, such as by streaming images and enabling download and screenshot protection.

It may also be the case that, by the time it’s clear that an asset of some sort has been manipulated, enough people will have already seen it for it to have the desired effect. Given that the onus is on the consumer to check this information so that they can make up their own mind as to whether the asset can be trusted, it’s likely that those without the patience to carry out these checks may simply consider the presence of a visual indicator as a mark of trustworthiness. A further issue is the possibility of an asset deemed to be authentic (such as an image) accompanying another type of content (such as text) that contains falsehoods.

Final thoughts

In many ways, this initiative appears to be a sensible solution to an increasingly important issue. It may be ambitious, but all such solutions must begin with these kinds of steps.

It’s encouraging to see a range of users have been taken into consideration, and it’s good to see the participation – at least in the white paper – of some of the most important industries and organizations. Furthermore, the fact that what’s being proposed is not only applicable to images but to other media – regardless of whether they can be downloaded or are designed to be streamed – only broadens its appeal, which should help it to develop more credibility in wider circles.

Social media channels in particular are under increasing pressure to flag up problematic content, and have a clear interest in adopting a standard that delivers greater transparency over an asset’s source and history, although quite how this will be balanced with privacy concerns remains to be seen. Pressure from lawmakers has already forced them to highlight posts that fall foul of their terms, and these new standards appear to be a logical extension of this.

Given the additional complexity of ensuring that an image, video or other asset always passes through a CAI-supporting process, it seems likely that it will still be some time before any significant amount of online content appears with the relevant information and indicator attached. The long-term success of such an initiative will greatly depend on its visibility to the everyday user, as well as their knowledge of its purpose. But it’s worth remembering that the vast majority of online images, videos and documents are not newsworthy or political in nature, which means they are unlikely to be targets for manipulation and are thus unlikely candidates for any indicator of authenticity.

And perhaps it’s this issue that raises one of the most interesting questions: How will images, videos and other assets that comply with CAI standards co-exist with all other types of content online? Will it work to their advantage by making the presence of any visual indicators more obvious? Or will these be diluted by non-conforming media, and end up being overlooked?
