Growing uncertainty around what we can trust online has given rise to a number of initiatives whose goal is to improve transparency. But is this indicative of a broader trend?

Late last year, online dictionary Merriam-Webster announced that the word “authentic” had the honor of being its 2023 Word of the Year.

To many, this might not have come as a surprise, given the events of the last twelve months. But with runners-up such as “deepfake” and “dystopian,” and with the results reflecting search volumes on the site over the past year, there is an obvious temptation to draw conclusions about the public’s mood and focus.

Some would dismiss Merriam-Webster’s announcement as little more than a PR exercise. It’s also worth bearing in mind that these results are backward-looking – a term’s popularity in one year arguably tells us nothing about the next.

Nevertheless, as the issues that led to these searches persist beyond the year’s end, it seems likely that tools and technologies that serve as a mark of authenticity will continue to receive attention.

Building trust

One reason for this is that the matter is just as crucial to publishers of online content as it is to the audiences that consume it. For audiences, knowing what’s authentic matters because they want to be assured that they are receiving accurate information about the world. For publishers, it’s their entire reputation that’s at stake.

The demands of rolling news coverage, together with a reliance on citizen journalism and a constant stream of newer publishers challenging so-called legacy media, leave ample room for mistakes. While the consequences of these mistakes may often be insignificant, they can easily invite accusations of bias, particularly in the reporting of global conflicts and health-related matters. It’s not just a question of rigorous fact-checking, but of being able to do so at speed. Get both right and the prize is the public’s trust.

Publishers have always been conscious of this, but given the ease with which visual content can be manipulated and presented out of context, there’s an imperative to demonstrate this kind of trustworthiness in a more tangible way.

Any organization can claim to be trustworthy, but it takes little more than anonymous replies to social media posts, linking to alternative sources of information, to undermine this. So the logical response is to make the process of assessing this information as transparent as possible.


In May 2023, for example, the BBC launched its BBC Verify brand, which comprises around 60 journalists working in a dedicated space “with a range of forensic investigative skills and open source intelligence (Osint) capabilities at their fingertips”. These include Analysis Editor Ros Atkins, best known for his Ros Atkins On… series, which distills complex issues into short video explainers.

The BBC states that, rather than sorting content into verified and unverified buckets, BBC Verify aims to explain the verification work that has taken place. This allows viewers to assess whether its methods are sound, rather than simply taking its word that a piece of content is genuine.

Similarly, the Content Credentials tool from the Content Authenticity Initiative allows for greater transparency over the creation and editing of online content. By letting viewers inspect the credentials attached to a work – the content producer, any edits that have been made, the original images used in any composite, and when and by whom the work was signed – it leaves them better informed about what to trust.
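To make that concrete, here is a minimal sketch in Python of the kind of provenance record a viewer might inspect. The structure is a simplified illustration of concepts from the C2PA standard that underlies Content Credentials – the claim generator, signature details, recorded edit actions, and ingredient images – not the official schema, and the values are invented for the example.

```python
# Simplified stand-in for a Content Credentials (C2PA) manifest.
# Real manifests are embedded in the media file and cryptographically
# signed; this dict only illustrates the kinds of fields they carry.

example_manifest = {
    "claim_generator": "Example Editor 1.0",    # tool that produced the claim
    "signature_info": {
        "issuer": "Example News Ltd",           # who signed the work
        "time": "2024-01-15T09:30:00Z",         # when it was signed
    },
    "assertions": [
        {   # edit history, recorded as C2PA "actions"
            "label": "c2pa.actions",
            "data": {"actions": [
                {"action": "c2pa.created"},
                {"action": "c2pa.color_adjustments"},
            ]},
        },
    ],
    "ingredients": [                            # source images in a composite
        {"title": "original_photo.jpg"},
    ],
}

def summarize(manifest: dict) -> None:
    """Print the provenance details a viewer would want to inspect."""
    sig = manifest.get("signature_info", {})
    print(f"Signed by {sig.get('issuer')} at {sig.get('time')}")
    print(f"Produced with {manifest.get('claim_generator')}")
    for assertion in manifest.get("assertions", []):
        if assertion["label"] == "c2pa.actions":
            for act in assertion["data"]["actions"]:
                print(f"Edit recorded: {act['action']}")
    for ingredient in manifest.get("ingredients", []):
        print(f"Built from: {ingredient['title']}")

summarize(example_manifest)
```

In practice, open-source tooling from the Content Authenticity Initiative reads and cryptographically verifies these manifests directly from the media file, so the edit history can’t be quietly altered after signing.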

These initiatives follow others developed over the past few years for advertisers, whose transparency concerns are more to do with oversight of the supply chain: understanding where their ads have been shown, how their budgets are allocated, and whether ad fraud is being countered.

While these initiatives, which include the IAB’s Gold Standard and ads.txt, may appear distinct from things like BBC Verify and Content Credentials, they’re not wholly unrelated. Publishers committed to creating trustworthy environments are more likely to attract the right kind of audiences, which, in turn, are more valuable to advertisers.
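The ads.txt mechanism shows how lightweight such transparency measures can be: it is just a plain-text file at the root of a publisher’s domain, each line naming an ad system authorized to sell or resell that publisher’s inventory. The sketch below fetches and parses one; example.com is a placeholder domain, and the parsing covers only the spec’s basic comma-separated record format rather than being a complete implementation.

```python
import urllib.request

def fetch_ads_txt(domain: str) -> list[dict]:
    """Fetch and parse a publisher's ads.txt file.

    Each data line has the form:
      <ad system domain>, <seller account ID>, <DIRECT|RESELLER>[, <certification authority ID>]
    """
    with urllib.request.urlopen(f"https://{domain}/ads.txt", timeout=10) as resp:
        text = resp.read().decode("utf-8", errors="replace")

    records = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()    # drop comments and whitespace
        if not line or "=" in line:             # skip blanks and variables (e.g. CONTACT=)
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) < 3:
            continue                            # skip malformed lines
        records.append({
            "ad_system": fields[0],
            "seller_account_id": fields[1],
            "relationship": fields[2].upper(),  # DIRECT or RESELLER
            "cert_authority_id": fields[3] if len(fields) > 3 else None,
        })
    return records

# Example: list which ad systems may sell a publisher's inventory directly.
for rec in fetch_ads_txt("example.com"):        # placeholder domain
    if rec["relationship"] == "DIRECT":
        print(rec["ad_system"], rec["seller_account_id"])
```

A buyer or auditor can compare these records against where the publisher’s inventory is actually being offered for sale – exactly the kind of supply-chain oversight advertisers are after.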

A new era?

Perhaps it’s the fact that the emergence of these tools and systems coincides with Google’s long-awaited deprecation of third-party cookies in Chrome – which places more focus on exactly how online viewers are being targeted by ads and the data used to do so – that makes it seem like the following year will witness the start of a new chapter in online transparency.

Or perhaps these are a natural and logical continuation of what has already come before. Many publishers already state their editorial policies and highlight corrections where necessary; detail their ownership and funding; mark native advertising clearly; show authorship and links to writers’ social media channels; and adhere to requirements around affiliate commissions when recommending products that can be purchased on a third-party site. 


Over the next few years, these practices are likely to be joined by policies that detail the responsible use of AI tools, and by more prominent labeling of AI-generated content.

While some of these steps may be required by regulation, publishers that voluntarily go further in demonstrating their trustworthiness are more likely to hold themselves to a higher standard for their audiences and partners, and so will fare better under scrutiny. Given that the average person will realistically only take their news from a limited number of sources, steps like these may well determine who the publishers of tomorrow actually are.