Tools for manipulating images are more readily available than ever, and that undermines trust in what we see online. So what can be done to rebuild it?

Can you guess 2017’s word of the year?

The answer to that burning question is “fake news” – thanks in large part, of course, to the US president at that time.

But the fact that we’re still debating what can be done about disinformation and misinformation years later shows just how significant this issue has become.

Social media platforms rely on people being able to share content freely and easily.

Consequently, such platforms must also deal with the fallout from content created and used for harm.

Most of the largest platforms are now making greater efforts to educate users on the subject, but a lack of protection and limited visibility over the origin of media mean these issues will persist.

If these platforms can rectify this and demonstrate that they take the safety of their users seriously, they can rebuild the trust that has been eroded over the past few years.

With the online world facing the challenge of speed vs accuracy, it’s perhaps unsurprising that the WEF’s 2024 Global Risks Report found “misinformation and disinformation to be the top risk for the world in the next two years.”

At the heart of this epidemic is image manipulation.

Out-of-context images have repeatedly clouded people’s judgment, making them more susceptible to whatever false narrative might be attached to an image shared online.

But images that have been heavily altered, or that are entirely fictitious to begin with, pose even greater issues.

What is image manipulation?

Image manipulation refers to the act of adjusting a digital picture in some way.

Often, this is done to help create a certain creative look or to fulfill a business objective. It can, for example, be used to fine-tune details, make corrections, or even create entirely new compositions.

The history of image manipulation

The use of manipulated images has a longer history than you might think. While it’s reasonable to view it as a modern issue, the use of image editing to deceive the public dates back to the 19th century.

It’s claimed that the first case of image manipulation took place in the early 1860s – and that this particular instance shaped the future of money.

Abraham Lincoln’s face was edited onto the body of another politician, John Calhoun, to “distract from his ‘gangly’ frame.” As for the connection with money, this manipulated image was believed to be the basis of Lincoln’s original five-dollar bill.

The widespread use of image manipulation became particularly noticeable during the early days of fascism.

In Nazi Germany, for example, images were frequently edited to change their meaning, often to demonize minorities.

This can also be seen in a famous photo of Italian dictator Benito Mussolini, which was edited to remove the horse handler to create a sense of “heroism”.

During conflicts, photographs were used to uplift spirits, vilify opponents, and misrepresent events, evoking and exploiting the emotions of the public amid the turmoil of war.

These examples only scratch the surface of image manipulation’s complex history and impact on society.

With the constant availability of online content, one might assume that people are careful not to accept everything at face value.

Sadly, this isn’t the case.

Why is image manipulation becoming more of a problem?

Easy access to image editing software, together with the growth of AI tools, means this issue stands to disrupt society in a way we haven’t seen before.

According to image search engine Everypixel, the growth of AI-generated images has led to more images being created in a single year than humans have produced in over a century, with over 15 billion images already generated using text-to-image algorithms.

People have also become somewhat desensitized to image manipulation because it’s usually used in ways the average person would deem acceptable, such as improving profile pictures or Instagram posts.

But in recent times, fake and manipulated images have made headlines for the wrong reasons, such as the Princess of Wales’s Mother’s Day post and Taylor Swift’s AI-generated explicit images.

Deepfakes are also increasingly being used to create misleading celebrity endorsements, causing scam and fraud headaches for both internet platforms and their users.

What can be done to stop image manipulation from spreading fake news?

Stopping image manipulation completely is, of course, impossible. But when it comes to viewing these images online, there are a number of options available to help people identify manipulation and fakery.

Industry standards, comprising standardized verification, clear editing guidelines, and ethical codes, can help fight against the proliferation of fake images.

Some governments have even taken it into their own hands and implemented education in schools to help people spot fake media, despite the constantly changing nature of the issue making this more difficult.

Social media platforms play an important role too.

To address the proliferation of manipulated images, these platforms could work more closely with fact-checking organizations to verify the legitimacy of shared content.

Digital signatures, watermarking, and other image analysis tools could also be integrated directly into social media platforms to help flag potentially misleading content.
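As a rough illustration of the signing idea – a sketch, not any platform’s actual implementation – a publisher could attach a cryptographic signature to an image’s exact bytes at upload time, so that any later pixel change invalidates it. The example below uses Python’s standard-library HMAC as a stand-in for a proper public-key signature; the key name is hypothetical:

```python
import hashlib
import hmac

# Hypothetical platform signing key. A real system would use
# public-key signatures (e.g. Ed25519), not a shared secret.
SIGNING_KEY = b"platform-secret-key"

def sign_image(image_bytes: bytes) -> str:
    """Produce a hex signature over the exact image bytes."""
    return hmac.new(SIGNING_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    """Return True only if the bytes are unchanged since signing."""
    expected = sign_image(image_bytes)
    return hmac.compare_digest(expected, signature)

original = b"\xff\xd8\xff\xe0...original JPEG bytes..."
sig = sign_image(original)

print(verify_image(original, sig))            # unmodified image -> True
print(verify_image(original + b"edit", sig))  # tampered image -> False
```

Watermarking works differently (the mark is embedded in the pixels themselves), but the verification principle is the same: the platform can flag content whose current state no longer matches what was originally attested.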

The ease with which images can be stolen is arguably the most important enabler of many of the risks associated with manipulated media.

Part of that problem is that the JPEG has been the default image format since the internet’s inception.

Yet it offers no meaningful protection against theft – simply right-click and save, and from there, anyone with some image-editing know-how can manipulate an image however they wish.

How do Content Credentials intend to influence the future of imagery and image manipulation?

Content Credentials give users context on the content they’re met with. This in turn allows them to make better decisions on whether or not the image can be trusted as a source of information.

The Adobe-led initiative enables proper attribution for all images posted online, ensuring that the original image and its producer are visible and creating more transparency.

Furthermore, it’s possible to see how the image was manipulated, including editing history and any use of AI tools.
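Conceptually – and simplified well beyond the real C2PA specification that underpins Content Credentials – a credential is a manifest that travels with the image: who made it, a fingerprint of its content, and a log of edits. The toy Python version below uses invented field names to show how a mismatched fingerprint reveals that an image is no longer the one the manifest describes:

```python
import hashlib
import json

def content_hash(image_bytes: bytes) -> str:
    """Fingerprint of the image's current content."""
    return hashlib.sha256(image_bytes).hexdigest()

def make_manifest(image_bytes: bytes, creator: str, edits: list) -> str:
    """Build a toy provenance manifest. Field names are illustrative,
    not the real C2PA schema."""
    return json.dumps({
        "creator": creator,
        "content_hash": content_hash(image_bytes),
        "edit_history": edits,  # e.g. ["cropped", "AI generative fill"]
    })

def matches_manifest(image_bytes: bytes, manifest_json: str) -> bool:
    """True if the image is still the one the manifest describes."""
    manifest = json.loads(manifest_json)
    return content_hash(image_bytes) == manifest["content_hash"]

photo = b"...image bytes..."
manifest = make_manifest(photo, "Jane Photographer", ["exposure +0.3"])

print(matches_manifest(photo, manifest))         # unmodified -> True
print(matches_manifest(photo + b"x", manifest))  # altered -> False
```

The real standard additionally signs each manifest cryptographically, so the edit history itself can’t be forged – which is what makes the attribution and AI-use disclosure trustworthy rather than self-reported.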

Additionally, by embedding images in news articles using SmartFrame, images can have all of the above while being protected against both drag-and-drop attempts and right-clicks, with a copyright warning thwarting screenshot attempts too.

And, as every SmartFrame is encrypted and only appears when a user is actively browsing a website, the image disappears as soon as the viewer closes the browser tab or window.

Image manipulation isn’t new – but the need to highlight when it’s used is becoming non-negotiable

The targeting of celebrities and key world events, and the content used to do so, clearly shows the need for increased regulation and protection.

Through clear labeling, increased education, and added accountability with repercussions, we can help mitigate the spread of misinformation and rebuild trust in the images we see online.

As James Warren, NewsGuard’s executive editor, noted in a recent Substack post: “Kim Kardashian’s enhanced glam shot on a magazine cover is one thing, fiddling in the slightest with a photo of the Israel-Hamas war is another.”
