How can a manipulated image damage a business?

The ability to change an image’s content has become easier than ever, thanks in part to GenAI tools. But could distributing an altered image lead a business to face serious consequences? Dan Raywood investigates.

Rewind to Mother’s Day 2024, and most people will probably remember the story. For one mother of three, sharing a photo of herself and her children exposed the reality of image manipulation.

In that case, the photo of the Princess of Wales was later withdrawn amid concerns about manipulation. The intense press coverage around it may even have brought forward the announcement of her cancer diagnosis.

Beyond its immediate impact, the episode highlighted how photo-editing techniques can be used to manipulate and alter images – in this case, apparently, to merge several shots into a single, ideal photo – and how ready the public is to scrutinize the results.

This was arguably the first incident of its kind – but could we see another altered image cause the sort of disruption that the Royal Family faced?

After all, if an image of a C-level executive were to be altered in order to portray them in a negative way, what would the impact be on them and their business?

Fake news – real consequences

Ilia Kolochenko, CEO at ImmuniWeb and a fellow at the British Computer Society, says there have been “countless incidents where celebrities and politicians were tricked, harassed and blackmailed with deepfakes, fake news, and misinformation.”

He admits that the vast majority of these incidents are “isolated events, and they didn’t really cause disruption.” But there is evidence of several “small but well prepared cyberattacks” affecting cryptocurrency influencers, in which X (formerly Twitter) accounts were compromised and posts encouraged followers to invest in a specific NFT or cryptocurrency. “So we had incidents that caused tangible financial damage,” he says.

Reputational damage

Quantifying the cost of a reputational attack is particularly hard, especially compared with more common cybercrimes, where the financial loss is plainly visible.

David Sancho, senior threat researcher at Trend Micro, says it’s fairly obvious that a company’s reputation can be damaged by deepfakes. One of the most immediate ways this could happen is by making the company appear to take a stance on a highly sensitive topic — something that can be done relatively easily.

“I don’t know what the agenda behind those attacks might be, because pure reputational damage ‘just because’ is not very likely unless there’s a hidden agenda by the attacker, so that would possibly imply hacktivists,” Sancho says.

“A hacktivist can say ‘look at this company doing this’ just to lower the reputation. It can happen, but I haven’t seen it happen very often.”

He also notes that a stock price could be affected by false claims, but the effect would most likely be temporary: if the company denied the claims, the price would return to normal.

“In that meantime, the attacker can invest in that artificially low stock just to make money later. It can happen. I’m not saying that it hasn’t happened; it probably has. But since it’s very difficult to quantify, it’s very difficult to know. Stock fluctuates, and there’s a lot of bad behavior out there, so I haven’t seen it, but it theoretically is possible.”

Political agenda

Sancho says this sort of action is undertaken by hacktivists “or people with political agendas.” It would also take a very determined individual to mount a sustained series of attacks against a single entity and cause lasting damage.

Ultimately, most people may have a bad experience with a product or service, but after calming down they usually move on. Sancho agrees: “it is probably not worth the effort for most people.” The most likely scenario is a political campaign aimed at disrupting a rival party’s electoral efforts, such as the Conservative Party’s 1997 “New Labour, New Danger” campaign.

A notable incident from 2020 involved an altered video of Nancy Pelosi, former speaker of the U.S. House of Representatives, which appeared to show her “drunk and slurring her speech.” The clip, described as “low quality and jerky,” was later revealed to have been manipulated.

Stamp out the impact

These incidents could be damaging if the victim — or their company — is unable to stamp out the impact and prove it to be based on a falsehood. So what can a business do to try to undo the damage or defend itself against this kind of attack?

Sancho says this is a hard situation to recover from “because by the time they open their mouths, the damage is already done, so the only thing they can do is say that this is a lie.” He says there’s not a lot you can do when somebody says false things against you.

“In the context of reputation, it probably will be done only in a political context,” he says. “It’s not impossible, but it just won’t happen very often, unless there is some other political agenda attached to it.”

Kolochenko says we have “almost attained the point of no return” with this sort of disruption, but it has nothing to do with AI “because it was possible before, but now with AI, it’s just cheaper and faster and more efficient.”

He believes every company is vulnerable to a “massive AI-powered misinformation attack.” He cites an example where someone could create deepfake pornographic videos and post them on sharing sites and social media – in many cases, the material could spread faster than it can be removed.

This could leave management tied up dealing with the aftermath, and a response “may be uncoordinated and unprofessional.” Mistakes can be made that would be “more harmful than useful because people will be panicked.”

LLM legacy

Kolochenko warns that even after all of the stories are taken down, AI crawlers will scrape the data within six to 18 months, LLMs will be trained on it, and the details will resurface long after the incident.

“So AI will be poisoned with this information, and given that people will probably be relying even more on AI, this can lead to catastrophic reputational damage that will be kind of irreparable,” Kolochenko says. “Given that the right to be forgotten with LLMs from a technical viewpoint is non-executable, we have a big challenge if we’re talking about a smaller company.”

He continues: “Fraudulent techniques and AI can cause long-lasting reputational damage to everyone. A company can become toxic, and you won’t be able to clean up the internet. You also won’t be able to remove that data from training sets of AI models, unless you obtain a court order to remove the models themselves. But even then, you cannot remove the data from the training set.”

The impact on a smaller business is the telling factor here: facing a nation-state assault, whether a cyber-attack or a misinformation campaign, can have huge consequences. The response should involve both legal and marketing teams, but by the time the attack lands, recovery may already be too difficult.

While the use of AI to manipulate images is not at the heart of this issue, it is certainly making it easier – and the “feeding” of GenAI tools with that data could cause problems for businesses trying to refute the claims. Having a response plan is crucial, as is keeping an eye on what others are saying about you.

Freelance writer, speaker and editor Dan has spent 23 years in B2B journalism, with over 15 years covering cyber security.