Images are behind many of today’s online threats, and big tech companies are struggling to defeat them. But their failures are often compounded by our own actions. So what are they, and we, doing wrong?

SmartFrame recently joined the online safety tech industry body OSTIA as an associate member, and will be working with the body’s members to raise awareness of the tools available to help protect people online. 

OSTIA’s formation earlier this year follows a white paper released by the UK government that focuses on online harms. This paper details the various threats that internet users currently face, and outlines a new statutory duty of care – overseen by communications regulator Ofcom – that’s aimed at curbing various illegal and harmful activities. 

Online harms exist in many different forms, and these change over time, so a comprehensive discussion of them all would be well outside the scope of this article. But given that images are at the heart of many of these threats, a closer look at how they are captured, viewed, published, downloaded and shared helps us to understand where problems may lie. It’s not simply a case of needing to protect ourselves and those around us from harmful content; we also need to recognize how our own perfectly lawful actions can end up being problematic.

Threats can, of course, exist in all corners of the internet. But it’s easy to see that many of the most problematic threats we face stem, in part, from the fact that social media platforms, together with other major online players, have become victims of their own success. As their audiences have ballooned, they’ve struggled to keep up with the volume (and nature) of content posted on their channels. There’s little doubt that these platforms are far safer than they would be without the tools and processes already in place, and progress continues to be made as threats evolve. Nevertheless, their failure to address more basic issues is worrying, and a key factor in allowing these threats to continue.

So what are these threats, and what risks are we taking? Where are we being failed by those we expect to protect us? And how do we become more responsible internet users?

The threats we face

Internet safety is a critical issue, not least because the vast majority of us are, in some way or another, online. In the UK, nearly nine in ten adults and 99% of 12- to 15-year-olds are online. In the US, over 85% of the population has access to the internet, which equates to around 281m people. Within these figures are countless vulnerable people, from children and the elderly through to those who may be more susceptible to radicalization.

Online threats, of some description, have always been with us. Even those of us who have been using the internet since it was first commercially available may find it difficult to remember a time when spam folders designed to collect suspicious-looking emails weren’t integrated into email services as standard, or when new PCs and laptops weren’t being bundled with anti-virus software. Scams and viruses are still with us, but as the internet has evolved, so have the dangers. 

Many of today’s threats exploit the things we use every day: the smartphones we carry and the images we take with them; the social media platforms we use; and the cloud-based services in which we store images and other files. These have rooted themselves in our day-to-day lives to the extent that we use them without much thought. And when we use them without much thought, we start to lose sight of the value they could have to someone without the best intentions.

Threats that concern images can typically be sorted into two categories, namely threats based on images we create ourselves and those based on images created elsewhere that are in some way harmful. As may be expected, the most serious image-based threats discussed in the white paper typically concern some type of pornography, such as images depicting child nudity and exploitation, as well as extreme porn and revenge porn. Other threats mentioned that often involve images include harassment and cyberbullying, as well as content that incites violence or hate crimes, or that promotes terrorism or other illegal activity. 

The paper also highlights the problem of perfectly legal content and platforms intended for adult audiences being accessed by children, from pornography to dating apps. The complexity of adopting effective age-verification systems for legal pornographic websites has meant that plans to introduce such a system in the UK were recently abandoned. Similar systems are on the cusp of being introduced elsewhere (such as in France), although technical problems and privacy concerns mean it remains to be seen whether they will be successful.

Together with the growing issue of misinformation – which, most problematically, concerns deliberate acts of disinformation – one can start to appreciate just how multi-faceted the issue of online harm is, and how crucial images are to the effectiveness of many of these threats. But what’s made these such a significant issue today?

How the situation has escalated

Various factors have come together to make this the current reality. One of the most significant of these is the ease with which images can be taken by the average person. Standalone cameras have long been affordable, but technology has moved on to the point where these no longer need to be bought separately, given the proliferation of cameras inside smartphones, tablets and computers. 

Image-editing software, which can be used to manipulate photos for all kinds of malicious purposes, is free, easily downloadable as an app, or even bundled into social media platforms. The overwhelming majority of picture taking and editing is done with no malice, of course, but, as we shall see, such images can also be at the heart of many harms.

Another factor is the increasing ease with which images can be shared with others. Social media platforms rely on user-generated content and activity, as this allows them to understand their audience and sell advertising, while also helping to attract new users. So it’s no surprise that they have spent years making changes to algorithms, user interfaces, integrations, notifications and discoverability; sharing images is easier than ever, but we rarely stop to consider who the real beneficiary of all these changes has been.

The third key factor is anonymity. While illegal and problematic images are also shared outside of social media platforms, the ability to participate in harmful activity while remaining anonymous has allowed individuals – and, more insidiously, groups posing as an individual – to leverage these platforms and target existing users with propaganda, or other harmful content. As a paper produced on behalf of Ofcom highlights, one thing that allows this to happen is the fact that most online communications are asynchronous; when an individual cannot see a response to a message or another communication, they don’t see a negative emotional reaction to it, which can encourage them to act in a less inhibited manner. It’s easy to forget that social media platforms do not require any kind of proof of identity when opening an account, such as a copy of a passport or a driving license – an email address will usually suffice. 

Encrypted messaging apps that allow individuals to securely send and receive images and videos have also risen in prominence in the past few years, their popularity being a logical consequence of years’ worth of focus on online privacy issues. These can be vital to journalists and others who may have a legitimate need for sensitive communication but, as we might have expected, their security has been exploited by criminals. Telegram in particular has been frequently cited in reports on the most prominent terrorist attacks of recent years.

It doesn’t concern me – does it?

Publishing extreme and obviously illegal content is one thing, but the overwhelming majority of online users do nothing of the sort, and comply with both the law and the terms of the platforms they use. Nevertheless, many everyday images that appear completely innocuous can be used for harm, whether the image owner is the intended victim or not.

Social media channels are an obvious place to discover and steal personal images. While threats exist across different platforms, Facebook appears to attract the most criticism, and there are many reasons for this. In November 2021 it was the third most visited website globally, and the most popular social media platform in terms of monthly active users. The fact that people connect with their friends and family here means that the nature of the content shared on it is more personal than on platforms such as Twitter and YouTube. But it’s the diversity of the platform that truly sets it apart.

At a basic level, there’s the personal information and images openly shared with connections (or publicly), which can be used to harm the individual. On top of this, Messenger allows threats to take place privately, while Groups allow harmful ideas to reach new audiences when made public, and to reverberate when set to private. Facebook’s Marketplace can attract scams and other financially motivated threats, while issues with monitoring the live-streaming Live service became clear after last year’s mass shooting in Christchurch, New Zealand. Other social media networks may have one or more of these elements, but it’s the fact that Facebook offers everything under one roof that forces it to contend with such a diverse range of criminal behavior.

There are many reasons why someone may wish to steal images from another person’s account. A common scam on Facebook, for example, sees an impersonator create a duplicate profile of an individual and then send friend requests to that user’s friends under the pretense of this being a new, but genuine, account. Once a request is accepted, the impersonator has the same level of access to the acceptor’s account as other friends of the original user, which may include access to images, friend lists and personal profile information. They can also message that person’s friends and family in an attempt to extract personal information.

Similar impersonations on Twitter see prominent users having their identity cloned in order to send spam or links to malware, or to promote counterfeit goods, with the target audience being followers of the genuine account. Even just a profile image may be enough to deceive other users (and at the time of writing, this image is available to view and download from accounts that have been set to private, or that have blocked malicious accounts from interacting with them or reading their tweets).

Tools for reporting such attacks when they’re noticed have been a part of social media platforms for some time, but a recent news article details a more sinister type of threat that can’t be detected in the same way. Celebrities have traditionally been the subject of fake pornographic images and, more recently, deepfake videos, but a recent story confirms that such a threat is no longer confined to those in the public eye. The story details how, since July 2019, over 100,000 images of women were harvested from social media sites and treated with AI tools to create fake nude images, before being posted on Telegram. While the effectiveness of this technology in creating convincing images has been questioned, it will only improve in the future – and the fact that these images are being published in encrypted channels means the likelihood of a victim ever discovering them is small.

These are just three examples of image-based threats that exist today, and key to them all is the theft of images from a social media profile. Their effectiveness depends on it. The platforms from which these images are taken decide how easily such images can be stolen, which also means they play a role in determining how problematic these threats can become.

Sharing images of ourselves is one thing, but things can become more problematic when we choose to share images of others. Social media being what it is, it’s difficult not to do this when we’re keeping friends and family updated on our lives, but the more that’s shared, the better we need to understand the security measures available to us.

In 2015, Australia’s Children’s eSafety Commissioner revealed that of the 45m images discovered on a single child exploitation site, around half appeared to be innocent images that were sourced from social media platforms such as Kik, Facebook and Instagram. Even if there was nothing inappropriate about the images themselves, they were said to have been frequently accompanied by comments that sexualized the subjects within them, and categorized into folders by the subjects’ appearance, age or another attribute. 

Family blogs maintained by parents without sufficient knowledge of online safety matters were also highlighted as a potential problem for the same reason, and the issue of digital literacy becomes more important as the age of those charged with supervising children rises. Poor knowledge of online risks among elderly internet users creates enough problems of its own for that demographic, but it presents additional dangers when these users have children in their care, as the usual restrictions that may be in place at home on smartphones, tablets and computers may not be enabled on devices belonging to these users.

One would think that as these platforms have grown, and now have greater resources for tackling illegal content, the dangers would be minimized. But if we go by volume, the problem only appears to be getting worse. In 2014, there were just over 1m reports globally of images concerning child exploitation. Last year, the New York Times reported that there had been 18.4m reports worldwide concerning indecent images and videos of children online – double the number of cases of the previous year. The article also mentions that those familiar with the reports claimed that 12m of these cases concerned Facebook’s Messenger platform. Facebook’s own reported data shows that it acted on a total of 37.4m individual pieces of content concerning child nudity or sexual exploitation, and the figures from the first two quarters of 2020 show a marked increase.

The dark reality of keeping us safe

Between the law, social platforms’ own terms of use, and common sense, the average user shouldn’t have too many problems understanding what kind of content can and cannot be shared online. When it comes to stopping the spread of problematic content, progress has no doubt been made over the years, partly in response to threats that have grown in prominence and partly in response to pressure from lawmakers (particularly over the last few years, as the dangers of disinformation and political interference have become more prevalent).

But in some cases, the specific content within an image would land it in something of a grey area, and a decision would be more down to a judgement call than anything else. Today, such decisions are carried out on social media platforms by a mixture of artificial intelligence and human moderators. The former may still be in its infancy, but is said to be adept at tackling nudity, to the tune of being able to correctly identify and automatically remove 99.2% of offending images. Nevertheless, some level of human moderation is still required for other types of problematic content – and as this has become more of an issue, reports of the effects of this content on moderators have made for disturbing reading. 

Last year, The Guardian reported that contractors tasked with moderating content on Facebook claimed to have witnessed colleagues becoming addicted to extreme graphic content and hoarding it for themselves (a claim Facebook denies), as well as being influenced by the hateful, far-right material they were supposed to be vetting. 

Earlier this year, Facebook paid $52m to moderators who claimed their work had led them to develop mental health issues, with some claiming to have experienced symptoms of post-traumatic stress disorder (PTSD). As the BBC reported at the time, one contractor who recruits such moderators had started to ask workers to sign a form acknowledging that they understood this work could lead to PTSD.

This issue is not new; Facebook has previously been sued for similar reasons. Nor is it specific to Facebook; Microsoft faced a similar lawsuit back in 2017, while YouTube is currently being sued by a former moderator who claims that it had failed to “provide a safe workplace for the thousands of contractors that scrub YouTube’s platform of disturbing content.” 

Confidentiality agreements may explain why we haven’t heard more about this issue. During a Periscope stream with The Real Facebook Oversight Board, one ex-moderator who moderated content on behalf of Facebook explained: “I think the biggest problem is NDAs, which can be held over your head … which can make it difficult to speak out about anything.” This raises an intriguing question: without reports of these lawsuits, would we know anything about this at all?

Clearly, a balance needs to be struck between human and AI moderation so that the general public is sufficiently protected without it creating such severe issues for a handful of individuals. But what are the chances of AI being able to take over completely?

Facebook, along with Twitter and YouTube, appears to have relied more on AI during the ongoing pandemic. It has previously stated, in response to another lawsuit, that it wanted to move towards this model. This was two years ago, and the fact that human moderators are still being used suggests that either the technology isn’t quite there yet or that the threats are changing too rapidly (or, more likely, a combination of the two). Furthermore, while a deeper shift towards AI sounds like a positive solution for moderators, concerns remain. “My guess is that as AI gets better at recognizing patterns in stuff that’s constantly posted … we’ll just get more of the extreme borderline content and have to make harder decisions more often,” the ex-moderator stated.

Another ex-moderator on the same live-stream agreed. “At the time [AI] didn’t seem like the best indicator as to what was violating or not. It felt like job security for sure … As soon as you recognize the pattern, the internet will change the pattern. It’ll work and get better, but there will always be borderline [content]. We’re good at that as people. The algorithm and bots are only going to do so much. You’ll still need a content moderator – always.”  

Whatever ratio of AI and human moderation is used, the reality is that, right now, people who we have never met are watching some of the most disturbing content online to ensure it gets nowhere near us. These platforms may claim to support these individuals, but these lawsuits, together with the comments from ex-moderators, indicate that this is something they are still grappling with.

How social platforms are making things worse

Preventing images that shouldn’t exist from reaching us is one thing, but freely providing tools for downloading other people’s lawful images only compounds the problems online users face. 

Anyone using Facebook will usually see a Download button next to images posted by friends and other accounts, and images that do not have this (because of privacy settings) can often still be stolen by conventional means. It’s not even necessary to be someone’s friend to have access to this control on their images. While security options can be customized, it’s possible to download images from profiles found through search, or when a connection shares someone else’s image. At the time of writing, default privacy settings give friends of friends the same kind of access to this content that friends have, and the privacy tour that new users are invited to take to better understand the settings available to them is entirely optional.

Even if no malice is intended in downloading such an image, the presence of such a control shows little regard for the protection of an image owner’s content or their copyright. UK and US copyright laws both detail a handful of scenarios in which other people’s images may legitimately be copied and used, but the narrowness of these exemptions makes the provision of this button puzzling.

Such issues are problematic on other platforms too, albeit to a lesser extent. Images cannot be right-clicked or downloaded with a dedicated control from (Facebook-owned) Instagram, for example, although screenshots are possible and these images are easily accessible in the page’s source code. Twitter also doesn’t have a download button of any sort, but right-clicks, drag-and-drop saving and screenshots are all allowed. 

Photo-hosting site Flickr has similarly problematic controls. At the time of writing, it displays a license type underneath images uploaded to its platform, and the default option is All Rights Reserved. This, as it explains, means that:

“You, the copyright holder, reserve all rights provided by copyright law, such as the right to make copies, distribute your work, perform your work, license, or otherwise exploit your work; no rights are waived under this license.”

And yet, just to the side of this license is a “download this photo” button. Not only that, but clicking on this presents a range of image sizes to choose from.

To its credit, Flickr does support a range of other licensing options and allows users to prevent direct downloads in the manner described above. Blocking right-clicks and drag-and-drop actions prevents the image from being saved directly from the page, but there is no protection against screenshots, and the image remains available in the page’s source code. Even if users do choose the more secure option, finding a page with the image in a range of sizes is straightforward enough. Anyone intent on stealing such an image can do so without much effort.

Mobile challenges

Images are, of course, stolen from other channels outside of social media sites, and this problem extends to the mobile devices we use.

Some of the most common browsers used on smartphones and tablets, from Google Chrome and Microsoft Edge through to Firefox and Brave, have a context menu that appears upon a long press on an image, one that looks much like a right-click popup on a computer. While these menus differ slightly from one another, the option to download the image directly – or at least open it in a new tab where it’s isolated from other page content – is typically provided among these options. What are the chances that someone is using this to download their own image versus the chance that it’s being used to download someone else’s image without their permission or knowledge? 

Even if browsers prohibit these kinds of actions, the option to screenshot images on mobile devices will typically be available. As users of Snapchat, and certain banking and payment apps, may already know, screenshot detection and/or blocking has been around for some time, although it can be circumvented and is lacking from many of the apps that would particularly benefit from it.

Dating apps are an obvious example. While the threats associated with these have traditionally centered on the physical dangers of meeting strangers in person, image theft brings with it additional problems that don’t necessarily rely on any physical contact.

These include catfishing, which typically sees a new profile set up with stolen images in a bid to gain users’ trust and persuade them to divulge personal or sensitive information (which obviously includes images). Earlier this year, it was reported that over 70,000 images had been scraped from Tinder and found on a cyber-crime forum, for unknown reasons. This came less than three years after a user claimed to have exploited Tinder’s API to scrape 40,000 images in order to create a facial dataset. Incidentally, these figures come nowhere near the 3bn or so images said to have been scraped by a start-up from Facebook, YouTube, Venmo and other sites for the same reason.

These are extreme examples, but the ability to scrape this quantity of images in one go affects enough people for isolated incidents to be significant issues. More everyday image theft will typically be carried out by conventional image saving or screenshots, and at the time of writing, the majority of popular dating apps, such as Hinge, Tinder, Bumble, Plenty of Fish and OkCupid do not notify users when someone has taken a screenshot of their image. One notable exception, however, is Grindr, which recently introduced this as an optional setting.

Taking responsibility

Any kind of solution to these issues must balance security with practicality. Suggesting that people simply stop posting images online, or never use dating apps, is not the answer. But drawing their attention to the fact that they can never be completely sure that an image posted online or through an app only exists where it was originally published may be a sobering enough thought for them to reconsider what they share, and how they share it, in the first place.

So, unless images are published in a way that provides robust protection against being downloaded, the questions to be asked are: What is being posted? Where is it being posted and how? How easily can it be seen by others and stolen? And does the person who is publishing this content understand the possible risks in doing so?

While many would be reluctant to give up using social media platforms they’ve become accustomed to, at the very minimum, it’s a good idea to review the current safety tools on offer. These change over time, and it’s easy to overlook new features that may protect accounts and their users as they are introduced, so it’s worth checking current documentation to understand how secure your social media accounts actually are. 

Running through friend lists for any duplicate contacts, or others that in some way don’t look right, is also a good idea; you may well trust the hundreds of connections you have on a social media account, but it only takes one account being compromised for problems to start. Noticing spam being posted by a friend suggests their account has been subject to such an attack, which means it might be best to report the activity and unfriend them, and to notify them of this through a different channel. 

It may also be worth checking older inactive social media accounts that may still host images. If you’re particularly concerned about a specific image and its availability online, you may wish to perform a reverse search for it through Google Images to see if it can be found somewhere online. 

Changing passwords on a regular basis is also a good idea, as is using different passwords across different accounts and enabling multi-factor authentication where possible. The password-saving options within many browsers can be used to save longer and more complex passwords, although third-party tools are also available for this. 

If you have a tendency to use the same passwords across multiple sites, or have done so in the past, it’s worth investigating whether your details may have been leaked at some point. Have I Been Pwned allows you to check whether your email address and other personal information may have been exposed in a historic data breach.
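For those comfortable with a little code, Have I Been Pwned also offers a free Pwned Passwords API that uses a k-anonymity scheme: only the first five characters of a password’s SHA-1 hash are ever sent to the service. The short Python sketch below illustrates how such a check might look – the helper name pwned_count is our own, and this is an illustration of the range API rather than a vetted security tool.

```python
import hashlib
import urllib.request


def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breaches,
    using Have I Been Pwned's k-anonymity range API."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the five-character hash prefix is sent; the password itself
    # never leaves your machine.
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as response:
        for line in response.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0


if __name__ == "__main__":
    # A famously common password should return a very large count.
    print(pwned_count("password123"))
```

Note that, at the time of writing, checking whether an email address appears in breach data uses a separate HIBP endpoint that requires an API key, so a password check like this complements, rather than replaces, the email search available on the website.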

Final thoughts

Combating the threats outlined above is a significant and complex challenge. They differ from each other with respect to their targets, the nature of their operation, and the intended consequences. The fact that many of these may be considered to be harmful without quite straying into illegality makes them all the more difficult to police. But when we consider just how damaging these can be, and how many people they stand to affect, few would argue that the protections we have right now are sufficient.

No single solution will prevent or stop every threat, and new measures aimed at curbing these often raise questions that aren’t easy to answer. To what degree should we expect age verification systems to work in practice? How can developers of encrypted messaging apps provide their services while complying with the authorities? How do we define hate? And who gets to define it?

Success will depend on a number of factors. Educating online audiences – particularly younger users – on threats and making sure that everyone understands the tools available to combat them is important. Effective enforcement of codes of conduct from regulators will also be key. The ongoing development of a new content attribution model, which promises to deliver greater transparency over online content courtesy of the Content Authenticity Initiative, is also very encouraging. 

It’s also conceivable that, as AI improves, smartphone manufacturers may be required to use these tools to detect images that are likely to be problematic as soon as they are captured. Such a move would no doubt be difficult, and would be met with plenty of opposition and privacy concerns, but when you consider the accuracy with which AI tools are currently able to detect nudity, it’s easy to see things moving in this direction.

But unless the ease with which images can currently travel online is addressed, many threats will remain. Prevention, as the maxim goes, is better than cure, and too little has been done to stop image theft in the thirty or so years since the internet became commercially available. Now, we’re seeing the consequences.
