Twitter recently announced an expansion of its private information policy to include media. But what does this mean for users? And could more be done?

The unauthorized distribution of private media online is an ongoing issue, and the damage it causes can take many forms. Arguably, though, nowhere is it more damaging than on social media.

Billions of images are posted to social media platforms every day, virtually all of them unprotected against theft. A simple right-click or drag-and-drop is all it takes to copy an image, which can then be distributed anywhere, in any context, for any purpose.

We’ve written extensively about the importance of protecting images of you and images you own online, and the consequences you could face if you don’t.

Learn more: How to protect your images on Facebook, Twitter and other social media sites

In some cases, it’s possible to see the funny side of this. B.J. Novak, star of The Office, for example, was amused to discover his face being used to market products around the world without his knowledge or consent.

At the other end of the spectrum, however, it can destroy reputations, aid radicalization, and undermine democracy among other things.

We have long believed that more needs to be done to prevent this issue at source, so we were pleased to hear Twitter’s recent announcement that it will be expanding its private information policy to include media.   

What changes has Twitter made?

In a recent blog post, Twitter announced that it is adding ‘private media’ to the list of things that cannot be shared on its platform without the owner’s permission.

An important thing to note here is that this isn’t about protecting the creator or copyright owner of the media – that’s covered in Twitter’s copyright policy. Instead, these changes protect the person featured in the media itself, aiming to prevent the sharing of “media of private individuals without the permission of the person(s) depicted.”

There are, of course, already general rules in place to prevent abusive behavior, which cover the use of images in this way, plus a non-consensual nudity policy to prevent someone’s intimate photos from being shared without their permission.

However, the existing rules revolve around motive. The key difference with this update is that the rules now apply to any image shared without the permission of the private individual featured, regardless of intention.

How do the changes affect news media?

There is a caveat to Twitter’s policy changes, which is that they do not apply to “media featuring public figures or individuals when media and accompanying Tweet text are shared in the public interest or add value to public discourse.”

You can interpret this as you will, but Twitter does follow it up by confirming that all images of public figures are still covered by the aforementioned abusive behavior and non-consensual nudity policies. 

How is it enforced?

To detect and police this issue, Twitter first requires a first-person report from the individual depicted, or a report from an authorized representative, confirming that the media was shared without consent.

Once this report has been received, Twitter will act in accordance with its range of enforcement options, which can be found here.

How effective is it?

While Twitter’s intentions are undoubtedly good, the new rules have already been subject to abuse, as reported by The Washington Post.

Just two days after the changes were made, Twitter mistakenly suspended the accounts of a number of anti-extremism researchers and journalists, following an influx of “coordinated and malicious reports” from far-right extremists seeking to have images of themselves removed from those accounts.

In a follow-up report, Twitter admitted its errors and said it is working to fix the issues and ensure the rule is used as intended, but it’s clear there are still teething problems to address.

Unanswered questions

This abuse stems from a lack of clarification as to exactly what constitutes a breach of these new rules. In Twitter’s own words, “this update will allow us to take action on media that is shared without any explicit abusive content, provided it’s posted without the consent of the person depicted.”

Such a vague explanation has been met with criticism, as it seems to invite more questions than it answers.

The policy wording itself does attempt to offer more clarity by listing the following as reasons for media not being in violation:

– the media is publicly available or is being covered by mainstream media;
– the media and the accompanying tweet text add value to the public discourse or are shared in the public interest;
– the media contains eyewitness accounts or on-the-ground reports from developing events;
– the subject of the media is a public figure.

However, this still leaves many potential questions from photographers unanswered.

What, for example, constitutes consent? How exactly is permission proved? To what extent does someone need to be in the image for permission to be necessary? What about someone captured in a street scene? What if they are not recognizable – does permission still need to be sought in this instance?

Until these questions are answered, we would expect to see more erroneous account suspensions and, as a result, legitimate journalists and activists leaving the social network in search of more favorable platforms. The irony is that this is only likely to reduce the amount of genuine, informative, newsworthy content on Twitter, which is the very content the platform is trying to promote.

Part of a bigger picture

In recent years, Twitter’s association with fake news has been widely reported. Who could forget the spread of disinformation in the run-up to the 2016 US presidential election, or Donald Trump’s subsequent use of the platform? Twitter eventually took action by suspending Trump’s official account and, with the ban still in place, it’s clearly not taking any chances.

Twitter is also working with Adobe, Microsoft, the BBC, The New York Times and a long list of other huge names in tech and publishing to establish a new standard for media provenance across the wider web. The collaboration is called the Coalition for Content Provenance and Authenticity (C2PA), and it has recently released its draft specification to the public.

Of course, provenance data doesn’t prevent someone from sharing media that features you without your permission, but it is currently being integrated with image-streaming technology, which would add a new level of image protection.

If all images were streamed from a single master copy, much like a YouTube video, far fewer copies of offending images would exist, in turn making the new rules much easier to police.
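To make the idea more concrete, here is a minimal sketch of how serving every view from one master copy might work, assuming a hypothetical streaming service that grants access through short-lived, signed URLs rather than handing out permanent files. The service name, endpoints and parameters below are purely illustrative and are not part of any real Twitter, C2PA or image-streaming implementation.

    import hashlib
    import hmac
    import time

    # Hypothetical signing key, known only to the image-streaming service.
    SECRET_KEY = b"replace-with-a-real-secret"

    def make_streaming_url(image_id: str, viewer_id: str, ttl_seconds: int = 60) -> str:
        """Build a short-lived, signed URL for one view of a single master image.

        Because every view goes through a URL like this, access can be logged,
        expired and revoked centrally, rather than relying on permanent copies
        of the file spreading across the web.
        """
        expires = int(time.time()) + ttl_seconds
        payload = f"{image_id}:{viewer_id}:{expires}".encode()
        signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        return (
            f"https://stream.example.com/images/{image_id}"
            f"?viewer={viewer_id}&expires={expires}&sig={signature}"
        )

    def verify_streaming_request(image_id: str, viewer_id: str, expires: int, sig: str) -> bool:
        """Server-side check: refuse to serve the master copy if the URL is expired or forged."""
        if time.time() > expires:
            return False
        payload = f"{image_id}:{viewer_id}:{expires}".encode()
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, sig)

    if __name__ == "__main__":
        print(make_streaming_url("img_12345", "viewer_67890"))

In a model like this, an image found to breach the rules could be revoked at the single master copy and disappear everywhere at once, which is what would make enforcement so much simpler than chasing freely copied files.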

In the future we hope to see Twitter join the dots between these new technologies and its tighter private information policy to reach a truly holistic solution.

Conclusion

We welcome this change and believe it is another step in the right direction, but it’s clear that more clarification as to what constitutes a breach of the rules – and how a breach should be proven – is necessary.

However, even with absolute clarity, this policy change only treats the symptoms of the issue; it does nothing to prevent it. We believe the answer is to go a step further.

By combining these new rules with image-streaming technology to secure images against theft and C2PA data to prove an image’s origin, Twitter could lead the way in creating a truly safe media ecosystem.

Users could make an informed decision about whether to trust what they see, and could rest assured that what they share is protected. And if a streamed image breaches the rules, it could be quickly and easily traced to the source without any additional copies being made.

In a digital world where fake news and online abuse are rife, this would not only be a welcome evolution of the social media landscape, but would also provide a model for the future of internet-wide image display.

Learn how image streaming benefits content owners, publishers and advertisers around the world.