Downsampling is a common practice among those who want to protect their images online, but it comes with a drawback that many are only starting to realize.
As people began to realize how easily their images could be stolen from their websites, blogs and social media platforms, they turned to a number of different methods to discourage this.
Perhaps the most popular of these has been the digital watermark. Not only does a watermark make it clear that an image has an owner and/or copyright holder, but its visibility also complicates things for anyone wishing to republish such an image elsewhere.
While watermarks have been shown to have some effect, they are not entirely reliable as a means of preventing unauthorized image use. It’s easy to find instances of watermarked images being used regardless, and stories of photographers who have discovered their watermarks cloned or cropped out.
Reducing the resolution of images, also known as downsampling, has been another measure adopted by photographers for the same reason, often (but not always) used in conjunction with the watermark.
The reasoning behind this is that an image with a relatively low resolution has less value to a potential thief than if it were to retain its original pixel count.
Such an image, for example, may be suitable for online display, but it may fall short of the requirements for printing.
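The print limitation is easy to quantify. The sketch below assumes a conventional 300ppi print standard and a hypothetical 32MP frame of 6,880 x 4,616 pixels; the figures are illustrative rather than tied to any particular camera.

```python
# Rough sketch: how downsampling limits print size.
# Assumes a typical 300ppi print standard; dimensions are illustrative.

def max_print_size(width_px, height_px, ppi=300):
    """Largest print (in inches) an image supports at a given ppi."""
    return (width_px / ppi, height_px / ppi)

# A hypothetical 32MP original (6,880 x 4,616 px)
print(max_print_size(6880, 4616))   # roughly 22.9 x 15.4 inches

# The same image downsampled to 1,200 px wide for the web
print(max_print_size(1200, 800))    # about 4 x 2.7 inches -- fine on screen, poor in print
```

In other words, the downsampled copy still looks acceptable in a browser, but it no longer supports anything beyond a small print, which is precisely the point of the practice.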
This practice stems from a time when high-resolution digital cameras were the preserve of professional photographers, and when such images were not widely available. Anyone wanting to use these would therefore have to purchase a license or come to some other arrangement with their creator.
Read more: Copyright and images – What you need to know
Today, however, even basic cameras and smartphones can capture images that not only exceed the resolution requirements of our computer displays, but are also sufficiently high in resolution to be printed to large sizes (even after some modest cropping). Purely from the perspective of pixel resolution, consumer and professional equipment is, broadly speaking, on a par.
There are, of course, many further variables that distinguish images captured using costly professional equipment from those captured by consumer devices, such as the quality of the lens, the type of sensor and so on. But the professional’s main advantage is the effort that goes into capturing the image.
This includes their creative vision and determination, the steps they will take to ensure images are technically sound, and the access they may have to certain tools or individuals needed to make an image a reality.
Anyone can capture an image at a certain resolution, but not everyone will be prepared to wake before dawn for the best light, or wait in adverse weather for the perfect moment, or train themselves to use processing tools for the most professional finish.
As long as this is the case, the professional’s inclination to protect their images – and the temptation to downsample them before online publication – will remain.
A logical move? Or a shortsighted approach?
Even if we put aside the issues with online security, there are many sound reasons to downsample images.
First, for most applications, there is simply no need to publish images at their maximum resolution. If your camera captures 32MP images, but the display on which it will be viewed has a resolution equivalent to around 2 or 4MP – which is what today’s average laptop provides – the average user will simply have no need for all that extra information, particularly when you consider that they will typically be viewing these images within only a small proportion of their full display.
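The gap is stark when expressed in megapixels. A minimal check, using the display figures mentioned above:

```python
# Quick check of the megapixel gap between a camera and a display.
# The 32MP camera and the display resolutions match those cited in the text.

def megapixels(width_px, height_px):
    return width_px * height_px / 1_000_000

camera_mp = 32
full_hd = megapixels(1920, 1080)    # ~2.1MP -- a typical laptop panel
qhd_plus = megapixels(2560, 1600)   # ~4.1MP -- a higher-end panel

print(f"A Full HD display can show only {full_hd / camera_mp:.0%} of a 32MP capture")
```

Even before accounting for the fact that most images occupy only part of the screen, a Full HD panel can display only around 6% of the pixels a 32MP camera records.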
Another reason is that weightier images can slow down page load times, which can adversely affect both the user experience and your website’s SEO. If a user is waiting for more than a few seconds for your website to load, there’s a good chance they’ll simply leave.
Read more: 8 ways to optimize images for search engines
Downsampling images, therefore, appears to make sense. So what’s the problem?
These images aren’t just being viewed today; they’ll also be viewed in the future. And tomorrow’s online audience will not tolerate images uploaded at a resolution that was only sufficient for yesterday’s displays.
Our computers, laptops, tablets and phones have higher-resolution displays than they did a few years ago. As these displays continue to improve, images uploaded at a lower-than-ideal resolution face one of two fates. Either they will be stretched to fill this new area (which obviously degrades image quality), or they will simply appear small relative to other images (which harms the user experience).
This isn’t new – examples are easy to find
Here’s an example from 2004. The image in this article, which measures 128 x 128 pixels, was fine at the time as technology didn’t demand anything better. But today, you can barely make out the subjects within it, even with the less demanding display of a typical smartphone. And this will only get worse as technology continues to improve.
Of course, 2004 was a long time ago. So what about more recent examples?
These images from an article in The Guardian in 2010 don’t fare much better, and are further marred by heavy compression artefacts, while this example from the Independent in 2011 is noticeably pixellated too.
An example on the BBC News website from 2012 shows a marked improvement, with the images within the article measuring 976 pixels across. Even so, the difference in quality between these images and those found within an equivalent article published today is clear.
What resolution is my screen?
These figures don’t mean much unless we take today’s average computer and tablet displays into account.
Display resolution of common tablets, laptops and desktop computers (early 2021)
Device | Display size | Resolution |
---|---|---|
Apple iMac (21.5in) | 21.5in | 1,920 x 1,080 (102ppi) |
Apple iMac (21.5in, Retina 4K) | 21.5in | 4,096 x 2,304 (219ppi) |
Apple iMac (27in, Retina 5K) | 27in | 5,120 x 2,880 (218ppi) |
Dell Inspiron 15 3000 | 15.6in | 1,920 x 1,080 (141ppi) |
HP Envy 13 | 13.3in | 1,920 x 1,080 (165ppi) |
HP Envy 13 (QHD) | 13.3in | 3,200 x 1,800 (276ppi) |
Apple iPad Air (2020) | 10.9in | 2,360 x 1,640 (264ppi) |
Samsung Galaxy Tab S7 | 11in | 2,560 x 1,600 (274ppi) |
LG Gram | 17in | 2,560 x 1,600 (178ppi) |
Microsoft Surface Laptop 3 | 15in | 2,496 x 1,664 (201ppi) |
Microsoft Surface Pro 7 | 12.3in | 2,736 x 1,824 (267ppi) |
As we can see, the display resolutions of devices commonly used today vary considerably, from around 1,920 x 1,080 (Full HD) at the lower end to around 5,120 x 2,880 at the upper end.
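The ppi figures in the table follow directly from the pixel dimensions and the diagonal screen size, so they are easy to verify. A minimal sketch:

```python
import math

# Sanity-check the ppi figures in the table above:
# ppi = diagonal resolution in pixels / diagonal screen size in inches.

def ppi(width_px, height_px, diagonal_in):
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(1920, 1080, 21.5)))   # 102 -- matches the 21.5in iMac
print(round(ppi(5120, 2880, 27)))     # 218 -- matches the 27in Retina 5K iMac
print(round(ppi(2736, 1824, 12.3)))   # 267 -- matches the Surface Pro 7
```

The same formula shows why a small phone screen can look sharper than a large desktop monitor of nominally higher resolution: pixel density, not pixel count, is what the eye perceives.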
This isn’t intended to be a guide for the dimensions of your images, though. While full-screen viewing may be more common than it used to be, most images will not typically be viewed across the full dimensions of a display, but only a small proportion of it.
Nevertheless, as more of us start to adopt devices with high-resolution displays, we will start to see these issues with older images with increasing frequency.
What does the future look like for older images?
The images in the examples provided above are all found on well-known news sites, and their publishers are more likely to adhere to a certain set of guidelines – and be more forward-thinking – than the average user with a personal website.
In other words, for every news article such as those above, there will be countless personal websites whose images are now suffering the same fate.
So how does this all play out? These images, and the websites on which they are found, will all likely go in one of three directions.
The first, and most likely, option is that the images remain in place and the websites on which they are hosted end up showing their age sooner, whether it’s because these images have been expanded to fill a certain container width, or because they are displayed at a small size. In either case, image quality is likely to be less than ideal.
The second outcome is that website owners update these images with higher-resolution versions as technology moves on. Given the time and effort this takes, and the necessity of the website owner to appreciate how these images appear to others, this is considerably less likely than the first option.
Finally, the third outcome is that images that show visible degradation are simply removed from web pages at a certain point. This sounds far-fetched, and requires extra work, but the lack of images on certain news portals (such as The Independent) suggests this is already happening.
All of this makes the use of high-resolution images to begin with more appealing. This has the obvious advantage of visual quality and longevity, but it requires a trade-off between these benefits and the security and page-load time issues discussed above.
But perhaps there’s an option that solves all of the problems above?
Indeed, looking at the other images and graphics on these news pages makes the solution obvious.
Rather than uploading a static image and forgetting about it, the publisher uses a container that can be dynamically – and automatically – updated with a new version of the image, as and when it’s required.
By taking the characteristics of the display on which the image is being viewed into account, such a container can automatically pull through the most appropriate resolution from the single high-resolution image that was initially uploaded. By doing this, the image owner effectively futureproofs the images on their website.
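The selection logic behind such a container can be sketched in a few lines. This is an illustrative sketch only: the variant widths and the `pick_variant` helper are hypothetical, not the API of any particular image service.

```python
# Illustrative sketch: how a responsive image service might pick the best
# stored rendition for a given display. The rendition widths and the
# pick_variant helper are hypothetical, not any real service's API.

VARIANT_WIDTHS = [480, 960, 1920, 3840]   # renditions derived from one high-res upload

def pick_variant(css_width_px, device_pixel_ratio=1.0):
    """Return the smallest rendition at least as wide as the physical pixels needed."""
    needed = css_width_px * device_pixel_ratio
    for width in VARIANT_WIDTHS:
        if width >= needed:
            return width
    return VARIANT_WIDTHS[-1]   # never upscale beyond the largest rendition

print(pick_variant(640))        # 960  -- a standard-density display
print(pick_variant(640, 2.0))   # 1920 -- the same layout on a high-density display
```

Because the renditions are all derived from the single high-resolution original, a sharper future display simply triggers a larger rendition; nothing on the page itself needs to change.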
Furthermore, streaming such an image from a centralized location protects it from theft in a way an embedded image is not. It also means the image doesn’t add weight to the page in the way a directly embedded high-resolution file would, so page load times don’t suffer.
This approach ensures that as technology improves over time, or as a user upgrades to a new high-resolution monitor today, they will continue to be served a version of the image that appears best for that particular display, one that’s protected against theft.