As cyber attacks have become more sophisticated, the ways in which we secure our personal devices and keep private information safe have had to evolve too.
Whereas once a simple username/password combination was considered adequate, the tools we have to prevent unauthorized access to our phones, computers and online accounts today are many steps ahead of what they used to be.
Many of these are now rolled into a two-factor authentication system, which requires you to verify your identity in two separate ways. This typically draws its strength from the fact that it demands one piece of information that’s known to you alongside another that’s only revealed at the time it’s required. The latter, known as a one-time password, will typically have a limited period of validity for even greater protection.
So, as an example, the first factor could be a personal password, while the second might be a unique code generated on demand. Banks, for instance, have made use of this by issuing their customers with physical card readers that create one-time PINs.
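To make the "limited period of validity" concrete, here is a minimal sketch of how a time-based one-time password can be generated, following the general approach of RFC 6238 (the scheme used by most authenticator apps). The shared secret and parameters below are illustrative, not taken from any real service:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, t=None, step=30, digits=6):
    """Time-based one-time password: an HMAC over the current 30-second window,
    truncated to a short numeric code. The code changes when the window rolls over."""
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Any two calls within the same 30-second window yield the same code;
# once the window passes, the old code is useless to an attacker.
print(totp(b"illustrative-shared-secret"))
```

Both the server and the user's device hold the same secret and compute the same code independently, which is why the code never needs to travel over the network ahead of time.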
In steps biometrics
Two-factor authentication continues to be used in various forms and combinations, with the second, unknown code typically created by a dedicated authenticator app or delivered by text message. But today, it’s biometric information that’s seen as the more reliable means by which someone can prove they are who they claim to be.
Whether it’s a physiological reading, such as a fingerprint, or face or iris recognition, or a behavioural one like voice recognition or keystroke dynamics, these details are as personal as it gets – and so it follows that they would be the most secure way of protecting someone’s online accounts and identity. Guessing a password, or using a program to repeatedly try one until it works, is one thing – fooling a machine into recognizing someone else’s iris is quite another.
And yet, many well-publicised flaws with such biometric verification systems have shown that they aren’t quite as bulletproof as most of us imagine they ought to be.
Last month, for example, Samsung rushed out a security patch to fix an issue with its Galaxy S10 and Note 10 devices, whose fingerprint scanners were found to be easily fooled by unauthorized users. The issue only came to light after a Galaxy S10 owner discovered that her husband could unlock her phone with his thumb, thanks to an impression of her thumbprint in a third-party screen protector covering the display.
Google also had its own woes with its most recent Pixel 4 smartphone. The company confirmed that the device's Face Unlock feature can unlock the phone even when the subject has their eyes closed. While Google has claimed the feature is secure enough as it stands, many will no doubt consider it less secure than rival systems, such as Apple's Face ID, which requires the user's eyes to be open. Unsurprisingly, a fix is said to be in the pipeline.
These are, incidentally, not new concerns. The idea of being able to trick a device’s face recognition system with a photograph rather than an actual face has been discussed for some time, and proven on a number of occasions. As the New York Times reported in 2017, research carried out by New York University and Michigan State University also suggests that fingerprint scanners can be tricked by composite fingerprints that combine many common features:
“Full human fingerprints are difficult to falsify, but the finger scanners on phones are so small that they read only partial fingerprints. When a user sets up fingerprint security on an Apple iPhone or a phone that runs Google’s Android software, the phone typically takes eight to 10 images of a finger to make it easier to make a match. And many users record more than one finger — say, the thumb and forefinger of each hand. Since a finger swipe has to match only one stored image to unlock the phone, the system is vulnerable to false matches.”
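The quoted point about multiple stored images can be made concrete with some rough arithmetic. The per-image false-match rate below is purely illustrative; the image and finger counts follow the figures in the quote:

```python
# If a forged partial print matches any single stored image with probability p,
# and the phone stores k images per finger for f enrolled fingers, the attacker
# only needs ONE of the k * f comparisons to succeed.
p = 0.0001   # illustrative per-image false-match rate (not a measured figure)
k = 10       # images captured per enrolled finger, per the quote
f = 4        # enrolled fingers, e.g. the thumb and forefinger of each hand

p_no_match = (1 - p) ** (k * f)  # probability that every comparison fails
p_unlock = 1 - p_no_match        # probability that at least one comparison matches

print(f"effective false-match rate across all stored images: {p_unlock:.4%}")
```

Whatever the true per-image rate, the effective rate grows with every extra stored image, which is the vulnerability the researchers were pointing at.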
What do we expect?
So why have there been so many issues with these supposedly secure systems?
First, perhaps our expectations are partly to blame. Do we expect these systems to be impenetrable barriers against unauthorized access to our personal devices and details, or merely more convenient ways of accessing them? No system is completely without vulnerabilities, and if there is a weakness of some sort, someone, somewhere, may well exploit it as soon as they know how.
Part of the issue may also be down to increased competition between smartphone manufacturers. With so many companies now battling against each other to bring their devices to market before their rivals, and each aiming to include increasingly sophisticated methods of verifying a user's identity, it's quite possible that these features don't undergo the rigorous testing we would expect.
The more devices there are, the greater the chance that one or more will have a weakness that undermines how a technology is viewed as a whole. As the Samsung case shows, the availability of third-party accessories that haven't been given any kind of approval by the device's manufacturer is also sometimes to blame.
These issues are a particular concern with smartphones and tablets, as you will typically be using the same fingerprint or thumbprint to log into the device as you would to log into the various apps installed on it. (This is also true of laptops that incorporate fingerprint scanners, albeit to a lesser extent.) But for most people, the convenience wins them over; you can't exactly forget your thumbprint.
All of this is compounded by the separate, but equally vital, issue of exactly how much we can entrust our information – biometric and otherwise – to social media sites, smartphone companies and others. We expect this to be encrypted to high standards, but the truth is we don’t know exactly how well this is protected – or how easily it may be decrypted if accessed.
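As a point of comparison for what "encrypted to high standards" might look like in practice, here is a minimal sketch of how a service can store a password-style secret so that a database breach doesn't expose the raw value, using a salted, deliberately slow key derivation. This is an illustration only; biometric templates are handled differently in practice, typically in dedicated secure hardware on the device itself:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor; higher is slower to brute-force

def hash_secret(secret, salt=None):
    """Derive a salted, slow hash so a leaked database doesn't reveal the secret."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret, salt, ITERATIONS)
    return salt, digest

def verify(secret, salt, stored):
    """Recompute the derivation and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", secret, salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_secret(b"correct horse battery staple")
print(verify(b"correct horse battery staple", salt, stored))
print(verify(b"wrong guess", salt, stored))
```

The point of the random salt and the high iteration count is that even if attackers steal the stored digests, each one must be attacked individually and slowly, rather than looked up in a precomputed table.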
A slew of data breaches and hacking incidents over the years have highlighted the need for more robust methods of safeguarding sensitive parts of our online activities. And in recent years, it seems no company or organisation has been safe from attack. From Apple, Uber and Tumblr through to dating website Ashley Madison and even Radiohead, those with something valuable to protect have quickly learnt what happens when vulnerabilities in their systems are exploited. When the information of millions of people is involved, that can often lead to a considerable financial penalty on top of a compromised brand image.
Naturally there’s not much we can do about how a company whose services we use stores and encrypts the information it has on us (aside from not using them to begin with, of course). But it does make you think twice about whether the conveniences of everyday protection we take for granted are eclipsing the flaws that continue to be found within them.