From loss prevention and social infrastructure maintenance through to combating COVID-19, artificial intelligence is being used by the biggest imaging brands in surprising ways. We take a look at what kind of difference it’s making.

Everyone has heard of the term AI, but different people will have very different ideas of what it actually means. What image first comes to mind? Self-driving cars? Chatbots? Those terrifying Boston Dynamics videos?

With so many brands now claiming to harness AI tools for their products and services, it’s no wonder there’s no consensus.

Some people may appreciate that they’re already taking advantage of it when they ask Siri or Alexa a question, or when automatically suggested words complete their sentences in Google Docs.

We’ve had surprisingly readable articles written by AI, and songs composed by AI too. But the more we dig into where AI is being used today, the more we start to appreciate the extent to which imaging-based AI is being relied upon by many other industries. But before we take a closer look at what’s going on, it’s probably best to answer the question: what is the difference between AI, machine learning and deep learning?

Artificial intelligence vs machine learning vs deep learning

Artificial intelligence is the umbrella term for a technology or algorithm designed to mimic human behavior that requires a certain degree of intelligence.

Machine learning is a subset of artificial intelligence, and describes algorithms that analyze and spot patterns in data, which can then be used to form predictions and make suggestions, gradually becoming more accurate as they are fed more data. An example of this is recommended content on streaming services, which is often based on the user’s previous activity and the preferences of others with similar interests.

Finally, deep learning, which is itself a subset of machine learning, makes use of artificial neural networks modeled on the biological neural networks in the human brain. These attempt to mimic how the human brain reaches conclusions from the information it’s given; speech recognition is one area where this is used.
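To make the distinction concrete, here’s a minimal sketch in Python. The first half spots a pattern in some invented viewing data to make a recommendation (machine learning in miniature); the second passes the same data through a tiny two-layer artificial neural network, the building block that deep learning stacks many layers deep. All data and weights here are made up purely for illustration.

```python
# A minimal illustration of the terms above; all data is invented.
import numpy as np

# --- Machine learning: spot patterns in data to make suggestions ---
# Recommend content by finding the existing user whose ratings most
# resemble the new user's (the pattern behind streaming recommendations).
ratings = np.array([
    [5, 4, 0, 1],   # user 0's ratings of four titles
    [4, 5, 1, 0],   # user 1
    [0, 1, 5, 4],   # user 2
])
new_user = np.array([5, 5, 0, 0])
similarity = ratings @ new_user          # dot product as a crude similarity
print("Most similar viewer: user", int(np.argmax(similarity)))

# --- Deep learning: layers of artificial neurons ---
# One forward pass through a tiny two-layer network; speech-recognition
# systems stack many such layers and learn the weights from data.
def relu(x):
    return np.maximum(0, x)              # a simple neuron activation

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))             # input layer -> 8 hidden neurons
W2 = rng.normal(size=(8, 2))             # hidden layer -> 2 outputs
output = relu(new_user @ W1) @ W2
print("Network output:", output)
```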

AI in consumer photography

Photographers are well used to hearing claims about the use of AI in their cameras and software packages. Seasoned users are often skeptical when encountering the term – and for good reason.

While certain image-processing tools appear remarkably intuitive, camera-specific features that claim to make use of AI often look more like evolutions of features that existed long before the term appeared in any marketing materials.

That’s not to say that the use of AI has no benefit here, just that the role of this intelligence is harder to discern, and so claims of its importance are harder to substantiate. Nevertheless, some recent developments have certainly been interesting.

Canon, for example, programmed its Speedlite 470EX-AI flashgun with technology that automatically swivels the flash head as and when it deems it necessary so that it’s always bouncing light off a surface in the most appropriate way for the subject. The company has also used deep learning systems in some of its previous cameras to automatically detect what kind of sports are being captured, so that the camera’s autofocus and subject-tracking characteristics can be automatically fine-tuned to the subject itself.

Sony has even managed to bring AI right down to the imaging sensor itself. Last year, it announced a pair of Intelligent Vision sensors, which it claimed were the world’s first to be equipped with AI processing functionality, removing the need for a separate processor or external memory. It’s currently unclear whether there are plans to use these in a consumer product, but coming from the world’s largest image sensor manufacturer, they certainly point to how sensors may end up being designed as standard in the future.

Adobe has been particularly vocal about the use of AI since the launch of Adobe Sensei, which it describes as an AI and machine learning tool that powers a number of its technologies. 

Photoshop is one beneficiary, and one of its most helpful Sensei-powered tools is Content-Aware Fill, which automatically fills in selected areas of an image based on the surrounding pixels (such as when cloning out obstructions). The feature has now made its way to video editing too, via the After Effects program (below).
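Adobe hasn’t published the internals of Content-Aware Fill, but classical inpainting conveys the core idea: pixels inside a masked region are reconstructed from their surroundings. Here’s a minimal sketch using OpenCV’s built-in Telea inpainting; the file name and mask coordinates are placeholders.

```python
# A rough stand-in for content-aware filling using classical inpainting.
import cv2
import numpy as np

img = cv2.imread("scene.jpg")                  # hypothetical input photo
mask = np.zeros(img.shape[:2], dtype=np.uint8)
cv2.rectangle(mask, (100, 100), (160, 160), 255, -1)  # mark the obstruction

# Reconstruct the masked pixels from the surrounding image content.
filled = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("scene_filled.jpg", filled)
```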

Sensei is also being used in the company’s Photoshop Camera app, which suggests filters and effects in real time based on what’s being captured. More recently, the company broadened its AI-based offerings in Photoshop with Neural Filters, which take automated adjustments to a new level by allowing the user to adjust the intensity of a subject’s smile, their apparent age, skin smoothness and more.

Adobe isn’t simply employing its AI tools to aid image manipulation; it has also carried out research into how deep learning can be used to detect manipulation of images where such editing could be problematic. One way in which it achieves this is by exploiting the tell-tale signs of image editing that would otherwise go unnoticed, such as changes to an image’s noise pattern or the presence of deliberately smooth areas (below).

As the company explains, “although these artifacts are not usually visible to the human eye, they are much more easily detectable through close analysis at the pixel level, or by applying filters that help highlight these changes.” Those following the company’s recent developments will also be aware that one of its areas of focus related to this is the Content Authenticity Initiative, which aims to boost trust in images and other content by making their provenance clear.
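As a rough sketch of the pixel-level analysis described in that quote, the snippet below applies a simple high-pass filter to expose a grayscale image’s noise residual, then flags tiles whose residual is suspiciously smooth. It’s a hand-rolled heuristic for illustration only; Adobe’s actual detectors are trained deep-learning models.

```python
# Illustrative only: flag regions whose noise pattern looks "too clean".
import numpy as np

def noise_residual(gray: np.ndarray) -> np.ndarray:
    """High-pass filter: subtract a local mean, leaving mostly noise."""
    blurred = (
        gray
        + np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
        + np.roll(gray, 1, 1) + np.roll(gray, -1, 1)
    ) / 5.0
    return gray - blurred

def flag_smooth_tiles(gray: np.ndarray, tile: int = 32, factor: float = 0.25):
    """Return tile coordinates whose noise energy is far below the median."""
    residual = noise_residual(gray.astype(np.float64))
    h, w = gray.shape
    energies = {}
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            energies[(y, x)] = residual[y:y + tile, x:x + tile].std()
    cutoff = factor * np.median(list(energies.values()))
    return [pos for pos, e in energies.items() if e < cutoff]

# Synthetic demo: a noisy frame with one artificially smoothed patch.
rng = np.random.default_rng(1)
frame = rng.normal(128, 10, size=(128, 128))
frame[32:64, 32:64] = 128.0          # "edited" region with no sensor noise
print(flag_smooth_tiles(frame))      # reports the suspicious tile
```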

Does AI help or threaten professional photographers?

It seems logical that photographers who take the time to understand how AI is shaping image capture and processing may find it helps them to work faster and more efficiently. Take image tagging, for example: if a subject can be automatically recognized and tagged in an image, the photographer can potentially catalog their images much faster.
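As an indication of how such tagging might be wired up, the sketch below runs a folder of photos through an off-the-shelf pretrained classifier (torchvision’s ResNet-18) and records its top guesses as keywords. The ‘photos’ folder is a placeholder, and commercial cataloging tools rely on their own, far more capable models.

```python
# A hedged sketch of automatic keywording with a pretrained classifier.
from pathlib import Path

import torch
from PIL import Image
from torchvision.models import ResNet18_Weights, resnet18

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()        # matching resize/crop/normalization
labels = weights.meta["categories"]      # the 1,000 ImageNet class names

def tag_image(path: Path, top_k: int = 3) -> list[str]:
    """Return the classifier's top-k guesses as candidate keyword tags."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    return [labels[i] for i in probs.topk(top_k).indices]

for photo in sorted(Path("photos").glob("*.jpg")):  # hypothetical folder
    print(photo.name, "->", ", ".join(tag_image(photo)))
```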

But to what degree is the evolution of these tools a threat to their very livelihood? At least right now, the answer depends in part on the level of creativity involved.

For example, given the many creative decisions involved in portrait or fashion photography, the idea of a machine being able to step in and replace the working photographer (and achieve the same results) doesn’t seem like a reality we’ll have to face any time soon. A machine cannot, of its own volition, travel to a location, meet a client to understand their needs, direct the subject in a certain way and so on. But for ordinary studio photography where the aim is to simply capture a subject faithfully against a clean background, the same cannot be said.

This also means that retouchers, and those who specialize in image processing, would be wise to keep on top of such changes. Italian photography start-up BOOM Image Studio is one company that is aiming to disrupt this space. Its platform combines human involvement with that of machines so that its clients are able to book photoshoots and receive results within 24 hours. Part of this rapid turnaround appears to be down to removing the human element from the image-editing process and letting AI step in to make decisions. It might sound far-fetched, but with $7m raised in a recent Series A funding round, and plans to expand its existing presence in 80 countries to 180, the company certainly seems ambitious enough to make this a reality.

AI in medicine and diagnostics

Perhaps unsurprisingly, many companies that are best known for their photographic products are also involved in medicine and diagnostics. Indeed, for some, these can be a significantly larger part of their businesses than their consumer photography divisions, and many of these now make use of AI.

Olympus, for example, is probably best known for its range of compact and style-focused cameras, but historically, most of its business has been in the life sciences, medical and industrial sectors. Indeed, it no longer owns its imaging division, having sold it at the start of the year. Today, it holds around 70% of the global endoscopy market, and recently launched the ENDO-AID platform, which makes use of AI to detect and diagnose diseases during endoscopic examinations. Olympus claims that, thanks to machine learning, the technology can alert the endoscopist in real time when a suspected colonic lesion appears on the screen.

German optics specialist ZEISS also takes advantage of machine learning for its ZEN Intellesis platform, which deals with complex image segmentation in microscopic images (below). Similarly, Nikon has integrated a variety of AI tools into its NIS-Elements platform to help with microscopic imaging and analysis. These tools help detect the nuclear envelopes of cells that conventional segmentation would miss, as well as removing noise and blur and improving contrast in images.
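For context, ‘conventional’ segmentation of the kind these AI tools improve upon often amounts to little more than a global threshold plus connected-component labeling, as in this brief scikit-image sketch (the micrograph file name is a placeholder):

```python
# Conventional (non-AI) segmentation: Otsu threshold plus labeling.
# Dim or overlapping structures easily fall below the single global
# threshold, which is the failure mode the AI-based tools address.
from skimage import filters, io, measure

img = io.imread("cells.tif", as_gray=True)   # hypothetical micrograph
mask = img > filters.threshold_otsu(img)     # one global threshold
labels = measure.label(mask)                 # connected components
print(f"{labels.max()} candidate regions found")
```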

Nikon’s rival Canon has its own Artificial Intelligence (AI) Research team within its established Medical Systems division, and it too has used AI to assist in pattern recognition, noise reduction and image analysis among other things. Recent research conducted by the team has involved using machine learning to detect falls in elderly patients and specific cancerous tumors in CT scans.

Fujifilm boasts a wide range of subsidiaries covering pharmaceuticals, cellular dynamics and biotechnology among other things, and the company’s use of AI is particularly noteworthy. In 2018, it launched the Fujifilm Creative AI Center ‘Brain(s),’ a center for the research and development of the next generation of AI technologies. Its REiLI AI platform, launched in the same year, has been used in the analysis and detection of pulmonary diseases in COVID-19 patients.

Interestingly, the company is now applying its medical diagnostic imaging know-how to areas outside of healthcare. Its cloud-based Hibimikke platform is centered around AI-based image analysis that inspects cracks in bridges, tunnels and other infrastructure, a process that is otherwise manual and highly time-consuming. The recent announcement of a Fujifilm Group AI Policy – a set of guidelines, rooted in social, ethical and legal concerns, for using artificial intelligence across all aspects of its business – gives us some indication of just how prominent these tools are becoming, and the concerns that need to be addressed as they develop.

Will the most interesting developments happen elsewhere?

Most of the companies mentioned above have their roots in photography and imaging, and so one would expect them to embrace AI as it starts to become more useful for a broader range of tasks. But many other photo-centric AI developments made by companies outside of this sphere also stand to shape future technologies.

Indeed, they already are. As we discussed in our article on online harms, social media platforms are heavily reliant on AI to combat harmful content, which reduces the need for human intervention by way of moderators. Google, meanwhile, uses machine-learning-based AI to power Google Lens, which recognizes subjects it’s shown, from plants and insects through to products that can be bought online. Our article on Google’s reverse image search feature explains this in more detail.

Another area in which much progress is being made is in the automotive industry, specifically in relation to the development of autonomous vehicles. While these rely on many separate systems – sonar, lidar, GPS and others that are currently used for things like self-parking systems and Automated Lane Keeping Systems (ALKS) – the testing we’ve been privy to so far shows, unsurprisingly, just how reliant they are on imaging-based AI too.

To date, the category has attracted development from the biggest names in and out of the automotive sector, from Tesla with its Autopilot feature (above) and Uber with its now-abandoned robotaxi plans, through to Waymo, now owned by Google’s parent Alphabet. Longstanding rumors of Apple joining the party, meanwhile, were recently given a boost with reports of a tie-up with Hyundai.

While a number of countries have now legalized the testing of these vehicles on public roads, a series of well-publicized incidents – some fatal – have shown the limitations of the technology as it stands, and the likelihood of it still being some time before these are reliable enough for widespread use.

Issues around the safety of AI are critical in autonomous vehicles, and these concerns have joined many others to do with privacy and the appropriate use of AI in general as the technology has developed. In 2015, Tesla CEO Elon Musk joined other investors in setting up OpenAI, an AI research laboratory intended to concern itself with AI that benefits humanity. DeepMind Technologies, now owned by Alphabet, has also addressed these concerns by launching its Ethics & Society research unit.

But it’s easy to imagine concerns over privacy and ethics will long continue, particularly in sensitive fields such as security. The recent storming of the US Capitol once again shifted focus to the use of AI tools to identify individuals after Clearview AI, a company that assists law-enforcement departments with AI-enabled facial recognition, experienced a spike in searches following the riots. Particularly troubling is the fact that its data set includes more than 3bn images that the company freely admitted to having harvested from social media platforms.

A number of companies are also currently involved in the fusing of traditional CCTV capture with facial recognition and AI, which can be used not only to detect known shoplifters or those suspected of anti-social behavior, but also to spot body language that suggests that a theft is likely to take place. A recent trial of this kind of technology by branches of UK supermarket Southern Co-operative provoked a backlash from privacy campaigners, although the company announced that it had no plans to roll the system out further.

It’s not the first supermarket to try this either. Sainsbury’s recently partnered with a startup called ThirdEye, employing a concealment detector in its stores to monitor theft. The technology behind the detector combines machine learning with conventional CCTV systems to spot when a customer places an item in their pocket, an action that triggers a video recording that’s subsequently sent to a member of staff. The trial, which ran over a period of six months, reportedly managed to deter 5,591 theft attempts. The video above shows this in action, together with a brief look at how such technology may be employed elsewhere in the same field.
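The trigger mechanism described there can be sketched quite simply: score each incoming frame with a trained model and, when the score crosses a threshold, save the buffered footage and alert a member of staff. In the sketch below the classifier is a placeholder (ThirdEye’s model is proprietary), and a webcam stands in for the CCTV feed.

```python
# Event-triggered clip capture: a per-frame score gates a saved recording.
import collections

import cv2  # OpenCV for camera capture and video writing

def concealment_score(frame) -> float:
    """Placeholder for a trained action-recognition model."""
    return 0.0  # a real system would run inference on the frame here

THRESHOLD = 0.9
BUFFER_FRAMES = 150                       # ~5 s of context at 30 fps
buffer = collections.deque(maxlen=BUFFER_FRAMES)

cap = cv2.VideoCapture(0)                 # webcam standing in for CCTV
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    buffer.append(frame)
    if concealment_score(frame) > THRESHOLD:
        # Write the buffered context to a clip for staff review.
        h, w = frame.shape[:2]
        out = cv2.VideoWriter("incident.mp4",
                              cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h))
        for f in buffer:
            out.write(f)
        out.release()
        print("Clip saved for staff review")  # stand-in for a staff alert
        buffer.clear()
cap.release()
```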

Final thoughts

There’s little question that AI will continue to be relied upon to solve complex tasks and help us in our everyday lives, both in ways we can currently appreciate and in tools and services yet to be invented.

The fact that some of these technologies developed for one industry have now been successfully implemented elsewhere gives us some idea of how quickly we might see things change. It also points to the ways in which some companies may end up expanding as their tools become more competent and valuable.

But it’s easy to forget that all of this is still very much in its infancy, and that the risks are still unknown. Companies that make use of these technologies will need to demonstrate the steps they’re taking to employ them responsibly, perhaps by following Fujifilm’s example and making their own AI policies public, or at the very least by incorporating these into existing CSR reports. Understanding how this space can or should be regulated has been a matter of debate for some time, and this is likely to grow more convoluted as the power of AI increases. Concerns over safety and privacy are unlikely to go away, especially when you consider that the average person has very little visibility over what’s being analyzed, the data derived from this, and the way this is subsequently used and stored.

Nevertheless, there’s much reason to be hopeful. If these tools are already playing useful roles in the fields of diagnostics and online security among others, it’s likely that we’ve only just started to appreciate just how significant these could be for humanity in the long run.
