AI tools aren’t going away – so what can be done about the issues that have long plagued them?

AI is, unquestionably, here to stay.

The vast majority (94%) of business leaders recently surveyed by Deloitte agreed that AI was critical for future success, with Forrester predicting that nearly every organization will be using AI by 2025. The artificial intelligence software market itself is set to reach $37bn by that same year.

But this does not mean that AI technology is without its flaws. One ongoing issue is bias, which can leak into systems at any point, negatively impacting face recognition technology, policing, and sentencing algorithms, for example, as well as hiring and credit scoring, most often doing a disservice to already marginalized communities.

When it comes to specific marketing and advertising activities, implicit bias – which occurs when an algorithm systematically produces flawed and prejudiced assumptions through machine learning – can have disastrous consequences.

From a brand safety perspective, AI systems might struggle to identify the faces of people of color, or incorrectly register something as unsafe. They can also skew online targeting, missing entire segments of the population – which results in lost sales, wastes resources, and costs a brand its reputation in the process.

Bias in the real world leads to bias in the data

Anything from human bias to computing power, program development, data volume, and data quality can shape machine learning models for the better or for the worse.

Failing to exercise due diligence over automated processes that involve consumer data is a mistake no brand wants to make. Advanced analytics and real-time decision-making might produce more accurate results, but they require more expensive hardware and more sophisticated software, which not every business can afford.

Bias can leak into systems and processes at any given time. Homogeneous developers and teams might fail to consider diverse perspectives, which leads to data sets that do not represent a population accurately. This produces false predictions, misrepresentations, or discriminatory targeting.
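
As a concrete illustration, a simple representativeness check can surface this problem early. The sketch below is a minimal example, assuming a hypothetical training table with a demographic column and some external reference shares (such as census figures); the column name, group labels, and 5-point threshold are placeholders, not real data.

```python
# A minimal sketch of a representativeness check: compare how often each
# demographic group appears in a training set against a reference
# distribution (e.g. census figures). Column names and reference shares
# are illustrative placeholders.
import pandas as pd

def representation_gap(df: pd.DataFrame, group_col: str,
                       reference_shares: dict) -> pd.DataFrame:
    """Return each group's share of the data next to its reference share."""
    observed = df[group_col].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        share = float(observed.get(group, 0.0))
        rows.append({"group": group,
                     "share_in_data": round(share, 3),
                     "share_in_population": expected,
                     "gap": round(share - expected, 3)})
    return pd.DataFrame(rows)

# Hypothetical usage: flag groups under-represented by more than 5 points.
# report = representation_gap(training_df, "age_band",
#                             {"18-24": 0.12, "25-44": 0.34, "45-64": 0.32, "65+": 0.22})
# print(report[report["gap"] < -0.05])
```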

The core of the issue is that bias in artificial intelligence reflects human biases, which are deep-rooted, informing all social, scientific, technological, and cultural knowledge – and, therefore, all systems and processes too. 

This means that curating more diverse and representative data sets to minimize bias is harder than it seems. For example, in one American study on brain activity, algorithms made more false predictions for African American (AA) subjects than for White American (WA) subjects, even when models were trained on AA data alone. While the reasons for this were unclear, researchers hypothesized that it could be because our biological knowledge of the brain stems from predominantly white patients, or because the machines involved are calibrated to white bodies and blood flows.

Legal and regulatory risks

It is well known that the data included in – or excluded from – machine learning processes can very easily lead to bias, and governments across the world have taken steps to prevent it.

The Information Commissioner’s Office (ICO) in the UK discovered such practices in the ad tech ecosystem, where actors tried to build detailed user profiles to, among other things, predict people’s ability to afford certain goods and services. Marketing strategies based on biased data can lead to discriminatory profiling and targeting, which is now a legal offense – one that Meta has already been sued over. A further four companies were also found to have violated Facebook’s own terms and conditions by age-restricting their financial ads.

Part of the problem is a lack of transparency around how AI comes to certain decisions, which is why the U.S. government’s National Security Commission on Artificial Intelligence (NSCAI) and National Institute of Standards and Technology (NIST) have both highlighted the need for rigorous and standardized auditing frameworks as well as systems of documentation. 

In the UK, Data Protection Impact Assessments (DPIAs) have outlined the regulatory risks involved in practices such as invisible processing, where data is collected and processed without direct consent, including obtaining data for one purpose and using it for another.

Consumers may agree to share their data without realizing exactly what they are consenting to. Terms and conditions are notoriously obscure, often leaving people unsure of what they have agreed to or how their data is being processed.

Data protection laws such as the GDPR in Europe, the UK GDPR, and the CCPA in the US seek to address these issues and are continuously evolving. Business leaders need to demonstrate that their data collection and processing practices are clearly outlined, that they mitigate bias, and that they follow local data privacy guidelines – or risk legal, financial, and reputational repercussions.

Privacy and security

Besides the clear risk to businesses and brands in terms of consumer trust, there are also challenges involved when it comes to using AI to manage, process, and store sensitive data.

The aforementioned lack of transparency within algorithmic processing creates similar privacy and security risks. More often than not, multiple independent parties are involved in the data journey, which often separates data collection from the building and training of an algorithm.

This obfuscates levels of accountability and responsibility. In the United Kingdom, for example, one study found that the roles of data “processor” and “controller”, as defined in data protection legislation, are not always clearly identified within business-to-business AI services, compounding the issue of liability.

This opacity up and down the supply chain further threatens data safety. Machines are not immune to misconfiguration or misuse by hackers or disingenuous users. In an industry such as advertising, where there is typically great reliance on sensitive user information concerning geolocation, health, or financial status, security needs to be a priority and part of any AI system.

Mitigating bias

For brands that want to avoid regulatory risks and treat their audiences fairly, there are ways to reduce bias within AI models.

Transparency plays a large role when it comes to taking accountability for AI results. Companies need to be able to explain how an algorithm arrives at a certain conclusion, clarify the role and responsibility of any third parties involved in the process, and demonstrate that anti-bias measures are being taken by any external personnel, services, or tools.
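
As a hedged illustration of what such an explanation could look like in practice, the sketch below uses scikit-learn’s permutation importance to show which inputs most influence a model’s decisions. The model and the synthetic data are stand-ins, not any particular production system.

```python
# A minimal sketch of one explainability technique: permutation importance,
# which measures how much a model's score drops when each input feature is
# shuffled. The model, features, and hold-out data are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in accuracy on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature_{idx}: importance {result.importances_mean[idx]:.3f} "
          f"± {result.importances_std[idx]:.3f}")
```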

Close scrutiny of data science methods can also help reduce algorithm-based bias and make AI models fairer. Beyond data input and processes, companies need to study results to discern whether model decisions negatively impact different demographic groups, or whether there are any statistically significant differences in accuracy across cohorts.
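
The sketch below outlines one way such a per-cohort audit could be run, assuming a hypothetical scored hold-out table with actual, predicted, and demographic columns; the 5-point accuracy gap used as a flagging threshold is an illustrative choice, not a standard.

```python
# A minimal sketch of a per-cohort accuracy audit: group held-out predictions
# by a demographic attribute and flag groups whose accuracy falls well below
# the overall rate. Column names, groups, and the threshold are assumptions.
import pandas as pd

def accuracy_by_group(results: pd.DataFrame, group_col: str,
                      y_true: str = "actual", y_pred: str = "predicted",
                      max_gap: float = 0.05) -> pd.DataFrame:
    """Report accuracy per cohort and flag groups far below the overall rate."""
    overall = (results[y_true] == results[y_pred]).mean()
    per_group = (results.assign(correct=results[y_true] == results[y_pred])
                        .groupby(group_col)["correct"]
                        .agg(accuracy="mean", n="size")
                        .reset_index())
    per_group["gap_vs_overall"] = per_group["accuracy"] - overall
    per_group["flagged"] = per_group["gap_vs_overall"] < -max_gap
    return per_group

# Hypothetical usage on a scored hold-out set:
# audit = accuracy_by_group(holdout_df, group_col="ethnicity")
# print(audit[audit["flagged"]])
```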

Finally, to avoid negligence, the security, safety, and reliability of all systems and processes need to be regularly reviewed against relevant privacy policies and regulations. Business leaders need to ensure that any breach can be detected and resolved. And since bias is such a likely culprit, human verification remains a must when using machine learning and AI, as do clear and detailed documentation and auditing of all processes.

The path ahead

The truth about AI is that while it can speed up processes, recognize trends, and make both predictions and calculated, data-driven decisions, there remains a risk of deploying human and societal biases at scale, further entrenching discriminatory views and practices.

Bias is embedded into our computational and statistical sources as well as our institutions and systems, which makes it difficult to extract. To tackle this issue, IBM has called on the advertising industry to dismantle bias within its ecosystem, announcing a research initiative and the development of a toolset to mitigate the impacts of biased AI. The Business Software Alliance (BSA) has also developed a framework for performing impact assessments and outlining best practices, governance, and safeguards.

In the meantime, however, brands and businesses need to do what they can to get the best out of their AI systems without harming their audiences.

 
