
Proactively detecting and reducing AI bias

Matthew Nolan

For industries where risk analysis is essential, such as financial services or insurance, advances in data collection and machine learning enable businesses to analyze large volumes of data, make decisions, and complete complex transactions in just milliseconds. Whether used for evaluating creditworthiness, detecting fraud, or underwriting liability, algorithms that learn and adapt in real time help businesses in these industries achieve massive scope and scale – and are a critical part of the decision-making process. Organizations in other, non-regulated industries have taken notice and are also looking for ways to apply AI-based machine learning to customer-centric operations. But just as in financial services and insurance, decisions based on sensitive information carry risk, and business leaders therefore need to be proactive about removing bias from their transactions.

The problem is, self-learning algorithms are designed and trained by humans, and humans have flaws. We’re liable to pass those flaws on to the AI – intentionally or not – through how we collect data, train models, and apply rules or logic when making decisions. We end up passing down algorithmic bias, and that becomes a huge liability – one that every business needs to be constantly aware of and proactively work to eliminate.

Bias represents a significant financial risk for businesses in the form of regulatory violations, lost opportunities, declining revenues, increased labor costs, and the loss of public trust and reputation.

For example, just last year, a very recognizable technology company and its financial partner were publicly called out for discrimination over their new credit card offering when tech influencers David Heinemeier Hansson and Steve Wozniak noticed their spouses were offered credit limits an order of magnitude lower than their own – despite a shared banking and tax history. Not only did the perception of bias hurt the brands’ reputations, but it also prompted an investigation into discriminatory practices by the New York State Department of Financial Services.

Stories like this may be one of the reasons that consumers view AI with skepticism. In 2019, Pega conducted a survey of 6,000 consumers about their views on AI and empathy. Only 9% were very comfortable with a company using AI to interact with them.

But, shouldn’t AI be able to tell that something’s biased, when it’s right there in the open for everyone to see?

Unfortunately, no. Human beings also have a hard time seeing bias, even when it’s right in front of our eyes. We’re good at locking down and eliminating the obvious problem areas (age, ethnicity, religion, or gender), but the same biases can be hidden in any of the data we use, even if we can’t see them on the surface. Attributes like income, zip code, net worth, and political affiliation are often strongly linked to those same characteristics. Because of that, bias can creep into your analytics and skew your decision-making.
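
Even before a model is built, a quick audit can surface these hidden links. Below is a minimal sketch in Python using pandas; the dataframe, column names, and the 0.4 cutoff are illustrative assumptions, not a complete fairness audit.

    import pandas as pd

    def find_proxy_features(df: pd.DataFrame, protected: str, threshold: float = 0.4) -> dict:
        """Flag numeric features whose correlation with a protected attribute exceeds a threshold."""
        # Encode the protected attribute as numeric codes so it can be correlated with
        # numeric features. A real audit would use measures suited to each data type
        # (e.g. Cramér's V for categorical fields); plain correlation keeps the sketch short.
        codes = df[protected].astype("category").cat.codes
        flagged = {}
        for col in df.select_dtypes("number").columns:
            if col == protected:
                continue
            corr = df[col].corr(codes)
            if pd.notna(corr) and abs(corr) >= threshold:
                flagged[col] = round(float(corr), 3)
        return flagged

    # Example: even if "gender" is never used as a model input, fields such as
    # zip_code or income may come back flagged as proxies for it.
    # find_proxy_features(applications_df, protected="gender")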

Actions you can take to identify and reduce bias

Looking back, it’s easy to call out companies’ AI transgressions, but that’s the beauty of hindsight. Since the harm from biased decisions can’t be undone after the fact, we need to be proactive in our prevention. Here are some actions you can take to help identify and remove potential bias:

  • Treat data quality as being of the utmost importance. With the availability of Big Data sources, businesses have a range of data to choose from when constructing machine learning algorithms. As my colleague Vince Jeffs advises, “seek clean data sources.” Be conscious of behavioral and class identifiers in your data that could carry inherent bias, such as first language/language of choice, level of education, employment activity/occupation, credit score, marital status, and number of social followers, among others. Ultimately, bad data creates bad models and results in biased AI.

  • Seek input from diverse and inclusive collaborators. Because we all have a unique baseline of experiences and biases, it’s important to collaborate with a diverse group of people when developing AI-based logic and learning algorithms. For example, a recent study of automated speech recognition (ASR) systems found that major players – Amazon, Apple, Google, IBM, and Microsoft – had higher error rates when transcribing and analyzing non-white voices. The discrimination stemmed from models trained on biased data that underrepresented non-white individuals. A workgroup that reflects diversity in attributes such as age, gender, ethnicity, and geography, as well as level of ability or disability, can offer insightful feedback to improve algorithms and learning.

  • Take an “always-on” approach to bias protection. AI relies on dozens, hundreds, or thousands of models. Checking them intermittently won’t work – especially for adaptive models. Instead, you need to run bias testing continuously, as a standard course of action, on all models for every decision. There are tools available in the marketplace to help with this. Our own Ethical Bias Check tool is unique in that it proactively detects bias in next-best-action strategies and allows you to adjust the underlying model, strategy, or business rules accordingly. Only a few large brands are doing this right now, but every brand should adopt it. (A simplified illustration of this kind of continuous check follows this list.)

  • Get ahead of future regulations. Know your own values and your customers’ values, and adjust your bias thresholds to them. You shouldn’t aim only for regulatory compliance. Regulations typically lag well behind digital innovation and social norms – so if you wait until a regulation is enacted to react, you’ll always be one step behind. Your customers want you to reflect the social values of today and expect responsibility on your part. You need to be thinking about the future and working to identify both present and future forms of bias.
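
To make the “always-on” point above concrete, here is a minimal sketch of a recurring bias check that compares each group’s positive-decision rate against the best-served group and flags anything below a chosen ratio (0.8, echoing the common four-fifths rule, is an assumption here). It is an illustration only, not how any particular product – including our Ethical Bias Check – implements the test; the column names and the alerting hook are hypothetical.

    import pandas as pd

    def disparate_impact(decisions: pd.DataFrame, group_col: str, outcome_col: str,
                         min_ratio: float = 0.8) -> dict:
        """Return groups whose positive-outcome rate falls below min_ratio of the best group's rate."""
        rates = decisions.groupby(group_col)[outcome_col].mean()   # share of positive outcomes per group
        reference = rates.max()                                    # best-served group as the baseline
        ratios = rates / reference
        return {group: round(float(ratio), 3)
                for group, ratio in ratios.items() if ratio < min_ratio}

    # Intended to run on every scoring batch, not just at model-build time:
    # flagged = disparate_impact(latest_batch, group_col="age_band", outcome_col="offer_made")
    # if flagged:
    #     notify_model_owners(flagged)   # hypothetical alerting hook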

Transparency will also help build trust

Algorithms and their predictions need to be trusted. This means they need to be fair and explainable, not opaque or “black box.” And that’s a challenge. Machine learning algorithms are not necessarily linear – they evolve as they learn, and in their newly evolved state they could introduce new bias. Transparent algorithms can be inspected to understand the lineage of the AI – who created it (human or machine), what data is used, how it is tested, and how an outcome is determined from that data. Setting appropriate thresholds for transparency and taking steps to identify and eliminate bias will help any organization, regulated or not, operate more ethically, ensure fairer and more balanced outcomes for everyone, and bolster its reputation as a trusted brand.
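
One lightweight way to support that kind of inspection is to keep a lineage record alongside every model version, so the “who built it, on what data, how was it tested” questions can be answered on demand. The sketch below is an assumption for illustration – the fields and names are not any specific product’s schema.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ModelLineage:
        """Captures who created a model, what data it used, and how it was tested."""
        model_name: str
        version: str
        created_by: str                   # a person, team, or automated pipeline
        training_data_sources: list
        bias_tests_run: list              # e.g. ["proxy_feature_scan", "disparate_impact"]
        created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    record = ModelLineage(
        model_name="credit_limit_model",  # hypothetical model name
        version="2.3.1",
        created_by="risk-analytics-team",
        training_data_sources=["applications_2019", "bureau_extract_q3"],
        bias_tests_run=["proxy_feature_scan", "disparate_impact"],
    )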

The world’s largest brands differentiate themselves by knowing their customers and connecting with them on a personal level. They are vigilant about protecting customer relationships, building joint value, and demonstrating transparency in their transactions. By taking steps to identify and remove bias, you can proactively protect your valuable relationships while building a trusted brand identity.


Tags

Topic: AI and Decisioning
Product area: Customer Decision Hub

About the author

Matthew Nolan, senior director of product marketing and decision sciences at Pega, helps organizations orchestrate customer journeys, personalize engagement, and maximize customer lifetime value with AI and next-best-action intelligence. He is also a regular keynote speaker who shares professional insights drawn from nearly two decades of marketing technology experience.
