'DEBIASING TECH' Series

Debiasing AI / Q&A

Artificial Intelligence has the power to accelerate positive change in the world, but only if it serves all of us. Below, senior tech leaders answer five questions on this topic.

17 Jan 2020

Introduction


This article is the first in a new content series titled ‘Debiasing Tech’ produced by the BIMA Tech Council.

AI has the potential to be a significant force for good – but humans must provide supervision and accountability to ensure it’s deployed in balanced and successful ways, especially given that empathy and human connection are needed more than ever in today’s culturally turbulent times.

Below, four senior tech leaders answer a handful of the most important ethical and operational AI questions.

Our contributors this week

Jillian Moore

Global Advisory Lead, Avanade

Peter Barker

CTO, Rufus Leonard

Tiffany St. James

Managing Partner, Curate42

Adam Boita

Marketing Director, NCS

Do you still trust AI to be inclusive and balanced?


Tiffany St. James – “No. AI is only as inclusive and balanced as the team that designs it, and much has to be done across the industry to ensure that AI design has diverse contributors to mitigate bias – whether of race, gender, age, sexual orientation or neurodiversity.”

Peter Barker – “Although there is not a ‘yes’ or ‘no’ answer, our industry should always challenge the bias of AI. It’s most likely that our industry will use commercially available algorithms, rather than create our own. The accuracy of these algorithms and reduction in error rates has improved significantly over the last 3-4 years. However, bias will occur from how models are trained and how algorithms are implemented. An intentional lack of trust will ensure appropriate examination of the multiple sources of bias, helping us determine whether an AI is indeed inclusive and balanced.”

What are the biggest business and brand risks if something goes wrong?


Jillian Moore – “Financial, reputational and ethical. Financial is the most straightforward and the easiest to handle – if the potential gains exceed the losses, then it’s worth it. And organisations are used to building legal constructs that protect them, such as damages clauses. Reputational risks are harder to handle but not insurmountable. AI doesn’t introduce any damage to reputation that couldn’t also happen elsewhere, so again there are mechanisms in place for handling this. Ethical is the hardest, as it is the newest, least understood minefield to navigate, and the long-term risks of ethical issues for companies are not yet well understood.”

Adam Boita – “There is a big spectrum to AI. We are not yet at the point of singularity, which could ultimately transform the way we live our lives as well as business. However, focusing on the now, as always it’s not ‘How can we use AI?’ but ‘Can AI (alongside many options) be used to solve this problem?’. Technology is an enabler, not a solution unto itself. Testing and learning in a controlled environment is key before wider use, to avoid errors and potential damage to brand reputation.”

Jillian Moore, Avanade
"AI doesn’t introduce any damage to reputation that couldn’t also happen elsewhere, so again there are mechanisms in place for handling this."

How can debiasing AI help to unlock new opportunities?


Adam Boita – “The good news is that the less biased you force yourself to be, the better the performance you can deliver. The speed and efficiency created by the correct use of AI means you can test a vast range of creative routes and optimise them to the audience’s needs. This means more data and better insights, which should lead to better work. The old advertising model was focused on getting one advert that was right for everyone, based on biased strategies and creative best guesses. AI unlocks the potential to make thousands of ads that morph and adapt to a diverse audience’s needs in flight.”

Jillian Moore – “If AI can truly be debiased, that moves it beyond the fundamental flaws of human thinking, which in turn unlocks all kinds of views, options and solutions which humans haven’t identified. A different perspective can only be a good thing, especially at this time where world events are forcing massive change at an even more rapid pace than before.”

How have your strategies for debiasing, deploying and monitoring AI evolved this year?


Tiffany St. James – “Helping the United Nations, in collaboration with EY, launch their Gender Innovation Principles in 2018 led me to research and review inequality and bias across many different aspects of technology. We now not only use the UN Global Innovation Coalition for Change’s principles in all aspects of AI, but also work hard to debias the humans building and coding technologies.”

Peter Barker – “We’ve broadened the field of disciplines involved when creating a solution. We work with third-party data scientists, representative business stakeholders and subject matter experts to form hypotheses and predicted results, which we independently interrogate. In addition, we monitor production results against training results to determine whether bias is being introduced by production data sets. It is believed there are around 180 different types of human bias. This highlights the need to examine both the conscious and unconscious biases that may be present in a solution’s designers, developers and stakeholders. Therefore, in addition to working with third parties, we provide proactive unconscious-bias awareness training for our teams.”

Adam Boita, NCS
"There is no biased AI without biased people. So start with a diverse team across the board. Every link and step in the process is an opportunity to let bias creep in."

What’s the first step to enhance inclusivity through debiased AI?


Adam Boita – “There is no biased AI without biased people. So start with a diverse team across the board. Every link and step in the process is an opportunity to let bias creep in. This is easier said than done but make sure you agree at the start of the activity how you are going to hold yourself accountable for removing bias and what the red flags will be as you progress. Don’t blame the AI – use the data to learn and take responsibility to be part of the solution, not the problem.”

Tiffany St. James – “The first step is to make a commitment to ensuring you have an inclusive response to all innovation. Other key steps include:

  • Design innovations that include diverse end-users.
  • Implement innovations with consideration of the broader ecosystems that we participate in.
  • Evaluate the impact of innovations on diverse groups using a data-driven approach.
  • Scale innovations that provide sustainable solutions to meet the needs of diverse groups of people.”


Many thanks to Tiffany St. James, Jillian Moore, Peter Barker and Adam Boita for their contributions.

Please give us your anonymous feedback on this and future articles in the series using our form.