How cognitive empathy will change how you see the world

By Phil Harvey
23 Mar 2022

Empathy isn’t just a biological capability; it can be learned. Once you figure out how, a whole new world of diverse understanding will open up to you. Introducing: cognitive empathy.

By Phil Harvey, Autonomous Systems Architect at Bonsai (Microsoft Research), BIMA AI Council Member

In our 2021 book Data: A Guide to Humans, Dr Noelia Jimenez Martinez and I explore the learnable skill of cognitive empathy – the rational act of seeking to understand the perspectives of others – in relation to data work. Three things became clear from that work:

  1. Under the right model, cognitive empathy is a skill you can practise, learn and improve at, just as you would with art, design or data science. It is in this way that empathy can become a rational action and not just a biological capability.
  2. Many issues arising from the use of data and its applications in lived human experience come down to failures of empathy between groups of people with different skills and experiences.
  3. Cognitive empathy is a methodology that can be applied to systems of any scale. We can have empathy for our planet as much as for each other.

The frame we will use when exploring the need for a diversity of skills and perspectives in the AI industry is this: we each have a perspective as individuals, and we create and share a perspective in any team or organisation we are part of.

For example, the BIMA AI Council itself has a perspective, and our discussions and work are developed within it to share with the wider BIMA community. When deciding what to do, we have to consider how the BIMA community’s perspective relates to the AI Council’s. This is cognitive empathy in action.

Focus on the perspective, not the person

Let’s look at other perspectives to better understand this kind of empathy. I’ll use two contrasting perspectives on AI to frame the discussion.

Example 1: Visual media and language contribute to, sustain and perpetuate sociotechnical blindness

“Much of what we think we know about AI is influenced by what we ‘see’ it to be from images and video transferred across the world. These come to us in many forms: an image on a blog, a news site, a blockbuster sci-fi film, a fancy walking, talking gadget (Hello, Sophia!). We’ve started the first phase of our project, Is Seeing Believing?, by exploring how visual media has contributed, along with language, to sustaining and perpetuating sociotechnical blindness. In addition, we are investigating how AI imagery, distributed through digital networks and search engines, has educated the public about this very powerful and increasingly pervasive technology.”
– From Is Seeing Believing? by Anne T. Rogers and Lisa Talia Moretti 

This is a perspective I had never considered before reading the piece, yet one from which I learn and grow. My perspective overlaps with the authors’ enough that it stretches me towards learning.

Example 2: Liberals must stop politicising AI

“If you question this [not debiasing socially impactful algorithms] or any other of a wide range of liberal demands on AI, you’re in for a lot of grief. The more prominent the researcher that gets cancelled, the better, because it sends the most chilling message to everyone else, particularly junior researchers.”
– From We must stop militant liberals from politicizing artificial intelligence, The Spectator World

This is a perspective I do not share. I will admit that I find reading an article written from this perspective uncomfortable. That feeling of discomfort is the result of the distance between my own perspective and the one presented by the author. In our book, we give strategies for handling conflicts like this. By learning more about the perspective the author has shared, trying hard not to be driven by my emotional response, and focusing on the perspective rather than the person, I discover that he posts strong opinions arguing that ethicists should not be allowed to participate in the AI industry. I get the impression his perspective leans towards the data-is-all-you-need, radical-empiricist school of AI, and potentially towards scientism.

I find the perspective shared in the first article comfortable because it overlaps with my own, which is built on a wide-ranging and interdisciplinary foundation: from architecture and advertising to programming and applied AI research. I learn from this perspective because the new aspects it shares are close to things I already understand and believe in. I am taken comfortably into the learning zone (sometimes called the stretch zone). The second perspective is uncomfortable because it lands me in the panic zone; the distance between my perspective and the author’s is vast. It is because of my experience with diversity that I am able to keep calm and seek to understand rather than react and reject.

Moving from code to people

A few years ago, the second perspective wouldn’t have been as jarring, and I must admit as hurtful, as it is today. When I was a professional programmer I believed different things because I saw the world a certain way. I met different people and engaged with different things back then. As a result, different perspectives were comfortable for me, because I had matured within that paradigm.

When I chose to give up programming, it changed the path I was walking. I stepped out of my old paradigm, and by doing so I changed what I read, who I met and what I did. I had to rediscover what ‘work’ meant to me. To understand the new perspectives people were sharing with me, I had to practise skills like active listening and suspending judgement. I began to re-evaluate my beliefs against different measures. My perspective changed as a result of exposure to things that differed from what I had grown used to. I grew and changed as a person. This is the journey I have taken, and I remain respectful of the choices and journeys others have taken.

Certain points on my journey have been painful. I have felt the need to defend myself, both as a programmer and as an ex-programmer. I’ve shifted from “Why won’t those people leave me alone? I am an expert!” to “Why am I upsetting these technical people now?” Along the way, I have learned to recognise and work through these scenarios with empathy. One of my biggest lightbulb moments has been realising that working in more diverse environments has taught me to act with greater empathy. Empathy and diversity are interdependent: diversity exposes you to different perspectives, while empathy enables you to better understand the perspectives that differ from your own.

Applying the theory

It is through application that theory becomes a person’s reality. A tangential but useful article on this is Moral Philosophy & Being Good, in Philosophy Now. Diversity, empathy and ethics are related fields, and I find AI ethics to be a good, relatively new example of applied ethics: it makes real the ethical questions philosophers have been tackling for thousands of years. AI gives us a new lens, and therefore a new opportunity, to look at our theories of empathy, diversity and inclusion and to consider how we integrate different perspectives into the systems we build.

Why does diversity matter to teams working in AI?

I am going to start with a clear answer to the above question: diversity matters in teams working in AI because AI will impact the world, and the world out there is more diverse than you can possibly imagine. If you do not serve your intended audience, or if your impact is found to be negative, contravening laws or social norms, you will fail no matter how clever you think your solution is.

To quote a fellow council member: “Diversity matters everywhere, but in AI its importance gets amplified through the use of models in multiple settings.”

As someone on the philosophical end of the spectrum, I know I need to work with members of Team Maths (I am not good at reading Greek-letter formulas), and as an ex-programmer I know I need to work with people with advanced humanities skills to understand the impact of what is created. This is why, while working with the team on the L6 Data Science apprenticeship, I worked hard to champion Behaviour 2: “Empathy and positive engagement to enable working and collaborating in multi-disciplinary teams, championing and highlighting ethics and diversity in data work.”

A question for you to answer

While this is an opinion piece based on my experience, I hope my diverse experience of working in and out of AI for over 21 years offers a perspective you’ve enjoyed reading. There are many more articles to be written on the nature of programming as a career and how it evolves with new advances in AI. I also look forward to new studies on the nature of tech work, its use of time and its teams, and how these affect our choices, social interactions and outlook on life.

To end, I ask you this: when was the last time you looked at the makeup of your team? Not only from the perspective of physical diversity, but from one that considers diversity of background, training, experience and perspective. How much could you grow, learn and do by adopting a new perspective? You won’t know until you do.

Interested in reading more? I can highly recommend the paper Implementing Grutter’s Diversity Rationale: Diversity and Empathy in Leadership (SSRN) by Rebecca K. Lee, which argues that “greater diversity and empathy are needed for effective leadership in diverse settings.”
