Building an Ethical AI Part II: Empathy

by cauri jaye

[This series of posts explores how we at Sesh are discovering the principles of building an ethical artificial intelligence through practical means.]

Can we understand how an artificial intelligence makes decisions better than we understand how we make our own?

My wife tells me that this reads more like an essay than an article. I decided to publish it anyway. How did I make that decision? How do we make any decision?

Think about the last time you decided what to have for lunch. Why did you make that decision? Did you weigh options carefully or just use your gut instinct? Did you decide because it was convenient or because you had a craving? What drove your choice? If we can barely understand our own motivations, how can we understand those of a machine that we have taught to choose for itself?

Human Decisions 

When you or I make a decision we often think that we have made it for some rational reason. If asked, we can usually describe the logic that brought us to our choice.

On other occasions we make a quick decision and, after the fact, ascribe it to some emotional driver, yet we can still explain the choice. In reality, we have no real idea why we made it.

Recent discoveries in neuroscience and affective science (the science of emotions) show that we have it backwards. The processes that drive our decisions and emotions are 99% invisible. They happen below the threshold of our logical mind.

In essence, we ascribe logic to a decision to justify it; the logic does not drive the decision itself. The real drivers come from a complex cacophony of memory, genetics, nutrition and health.

Here’s how this breaks down: 

  • our memory stores our cultural and contextual impulses; 
  • our genetics drive our nascent cognitive abilities and epigenetic memory; 
  • our nutrition drives our current flow of neurotransmitters and the overall efficiency of our brain at any given moment;
  • and our physical and mental health dictates the effectiveness of our central nervous system.

 

"We make decisions based on subconscious processes and then use conscious processes to justify them."

 

This summarises decades of complex research and some controversial views, several of which are still debated in academia and will invite further discussion and disagreement, which I welcome. As we continue to push the research forward, I feel confident in this summary: we make decisions based on subconscious processes and then use conscious processes to justify them.

How does this compare to our emotions?

Human feelings

Similarly, we have now found that we have got emotions backwards as well. Common understanding of emotions paints a picture of emotional circuitry in the brain, wired for sadness or elation or fear. Feeling fear? It has been thought that your neuronal fear connections light up in your brain, your blood quickens, your adrenaline spikes, your stomach tenses up. In fact, it does not work this way at all.

Imagine two recently graduated students, Julian and Eva, going for their first job interviews. Julian has lived a very sheltered, predictable life. His parents protected him, advised him and guided him. He knows he always feels nervous at these kinds of times. Sitting outside waiting for the interview, he worries what he will do if he fails. Will his life fall apart? Will he ever get another job? He tenses up, his heart races and his stomach turns into a knot. He feels worried.

Eva’s parents, on the other hand, have moved from place to place her whole life. She has had to make friends wherever she goes or make do with being alone. Her parents were often busy with work but supported her and loved her even when she rebelled as a teen. She knows she feels excited when arriving at a new place. Now, sitting waiting to be called in for her interview she tenses up, her heart races and her stomach turns into a knot. She grins from ear to ear; she feels so excited to start this new adventure no matter what comes.

Herein lie emotions: not pre-set feelings that all humans share, but a complex set of physiological responses, projected from our cultural upbringing and personality in a given situation, that we each name differently.

[Image: cluster graph]

Eva and Julian projected what they would feel before they got there, creating responses in their bodies and minds. To make understanding emotions even more confusing, they felt the exact same things in their bodies, but they interpreted them differently and gave them names appropriate to them: 'nervous' for Julian and 'excited' for Eva. This then informed their actions.

Our brain makes its best predictions on how to respond based on past experience. This prediction in turn creates physiological changes. We then name those responses and imbue them with meaning based on our context. 

Context has a broad purview: our species, our country, our ethnicity, our community, our family, our personality, our situation all contribute to this context. When talking to others from a similar context we tend to identify and interpret emotions similarly.

The farther we are from a shared context, the more difficult we find it to understand each other’s emotional reactions. Empathy requires contextual understanding.

What has this got to do with smart machines? 

Machine context

Well, everything. A machine that learns has the same issues as a human that learns: everything happens in context. In an artificial intelligence, layers of interrelated calculations and classifications create the context of every decision made, making it impossible to truly explain these choices.

 

"If a person could describe every facet of their personality, knowledge and emotions that drove any decision, the explanation would be as complex as their brain."

 

When relationships between data points get complex enough, no matter how detailed a visual we create, that visual cannot accurately represent the data transformations necessary to make a decision. 

As an analogy, a map does not equal the territory it represents. If a map were a completely accurate representation, it would have all the details of the actual place, and so would be indistinguishable from it and just as complex and large as the place itself, rendering it useless. Said differently, you cannot drink the word 'water'.

In the same way, if a person could describe every facet of their personality, knowledge and emotions that drove any decision, the explanation would be as complex as their brain.

To describe a decision we simplify it, leaving only what is necessary to transmit ideas and understanding. An artificial intelligence has to do the same.

So how do we visualise the thoughts of a machine?

Machine thinking

Exposing an artificial intelligence’s thinking in any meaningful way is as messy as exposing a human being's thoughts: we can represent it visually or verbally but it only ever scratches the surface.

[Image: AI visualisation map]

We have three areas of visualisation: 

  • data: first, we can look at the data that informs the system. Think of this as equivalent to the memory of your experiences. For example, your notion of a table is made up of many memories of tables you have seen. If you could line up a visual of every table you have ever seen in your life, this would be the dataset that forms the concept of 'table'
  • model: second, we have a layer filled with learned concepts. The dataset from the data layer resolves into concepts which the machine can recognise, much like your concept of 'table'. This layer results from training the artificial intelligence on the many concepts relating to its domain
  • predictions: third, the artificial intelligence takes inputs from the outside world and attempts to recognise concepts and make predictions. This is equivalent to you walking around, seeing something that is possibly a table, and predicting that you could put a book on it (a minimal sketch of all three layers follows this list)
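
To make these three layers concrete, here is a minimal sketch in Python. The digits dataset and logistic regression model are stand-ins chosen for illustration, not a description of how our system at Sesh actually works.

    # Minimal sketch of the three layers, using stand-in data and a stand-in model.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # 1. Data layer: the 'memories' the system learns from.
    digits = load_digits()
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.2, random_state=0)

    # 2. Model layer: concepts distilled from that data during training.
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)

    # 3. Prediction layer: applying the learned concepts to new inputs.
    print(model.predict(X_test[:10]))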

A lot of debate about artificial intelligence revolves around how much we let it act on its predictions. I'll go into that in a later article; right now I want to focus on how we can understand its predictions in the first place.

Visualising these layers of artificial intelligence has expanded the once-niche work of data visualisation into a whole field, worked on by some of the brightest minds on the planet. 

Data dreams

We use tools to export visualisations of these layers. We then look at the visuals and play with the controls to spot patterns and garner understanding.

Tools like Google's Facets and the What-If Tool seek to display the data level of an artificial intelligence. This visualisation helps us do things like spot biases in the data. For example, at Sesh we look at a lot of videos of conversations between people (with appropriate consent, of course).

Smart machines are notorious for having a hard time dealing with images that have low contrast, i.e. where there is little difference between light and dark pixels. 

This manifests in an awful way in an artificial intelligence like ours: footage of someone with darker skin naturally has lower contrast, which is compounded in low-light situations.

So we use our tools to sample faces and create a map of some pixels of all the faces in the system. This gives us a colour graph which quickly highlights any skin tone bias in the data. In actuality it really represents low-light situations, but the correlation with skin tone is high enough to be significant, allowing us to monitor and avoid any racial bias that may creep in.

[Image: colour graph]
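
As a rough illustration of this kind of check (a sketch, not our actual tooling), the snippet below computes the mean brightness and contrast of a folder of face crops and plots them; the 'face_crops' directory and the PNG format are assumptions made for the example.

    # Rough sketch: summarise brightness and contrast of sampled face crops.
    # 'face_crops' is a hypothetical folder of pre-cropped face images.
    from pathlib import Path

    import matplotlib.pyplot as plt
    import numpy as np
    from PIL import Image

    def luminance_stats(path):
        # Mean luminance and contrast (standard deviation) of one greyscale crop.
        pixels = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
        return pixels.mean(), pixels.std()

    stats = [luminance_stats(p) for p in sorted(Path("face_crops").glob("*.png"))]
    means, contrasts = zip(*stats)

    # Clusters of low-contrast crops flag the low-light cases described above,
    # so the dataset can be monitored and rebalanced before bias creeps in.
    plt.scatter(means, contrasts, alpha=0.5)
    plt.xlabel("mean luminance")
    plt.ylabel("contrast (standard deviation of luminance)")
    plt.title("Brightness vs contrast of sampled face crops")
    plt.show()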

This externalisation of the internal processes of the system quickly gives deeper understanding of the influences in the system.

There exist many examples where data visualisation can reveal bias, error and omission in datasets. It forms a great bridge between machine and human understanding of data, in the truest sense of augmented intelligence.

Model dreams

When we teach a child a concept, we often devise inefficient ways of assessing whether they have understood it. With good language skills they can express it, but language itself relies on complex concepts to describe most things. They can instead draw a picture of what they understood, and that often more closely depicts their understanding.

Tools such as Google's Embedding Projector help to visually display the logic of an artificial intelligence. Specifically, they reveal the relationships and clustering in the data that map to new concepts. This visualisation can often reveal problems with the underlying data or, in the most revealing cases, a relationship between real-world concepts that we never noticed before.
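
As a simple sketch of what this kind of projection involves, the snippet below reduces a set of embedding vectors to two dimensions with PCA so that clusters of related concepts become visible. The random vectors and concept names are placeholders; in practice you would export the vectors from your own trained model and could explore them interactively in the Embedding Projector.

    # Sketch: project placeholder embedding vectors to 2-D to inspect clustering.
    import matplotlib.pyplot as plt
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Placeholder: 300 vectors of dimension 64, drawn around three rough centres.
    embeddings = np.concatenate(
        [rng.normal(loc=c, scale=0.5, size=(100, 64)) for c in (-2.0, 0.0, 2.0)])
    labels = np.repeat(["concept A", "concept B", "concept C"], 100)

    # Reduce to two dimensions so any clustering becomes visible to the eye.
    coords = PCA(n_components=2).fit_transform(embeddings)

    for name in ("concept A", "concept B", "concept C"):
        mask = labels == name
        plt.scatter(coords[mask, 0], coords[mask, 1], alpha=0.6, label=name)
    plt.legend()
    plt.title("2-D projection of (placeholder) learned embeddings")
    plt.show()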

These moments of genuine learning, or of confirmed hypotheses, count among the most groundbreaking in the world of artificial intelligence. They lead to many of the breakthroughs we see in the modern world, such as machine natural language processing and translation.

Machine dreams 

At the moment, artificial intelligences specialise in specific areas of knowledge. The concepts they learn relate to language, images, business analytics, school grades, and so on. Once they learn their area of expertise, they can then make predictions or come to conclusions based on new inputs to their model. Each machine shows these results differently.

OpenAI's GPT-3, for example, can write whole texts that seem to come from a human (mostly). It expresses its knowledge in words. Netflix's recommendation artificial intelligence shows us a stream of shows and movies that keeps us glued to the sofa long after we ought to have gone to bed. It expresses its knowledge as a stream of posters for shows. Our artificial intelligence at Sesh looks at state of mind and expresses its knowledge as thought bubbles above people's heads.

The way smart machines express their inner thoughts will grow and change over time. More advanced artificial intelligences make choices as to the best way to express themselves visually. These expressions will only become more varied as their options grow. 

Every artificial intelligence exposes the predictions that drive its decision making differently, but each gives us insight into how it came to its conclusions.

Mutual understanding 

Humans do not have innate empathy, but we can learn it. We learn to understand other people's points of view. Empathy is not inherently good or bad; some of the most empathetic people on the planet choose to work as con artists, a highly unethical practice.

When artificial intelligences and humans communicate, we can use what we learn from each other ethically or not. This collaborative and mutually beneficial interaction turns artificial intelligence into augmented intelligence.

[Image: ethical framework - AI comms]

Pure understanding between a human and a machine requires an effort by both parties to listen when the other expresses itself. Artificial intelligences communicate through charts and graphs and words and images and voice, in all the myriad ways that we do as humans, and more. We use these visuals and expressions to understand the machine's decision logic much the way we use words to understand each other's.

This basic empathy helps us understand the opportunities that augmented intelligence presents and learn something new about smart machines and about ourselves.

 

READ PART I: 'How to Build Ethical AI Part I - Truth'


Tools of the trade

If you want to dig deeper, we use these tools to visualise our AI's data, model and predictions:

 

Tags: AI, Ethical AI, Empathy, artificial intelligence
