How to Build Ethical AI Part I: Truth

by cauri jaye

[This series of posts explores how we at Sesh are discovering the principles of building an ethical artificial intelligence through practical means.]

Do we start with the principles or do we start with the practice?

Technology companies in the 2000s figured out that building a business does not start with a 60-page business plan; it starts by testing an idea in the market and then doing what works.

Building a business product-first became known as Lean Startup. Similarly, over the past few years I have realised that building artificial intelligence must not begin with designing a model either, but with building on data and then doing what works.

This became apparent when our team at Sesh tried to create an AI ethical principles document to guide how we build our new product. We couldn’t do it. Here’s why.

Principles of ethical artificial intelligence

There are thousands of companies, organisations and governments around the world that have written principles of artificial intelligence. Recurring themes include privacy, transparency and anti-bias - but very little practical methodology. For the most part, they feel like they were written by marketers rather than data scientists.

So we decided to think about how to develop practices that would ensure we build ethical AI. From these practices would emerge our principles. 

The problem with data 

AI relies on data to learn. This learning allows a machine to predict outcomes. Some machine learning requires mountains of data, which it uses to find the most likely patterns and determine how to make predictions (big data). Other models require small amounts of high-quality data to train and suggest predictions. Either way: data rules.

Many machine learning projects are desperate for data, so when they get some, they do a quick cleanup and immediately push it into the model to see what they can learn. This often leads to a few changes and some increased precision in the decisions made. There the journey usually ends until more data is found or created.

This eagerness for data has led to real-world negative consequences as AIs make recommendations that people follow. In the USA, for example, it has produced systems that recommend incarcerating black people at higher rates than white people. More problematic still, those decisions become new data for other artificial intelligences, government policies and corporate practices. This compounding can grow exponentially, embedding and cementing systemic negative biases into our laws, commerce and digital lives.

[Figure: ethical framework - data fidelity and bias]

Finding Bias 

So how do we erase bias? Well, the short answer is: you cannot. All information contains some bias. That is the nature of information. Even a measurement of the speed of light, something considered a fixed constant, can be biased by how it was measured, calculated and interpreted. However, we can offset bias by understanding context. We have spent the last five hundred years perfecting a practice to assess context and reduce bias: the scientific method.

The method in short: hypothesise, test, measure, report - rinse and repeat. This last step - rinse and repeat - might seem a throwaway, but on the contrary, it’s arguably the part that regulates bias the most.

When a group runs an experiment and writes up its conclusions, it includes exact instructions for repeating the experiment. Independent peers then repeat it. If they reach the same conclusion, we have a strong result.

Lean Startup

What has this got to do with AI? Let’s have a look at the method used in another domain first. 

Between 2005 and 2011 a new method emerged for creating businesses. Steve Blank identified a set of principles for customer development, which Eric Ries then codified as the Lean Startup methodology. In essence, this was the application of the scientific method to business.

Hypothesise a product, design a test in the market, measure that test and draw a conclusion - rinse and repeat. Rather than peer review, the hypothesis is tested in the market to get a purely empirical response. This methodology launched products that are household names today, such as Dropbox and Groupon, as well as products from large companies like GE, AT&T and Google.

Once Lean - as it became known - began to dominate the technology world, the approach crept into marketing. Growth hacking and high-growth marketing use the exact same cycle: hypothesise, test, measure, learn. As with products, peer review happens mainly when a competitor successfully copies you!

Now we have a new area in which to apply those Lean principles: machine learning. Developing AI relies on the same cycle: hypothesise, test, measure, learn - repeat.

Squelching bias - an example

Our company thrives on the scientific method. We use it to determine product features, design marketing language, guide user experience, create internal practices, even determine how we organise our team. It sits deep in our DNA. So no surprise that we apply it to our machine learning as well. 

Before we even get a new set of data, we model it in our system. For example, different emotions elicit different levels of desire for more information. In a happy state, people do not want a lot of new information, as they generally want to maintain that state of mind rather than disrupt it. We can see this at the extreme: when you are orgasmic, you do not want any additional information, as anything would disrupt the emotion.

On the other side, when you feel angry, your desire for information rises. Think of a couple fighting: “why would you say that?”, “what made you do that?”, “when did it happen?”, “are you crazy?!?”

[Figure: state-of-mind word cloud]

The desire for additional information increases: whether to justify your feelings or to end them, you want to know more.

Studies exist around this effect, and we could simply take the data and assign a score to each sensation: delighted: 0.5, happiness: -0.7, orgasmic: -1.0. However, neuroscience applied to affective science (the study of emotions) is fairly new, and we are all still learning.
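
If we did take that shortcut, those scores could live in a simple lookup table, kept well apart from the prediction model. A minimal sketch, using the hypothetical numbers above (the "angry" entry is our own illustrative addition, not a study result):

```python
# Illustrative information-desire scores per emotional state.
# Scale runs from -1.0 (wants no new information) to 1.0 (strongly
# seeks more information). These values are hypothetical placeholders
# echoing the examples in the text, not real findings from studies.
INFO_DESIRE_PRIORS = {
    "delighted": 0.5,
    "happiness": -0.7,
    "orgasmic": -1.0,
    "angry": 0.8,  # assumed: anger raises the desire for information
}
```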

At Sesh, we hypothesise that certain emotions will elicit a desire for more information. We use existing studies to estimate this desire. We build these estimates into our technology, but we do NOT let them affect the model that makes the emotional predictions.

As our system observes how people interact, it starts to measure the relationship between emotions and questions asked. It even starts to notice when someone displays a desire to ask a question and then holds back. This is ground-truthing: observing reality to measure effects.

Over time we gather more and more data, proving or disproving our different assumptions. Maybe happiness shows a lower desire for information than the studies suggested.

You might wonder why we bother with the assumptions from the studies in the first place if we’re just going to test them. Well, that’s where the bias comes in: when ground truth diverges from our assumptions, the gap points to bias of some kind.

These biases can come from the assumptions themselves, the data itself, the method of gathering it, the people who gathered it, how we modelled the hypothesis, or even how we analysed it.

By using these indicators, we see bias before it hits our machine learning model. Once we have adapted for bias and offset it with ground truth, we can create a much more accurate model - one that can, over time, innately adapt to growing bias in itself, or highlight it.
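
As a minimal sketch of how that ground-truthing might look in practice (this is not Sesh's actual pipeline: the function names, the rescaling of question rates onto the -1..1 prior scale, and the sample-size threshold are all invented for illustration):

```python
from collections import defaultdict

# Observed interactions: how often each emotion co-occurs with a question.
question_counts = defaultdict(int)
total_counts = defaultdict(int)

def record_interaction(emotion: str, asked_question: bool) -> None:
    """Log one observed interaction for later ground-truthing."""
    total_counts[emotion] += 1
    if asked_question:
        question_counts[emotion] += 1

def divergence_report(priors: dict, min_samples: int = 100) -> dict:
    """Compare observed question rates against study-derived priors.

    The observed rate (0..1) is rescaled to the prior scale (-1..1);
    a large gap flags possible bias to investigate before the prior
    gets anywhere near the prediction model.
    """
    report = {}
    for emotion, prior in priors.items():
        n = total_counts[emotion]
        if n < min_samples:
            continue  # not enough ground truth yet
        observed = 2 * (question_counts[emotion] / n) - 1
        report[emotion] = {
            "prior": prior,
            "observed": observed,
            "gap": observed - prior,
        }
    return report
```

Here `record_interaction` would be called for every observed interaction and `divergence_report(INFO_DESIRE_PRIORS)` reviewed periodically; a persistently large gap for an emotion is the signal to go hunting for where the bias crept in.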

[Figure: Empath Python notebook]

Empiricism and augmented intelligence

We’ve identified empiricism - the seeking of truth - as one of the values of our Empath product. We also identify humanity as another, which means embracing all the fuzzy edges, messiness, emotion and opinions that come with being human.

In building an artificial intelligence that bridges the two, we hold it to a higher standard than most of us hold our leaders and our stars: we demand unbiased proof combined with empathetic understanding.

It turns out that using the scientific method creates a far more effective ethical guide for building an AI product than trying to apply theoretical ethical principles to a problem space.

 

READ PART II: Building an Ethical AI Part II: Empathy 

 

Tags: AI, Ethical AI, Lean startup, Scientific principle
