AIs need to be accountable when they make choices.

One type of AI software uses neural nets to recognise patterns in data – and it's increasingly being used by tech firms like Google and IBM. This type of AI is good at spotting patterns, but there is no way to explain why it spots them. That is a problem when the decisions need to be fully accountable and explainable.

I could have called this post 'One thing you cannot do with AI at the moment'. There are many things that AIs are helping businesses with right now. But if your firm is going to use them then it's important to know their limitations.

I remember doing my maths homework once and getting low marks even though I got the right answers. I lost marks because I didn't show my working out.
Sometimes the way that the answer is produced needs to be clear as well.

It is like that with some AI technologies right now. There are types of machine learning AI that are amazing at recognising patterns, but there is no way to explain how they do it.

This lack of explainability can be a real barrier. For example, would you trust a military AI robot armed with machine guns and other weapons if you weren’t sure why it would use them?

Or in medicine, where certain treatments carry their own risks or other costs. Medics need to understand why an AI diagnosis has been made.

Or in law, where early versions of the EU’s General Data Protection Regulation (GDPR) introduce a “right to explanation” for decisions based on people’s data.

The problem is that for some types of machine learning, called “Deep Learning”, it is inherently difficult to understand how the software makes a decision.

Deep Learning technology uses software that mimics layers and layers of artificial neurons – neural networks. The different levels of layers are taught to recognise different levels of abstraction in images, sounds or whatever dataset they are trained with.

Lower level layers recognise simpler things and higher level layers recognise more complicated higher level structures. A bit like lower level staff working on the details and higher level managers dealing with the bigger picture.
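To make that concrete, here is a minimal sketch in Python of what 'layers and layers of artificial neurons' looks like in code. The layer sizes, the ReLU activation and the random weights are all illustrative assumptions, not anything taken from Google's or IBM's systems.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One layer: a weighted sum of its inputs followed by a simple non-linearity."""
    return np.maximum(0.0, inputs @ weights + biases)  # ReLU activation

# Three stacked layers: each works on the output of the layer below it,
# so the higher layers can respond to more abstract combinations of the input.
w1, b1 = rng.normal(size=(64, 32)), np.zeros(32)  # lower layer: raw features
w2, b2 = rng.normal(size=(32, 16)), np.zeros(16)  # middle layer: simple patterns
w3, b3 = rng.normal(size=(16, 2)), np.zeros(2)    # top layer: the final decision

x = rng.normal(size=(1, 64))       # one example input (e.g. pixel features)
hidden1 = layer(x, w1, b1)
hidden2 = layer(hidden1, w2, b2)
output = hidden2 @ w3 + b3         # two raw output scores, one per class
print(output)
```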

Developers train the software by showing it examples of what they want it to recognise; they call this 'training data'. The layers of neural networks link up in different ways until the inputs and the outputs in the training data line up. That's what is meant by 'learning'.
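Here is a hedged sketch of what that 'learning' loop looks like in practice: a tiny, single-layer model whose connection weights are nudged until its outputs line up with the labels in some made-up training data. The toy data, the sigmoid output and the learning rate are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up training data: 200 points in 2D, labelled 1 when x + y > 0, else 0.
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)

w = np.zeros(2)   # the 'connections' that training will adjust
b = 0.0
learning_rate = 0.1

for step in range(500):
    # Forward pass: the model's current guess for every training example.
    guesses = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid output in (0, 1)
    # Update: nudge the connections to shrink the gap between guess and label.
    error = guesses - y
    w -= learning_rate * (X.T @ error) / len(X)
    b -= learning_rate * error.mean()

accuracy = ((guesses > 0.5) == (y == 1)).mean()
print(f"training accuracy: {accuracy:.0%}")   # the inputs and outputs now 'line up'
```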

But neural network AIs are like ‘black boxes’. Yes, it is possible to find out exactly how the neurons are connected up to guess an output from a given input. But a map of these connections does not explain why these specific inputs create these specific outputs.
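You can see that black-box problem for yourself with any off-the-shelf neural network library. The sketch below, using scikit-learn purely as a convenient example, trains a small network and then prints its complete map of connection weights: a map you can read, but not one that explains anything.

```python
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# Train a small network on a toy two-class dataset.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
net.fit(X, y)

# A complete 'map' of how the neurons are connected up...
for i, weights in enumerate(net.coefs_):
    print(f"layer {i} weight matrix, shape {weights.shape}:")
    print(weights.round(2))

# ...yet nothing in those matrices says *why* this point gets this prediction.
print("prediction for [0.5, 0.0]:", net.predict([[0.5, 0.0]]))
```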

Neural network AIs like Google's DeepMind are being used to diagnose illnesses. And IBM's Watson helps firms find patterns in their data and powers chatbots and virtual assistants.

But on its own a neural network AI cannot justify the pattern it finds. Knowing how the neurons are connected up does not tell us why we should use the pattern. These types of AIs just imitate their training data; they do not explain.

The problem is this lack of accountability and explainability. Some services need proof, provenance or a paper trail. For example, difficult legal rulings or risky medical decisions need some sort of justification before action is taken.

Sometimes transparency is required when making decisions. Or maybe we just need to generate a range of different options.

However, there are some possible solutions. Perhaps a neural network AI cannot tell us how it decides something, but we can give it some operating rules. These could be like the metal cages that shielded production workers from the unpredictable movements of early industrial robots: as long as a person did not move into the volume that the robot could move through, they would be safe.

Like safe places to cross a road. Operating rules would be like rules of warfare, ground rules, policy and safety guidelines: structures that limit the extent of decisions when the details of why the decisions are made are not known.
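In code, such operating rules could be as simple as a plain, human-readable rule layer that sits between the black box and the real world. The sketch below is one possible shape for that idea; the rule names, thresholds and stand-in model are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GuardedDecision:
    action: str
    allowed: bool
    reason: str

def guard(model: Callable[[dict], str], case: dict) -> GuardedDecision:
    """Ask the black box for a decision, then apply human-readable operating rules."""
    proposed = model(case)
    # Rule 1: high-risk actions always go to a human, whatever the model says.
    if proposed in {"deny_claim", "administer_treatment"} and case.get("risk", 0) > 0.8:
        return GuardedDecision(proposed, False, "high-risk action needs human sign-off")
    # Rule 2: decisions about minors are never automated.
    if case.get("age", 99) < 18:
        return GuardedDecision(proposed, False, "cases involving minors are not automated")
    return GuardedDecision(proposed, True, "within the agreed operating envelope")

# A stand-in for the black box; in practice this would be the neural network.
fake_model = lambda case: "deny_claim"

print(guard(fake_model, {"age": 45, "risk": 0.9}))
print(guard(fake_model, {"age": 45, "risk": 0.2}))
```

The point of this design is that the rules themselves, unlike the network, can be read, audited and argued about before anything happens.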

A similar idea is to test the AI to understand the structure of what sort of decisions it might make. Sort of the reverse of the first idea. You could use one AI to test another by feeding it huge numbers of problems to get a feel for the responses it would provide.
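Here is a sketch of that probing idea: generate huge numbers of test cases, fire them at the black box, and summarise how it responds. The stand-in model and the features are made up; the point is the shape of the test harness.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(case):
    """Stand-in for the AI under test: returns True (approve) or False (reject)."""
    return case[0] * 0.7 + case[1] * 0.3 > 0.5

# Feed it a large batch of generated cases...
probes = rng.uniform(0, 1, size=(10_000, 2))
decisions = np.array([black_box(p) for p in probes])

# ...and summarise the shape of its behaviour, even though we cannot see inside it.
print(f"approves {decisions.mean():.1%} of random cases")
for name, col in [("feature 0", 0), ("feature 1", 1)]:
    print(f"{name}: mean when approved {probes[decisions, col].mean():.2f}, "
          f"when rejected {probes[~decisions, col].mean():.2f}")
```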

Another idea is to work the AI in reverse to get an indication of how it operates. Like this picture of an antelope generated by Google's DeepDream AI.

The antelope image that was generated by the AI shows a little about what the AI software considers to be separate objects in the original picture.

For example, the AI recognises that both antelopes are separate from their background – although the horns on the right-hand antelope seem to extend and merge into the background.

Also, there is a small vertical line between the legs of the left-hand antelope. This seems to be an artefact of the AI software rather than a part of the original photo. And knowing biases like that helps us to understand what an AI might do, even if we do not know why.
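For the curious, the principle behind this 'working in reverse' can be sketched in a few lines: keep the network's weights fixed and instead adjust the input, step by step, so that a chosen output gets bigger. Real DeepDream images come from large pretrained vision networks; the tiny random network below is just an assumption to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(10, 5)), rng.normal(size=(5, 1))

def forward(x):
    """A tiny fixed network: one hidden layer, one output score."""
    h = np.tanh(x @ W1)
    return (h @ W2).item(), h

x = np.zeros(10)          # start from a blank 'image'
step_size = 0.05
for _ in range(200):
    score, h = forward(x)
    # Gradient of the score with respect to the *input* (chain rule by hand):
    grad_x = W1 @ ((1 - h**2) * W2[:, 0])
    x += step_size * grad_x   # nudge the input to make the chosen score bigger

print("final score:", round(forward(x)[0], 2))
print("input the network 'likes':", x.round(2))
```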

But whatever the eventual solution, the fact that some AIs lose marks for not showing their working out highlights that there are many different types of AI, and each has its own strengths and weaknesses.
