Governance tips for ecosystems – use your platform as a governance tool


My new post on LinkedIn gives a few tips on governance and on how your special perspective can help you to build a well-organised ecosystem.


AIs need to be accountable when they make choices.

One type of AI software uses neural nets to recognise patterns in data – and it’s increasingly being used by tech firms like Google and IBM. This type of AI is good at spotting patterns, but there is no way to explain why it does so, which is a bit of a problem when the decisions need to be fully accountable and explainable.

I could have called this post ‘One thing you cannot do with AI at the moment’. There are many things that AIs are helping businesses with right now. But if your firm is going to use them then it’s important to know their limitations.

I remember doing my maths homework once and getting low marks even though I got the right answers. The reason I lost marks was that I didn’t show my working out.
Sometimes the way an answer is produced needs to be clear as well.

It is like that with some AI technologies right now. There are types of machine learning AI that are amazing at recognising patterns, but there is no way to explain how they do it.

This lack of explainability can be a real barrier. For example, would you trust a military AI robot armed with machine guns and other weapons if you weren’t sure why it would use them?

Or in medicine, where certain treatments carry their own risks or other costs. Medics need to understand why an AI diagnosis has been made.

Or in law, where early drafts of the EU’s General Data Protection Regulation (GDPR) introduced a “right to explanation” for decisions based on people’s data.

The problem is that for some types of machine learning, called “Deep Learning”, it is inherently difficult to understand how the software makes a decision.

Deep Learning technology uses software that mimics layers and layers of artificial neurons – neural networks. The different layers are taught to recognise different levels of abstraction in the images, sounds or whatever dataset they are trained on.

Lower-level layers recognise simpler features and higher-level layers recognise more complicated structures. A bit like lower-level staff working on the details and higher-level managers dealing with the bigger picture.

Developers train the software by showing it examples of what they want it to recognise; they call this ‘training data’. The layers of the neural network link up in different ways until the inputs and the outputs in the training data line up. That’s what is meant by ‘learning’.
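To make that idea concrete, here is a minimal sketch of my own (not any particular firm’s software): a tiny two-layer neural network, built with nothing but numpy, that learns a simple pattern from four example input/output pairs. The lower layer picks out simple combinations of the inputs, the higher layer combines them into the final answer, and ‘learning’ is just nudging the connection weights until the outputs line up with the training data.

```python
# A tiny two-layer neural network trained on example data (illustration only).
import numpy as np

rng = np.random.default_rng(0)

# "Training data": the inputs and the outputs we want to line up (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of connection weights: a lower layer and a higher layer.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: the lower layer finds simple features,
    # the higher layer combines them into an answer.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # "Learning": nudge the connections so the outputs move
    # towards the outputs in the training data (backpropagation).
    d_output = (y - output) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 += 0.5 * hidden.T @ d_output
    W1 += 0.5 * X.T @ d_hidden

print(np.round(output, 2))  # should end up close to [0, 1, 1, 0]
```

Real deep-learning systems have many more layers and millions of weights, but the principle is the same: the connections are adjusted until the training examples line up, and nothing in that process produces an explanation.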

But neural network AIs are like ‘black boxes’. Yes, it is possible to find out exactly how the neurons are connected up to produce an output from a given input. But a map of those connections does not explain why those specific inputs lead to those specific outputs.

Neural network AIs like Google’s DeepMind are being used to diagnose illnesses. And IBM’s Watson helps firms find patterns in their data and powers chatbots and virtual assistants.

But on its own, a neural network AI cannot justify the patterns it finds. Knowing how the neurons are connected up does not tell us why we should use a pattern. These types of AIs just imitate their training data; they do not explain.

The problem is that lack of accountability and explainability. Some services need proof, provenance or a paper trail. For example, difficult legal rulings or risky medical decisions need some sort of justification before action is taken.

Sometimes transparency is required when making decisions. Or maybe we just need to generate a range of different options.

However, there are some possible solutions. Perhaps a neural network AI cannot tell us how it decides something, but we can give it some operating rules. These could be like the metal cages that shielded production workers from the uncertain movements of early industrial robots: as long as a person did not move into the volume that the robot could move through, they would be safe.

Like safe places to cross a road. Operating rules would be like rules of warfare, ground rules, policies and safety guidelines: structures that limit the extent of decisions when the details of why those decisions are made are not known.
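Here is a rough sketch of that ‘cage’ idea, purely my own illustration with made-up names such as governed_decision and CONFIDENCE_FLOOR: the black box still makes its opaque choice, but explicit operating rules limit what it is allowed to do with it.

```python
# Wrapping a black-box model's decision in explicit operating rules (illustration).
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float

def black_box_model(features: dict) -> Decision:
    # Stand-in for an opaque neural network; we cannot explain its choice.
    score = 0.8 if features.get("signal", 0) > 0.5 else 0.3
    return Decision("approve" if score > 0.5 else "refer", score)

ALLOWED_ACTIONS = {"approve", "refer"}   # the "cage": actions permitted at all
CONFIDENCE_FLOOR = 0.7                   # below this, a person must decide

def governed_decision(features: dict) -> Decision:
    decision = black_box_model(features)
    if decision.action not in ALLOWED_ACTIONS:   # rule 1: never act outside the cage
        return Decision("refer", decision.confidence)
    if decision.confidence < CONFIDENCE_FLOOR:   # rule 2: low confidence goes to a human
        return Decision("refer", decision.confidence)
    return decision

print(governed_decision({"signal": 0.9}))  # Decision(action='approve', confidence=0.8)
print(governed_decision({"signal": 0.1}))  # Decision(action='refer', confidence=0.3)
```

The rules do not explain the model’s reasoning; like the robot cage, they simply bound what it can do.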

A similar idea is to test the AI to understand the structure of the sort of decisions it might make. Sort of the reverse of the first idea. You could use one AI to test another by feeding it huge numbers of problems to get a feel for the responses it would provide.
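A sketch of that probing idea, again purely illustrative with an invented stand-in for the black box: generate a large batch of inputs, collect the responses, and summarise them to get a feel for how the model behaves.

```python
# Probing a black-box model with many generated inputs to map its behaviour.
import random
from collections import Counter

def black_box(age: int, income: float) -> str:
    # Stand-in for the opaque model being tested.
    return "approve" if income > 30000 and age >= 21 else "decline"

random.seed(0)
probes = [(random.randint(18, 80), random.uniform(10000, 100000)) for _ in range(10000)]

overall = Counter(black_box(age, income) for age, income in probes)
print("all probes:", overall)

# Slicing the probes by input range can reveal thresholds or biases,
# even though we still cannot see inside the model itself.
under_21 = Counter(black_box(age, income) for age, income in probes if age < 21)
print("under 21:", under_21)
```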

Another idea is to work the AI in reverse to get an indication of how it operates. Like this picture of an antelope generated by Google’s Deep Dream AI.

The antelope image generated by the AI shows a little about what the AI software considers to be separate objects in the original picture.

For example, the AI recognises that both antelopes are separate from their background – although the horns on the right hand antelope seem to extend and merge into the background.

Also, there is a small vertical line between the legs of the left hand antelope. This seems to be an artefact of the AI software rather than a part of the original photo. And knowing biases like that helps us to understand what an AI might do even if we do not know why.
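For anyone curious what ‘working the AI in reverse’ might look like in code, here is a minimal sketch of my own (not Google’s Deep Dream implementation), using PyTorch: starting from random noise, the input image is nudged by gradient ascent until it strongly excites one chosen unit, revealing the sort of pattern that unit responds to.

```python
# Running a network "in reverse": adjust an input image so it strongly
# excites one chosen unit (activation maximisation, illustration only).
import torch

torch.manual_seed(0)

# Stand-in for a trained network: a tiny convolutional model.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(8, 4, kernel_size=3, padding=1),
)
model.eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)  # start from random noise
optimiser = torch.optim.Adam([image], lr=0.05)

for _ in range(200):
    optimiser.zero_grad()
    activation = model(image)[0, 2].mean()  # how strongly "channel 2" responds
    (-activation).backward()                # gradient ascent on the input image
    optimiser.step()

# `image` now contains the sort of pattern that excites that channel:
# a hint about what the network looks for, even if it cannot explain itself.
```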

But whatever the eventual solution, the fact that some AIs lose marks for not showing their working out highlights that there are many different types of AI, and each has its own strengths and weaknesses.

My new guest blog for Control Shift

Control Shift, the personal data experts, asked me to write a blog post on Tackling the Data Sharing Challenge.

There are many benefits to sharing more data between firms and other organisations, but right now, as a society, we do not know how to do it safely. In the blog I look at some of the opportunities and pitfalls, then suggest a way forward.

Big Data and the Data Protection Act

I contributed comments to the recent Information Commissioner’s Office report on Big Data and Data Protection.

UPDATE: The link above now points to the ICO’s new report, which includes Artificial Intelligence. The older report, with my contribution, is here: big-data-and-data-protection.

My new article in a report by the think tank Reform.

Reform asked me for a short piece on some of the implications of sharing data, including mobile phone data: our society needs to learn how to share personal data safely or we will all lose out.

Big Data Session @ Marketing Week Live

I’ll be speaking about what Big Data can do for marketers at Marketing Week Live on Wednesday.

We’re starting Big Data discovery projects with several firms right now to see how they can really sweat their data assets – so come along for some new ideas and a chat.

I’ll be in DD4 from 12.45 to 13.15.

New big data Business Analytics Strategy group at Nottingham University Business School.

Big data research partners wanted.

We are developing completely new ways to look for patterns in data. Our data scientists uncover patterns. Then we show you which of these patterns are most useful and how to use them to better meet your organisational objectives – and to get better objectives.

From data provenance to analytical discovery, data-led service development and product improvement, high-granularity marketing and sales strategies, big data supply chain and operations strategies, planning additionality and measuring ROI.

We can help you to use Big Data techniques in all the functions of your organisation. You can make strategic decisions, harness your creativity and business experience, monitor and manage operations and do business like no one has ever done before in your sector – because we are focused on discovering entirely new analytical techniques and the analytical strategies for generating value from them.

Typical project components

  • Developing new insight models based on text mining, mobile data, social data or 3rd party data.
  • Assessing your current data assets and requirements for data additives versus your commercial goals.
  • Using the latest Data Science techniques, e.g. machine learning techniques.
  • Getting more from your current data assets to improve products and services.
  • Big Data strategies for supply-side functions as well as the demand-side functions, like marketing & sales.
  • Developing your Business Analytics Strategy – for specific projects and to make your whole organisation more analysis-driven.

We combine the very latest research in Data Science with an intimate understanding of how your business model creates value. Data science uncovers new patterns in your organisational data; our analytical strategies fit them to your business context.

We are working with retailers, marketers, data firms and customer loyalty firms – we want to work with all business sectors and the public sector.

We are signing NDAs right now and there are a few places left on the first round of Analytics Discovery projects.

Use our ground-breaking academic research

Research projects normally start with a mutual NDA and we are more than happy to help you develop marketing content that takes advantage of your participation in developing state of the art business analytics hand-in-hand with ground-breaking academic research.

Get in touch to do something your competitors have never even heard of yet: duncan.shaw@nottingham.ac.uk.