Governance tips for ecosystems – use your platform as a governance tool


My new post on LinkedIn gives a few tips on governance and on how your platform’s special perspective can help you to build a well-organised ecosystem.

AIs need to be accountable when they make choices.

One type of AI software uses neural networks to recognise patterns in data – and it is increasingly being used by tech firms like Google and IBM. This type of AI is good at spotting patterns, but there is no way to explain why it spots them. Which is a bit of a problem when the decisions need to be fully accountable and explainable.

I could have called this post ‘One thing you cannot do with AI at the moment’. There are many things that AIs are helping businesses with right now. But if your firm is going to use them then it’s important to know their limitations.

I remember doing my maths homework once and getting low marks even though I got the right answers. I lost marks because I didn’t show my working out. Sometimes the way that the answer is produced needs to be clear as well.

It is like that with some AI technologies right now. There are types of machine learning AI that are amazing at recognising patterns, but there is no way to explain how they do it.

This lack of explainability can be a real barrier. For example, would you trust a military AI robot armed with machine guns and other weapons if you weren’t sure why it would use them?

Or in medicine, where certain treatments carry their own risks or other costs. Medics need to understand why an AI diagnosis has been made.

Or in law, where early versions of the EU’s General Data Protection Regulation (GDPR) introduced a “right to explanation” for decisions based on people’s data.

The problem is that for some types of machine learning, called “Deep Learning”, it is inherently difficult to understand how the software makes a decision.

Deep Learning technology uses software that mimics layers and layers of artificial neurons – neural networks. The different layers are taught to recognise different levels of abstraction in images, sounds or whatever dataset they are trained on.

Lower level layers recognise simpler things and higher level layers recognise more complicated structures. A bit like lower level staff working on the details while higher level managers deal with the bigger picture.

Developers train the software by showing it examples of what they want it to recognise – they call this ‘training data’. The layers of the neural network link up in different ways until the inputs and the outputs in the training data line up. That is what is meant by ‘learning’.
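As a rough sketch of that training loop, here is a toy example in Python using scikit-learn’s small neural network classifier. The XOR dataset below is hypothetical ‘training data’, chosen only because it is the classic pattern that a network needs a hidden layer to learn:

```python
# A minimal sketch of 'learning from training data' with a tiny neural
# network. The XOR inputs/outputs below stand in for real training data.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]  # example inputs
y = [0, 1, 1, 0]                      # the outputs we want it to reproduce

# One small hidden layer; the 'lbfgs' solver suits tiny datasets like this.
net = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=2000, random_state=0)
net.fit(X, y)  # the 'learning': connections adjust until inputs map to outputs

print(net.predict([[0, 1], [1, 1]]))  # if training converged: [1 0]
```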

But neural network AIs are like ‘black boxes’. Yes, it is possible to find out exactly how the neurons are connected up to produce an output from a given input. But a map of these connections does not explain why these specific inputs create these specific outputs.
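Continuing the toy sketch above makes the point concrete: every learned connection weight is there to inspect, yet no human-readable reason emerges from the numbers:

```python
# Continues the sketch above: the trained network's connections are fully
# visible, but listing them does not explain *why* an input maps to an output.
for i, w in enumerate(net.coefs_):
    print(f"layer {i} weight matrix shape: {w.shape}")
print(net.coefs_[0])  # just a grid of numbers - a map, not an explanation
```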

Neural network AIs like Google’s DeepMind are being used to diagnose illnesses. And IBM’s Watson helps firms find patterns in their data and powers chatbots and virtual assistants.

But on its own a neural network AI cannot justify the pattern it finds. Knowing how the neurons are connected up does not tell us why we should use the pattern. These types of AIs just imitate their training data; they do not explain it.

The problem is this lack of accountability and explainability. Some services need proof, provenance or a paper trail. For example, difficult legal rulings or risky medical decisions need some sort of justification before action is taken.

Sometimes transparency is required when making decisions. Or maybe we just need to generate a range of different options.

However, there are some possible solutions. Perhaps a neural network AI cannot tell us how it decides something, but we can give it some operating rules. These could be like the metal cages that shielded production workers from the uncertain movements of early industrial robots. As long as a person did not move into the volume that the robot could move through, they would be safe.

Like safe places to cross a road. Operating rules would be like rules of warfare, ground rules, policy and safety guidelines – structures that limit the extent of decisions when the details of why the decisions are made are not known.
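A minimal sketch of what such operating rules could look like in software – the model interface, action names and threshold below are all hypothetical:

```python
# A sketch of 'operating rules' wrapped around an opaque model. The rules
# never explain the model's choice; they only limit where it can land.
ALLOWED_ACTIONS = {"recommend_review", "approve"}  # the 'safety cage'

def guarded_decision(model, case):
    action, confidence = model.decide(case)  # the black-box inner decision
    if action not in ALLOWED_ACTIONS:        # rule 1: only whitelisted actions
        return "flag_for_human"
    if confidence < 0.9:                     # rule 2: uncertain cases go to a person
        return "flag_for_human"
    return action
```

The cage does not open the black box; like the fenced-off volume around an early industrial robot, it just makes the unexplained movements safe to be near.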

A similar idea is to test the AI to understand the structure of what sort of decisions it might make – sort of the reverse of the first idea. You could use one AI to test another by feeding it huge numbers of problems to get a feel for the responses that it would provide.
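A sketch of that kind of black-box probing, again assuming a hypothetical `model.predict` interface:

```python
# Probe an opaque model with many synthetic cases and tally its responses,
# to map the *shape* of its decisions without opening the box.
import random
from collections import Counter

def probe(model, n_cases=10_000):
    tally = Counter()
    for _ in range(n_cases):
        case = [random.uniform(0, 1) for _ in range(4)]  # synthetic input
        tally[model.predict(case)] += 1
    return tally  # the distribution of outcomes across the input space
```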

Another idea is to work the AI in reverse to get an indication of how it operates. Like this picture of an antelope generated by Google’s Deep Dream AI.

The antelope image that was generated by the AI shows a little about what the AI software considers to be separate objects in the original picture.

For example, the AI recognises that both antelopes are separate from their background – although the horns on the right-hand antelope seem to extend and merge into the background.

Also, there is a small vertical line between the legs of the left-hand antelope. This seems to be an artefact of the AI software rather than a part of the original photo. And knowing biases like that helps us to understand what an AI might do even if we do not know why.
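Deep Dream’s actual code aside, the underlying idea (sometimes called activation maximisation) can be sketched in a few lines: instead of adjusting the network’s weights to fit an image, you adjust the image to excite a chosen layer, which reveals what that layer treats as an object. The pretrained model and the layer chosen below are illustrative assumptions:

```python
# Working a trained network 'in reverse': gradient ascent on the image.
import torch
import torchvision.models as models

net = models.vgg16(weights="IMAGENET1K_V1").features.eval()
img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise

opt = torch.optim.Adam([img], lr=0.05)
for _ in range(100):
    opt.zero_grad()
    activation = net[:10](img)  # response of an early/middle layer
    loss = -activation.norm()   # minimising -norm = making the layer fire hard
    loss.backward()
    opt.step()
# `img` now shows the kinds of structure this layer has learned to pick out.
```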

But whatever the eventual solution, the fact that some AIs lose marks for not showing their working out highlights that there are many different types of AI, each with its own strengths and weaknesses.

My new guest blog for Control Shift

Control Shift, the personal data experts, asked me to write a blog post on ‘Tackling the Data Sharing Challenge’.

There are many benefits to sharing more data between firms and other organisations but right now, as a society, we do not know how to do it safely.  In the blog I look at some of the opportunities and pitfalls, then I suggest a way forward.

Big Data and the Data Protection Act

I contributed comments to the recent Information Commissioner’s Office report on Big Data and Data Protection.

UPDATE: The above link is the ICO’s new report which includes Artificial Intelligence. The older report with my contribution is here: big-data-and-data-protection.

My new article in a report by the think tank Reform.

Reform asked me for a short piece on some of the implications of sharing data, particularly mobile phone data – Our society needs to learn how to share personal data safely or we will all lose out.

Big Data Session @ Marketing Week Live

I’ll be speaking about what Big Data can do for marketers at Marketing Week Live on Wednesday.

We’re starting Big Data discovery projects with several firms right now to see how they can really sweat their data assets – so come along for some new ideas and a chat.

I’ll be in DD4 from 12.45 to 13.15.

New big data Business Analytics Strategy group at Nottingham University Business School.

Big data research partners wanted.

We are developing completely new ways to look for patterns in data. Our data scientists uncover patterns. Then we show you which of these patterns are most useful and how to use them to better meet your organisational objectives – and to get better objectives.

From data provenance to analytical discovery, data-led service development and product improvement, high-granularity marketing and sales strategies, big data supply chain and operations strategies, planning additionality and measuring ROI.

We can help you to use Big Data techniques in all the functions of your organisation. You can make strategic decisions, harness your creativity and business experience, monitor and manage operations and do business like no one has ever done before in your sector – because we are focused on discovering entirely new analytical techniques and the analytical strategies for generating value from them.

Typical project components

  • Developing new insight models based on text mining, mobile data, social data or 3rd party data.
  • Assessing your current data assets and requirements for data additives versus your commercial goals.
  • Using the latest Data Science techniques, e.g. machine learning techniques.
  • Getting more from your current data assets to improve products and services.
  • Big Data strategies for supply-side functions as well as the demand-side functions, like marketing & sales.
  • Developing your Business Analytics Strategy – for specific projects and to make your whole organisation more analysis-driven.

We combine the very latest research in Data Science with an intimate understanding of how your business model creates value. Data science uncovers new patterns in your organisational data; our analytical strategies fit them to your business context.

We are working with retailers, marketers, data firms and customer loyalty firms – we want to work with all business sectors and the public sector.

We are signing NDAs right now and there are a few places left on the first round of Analytics Discovery projects.

Use our ground-breaking academic research

Research projects normally start with a mutual NDA, and we are more than happy to help you develop marketing content that takes advantage of your participation in developing state-of-the-art business analytics hand-in-hand with ground-breaking academic research.

Get in touch to do something your competitors have never even heard of yet: duncan.shaw@nottingham.ac.uk.

Guardian Media Network live chat from 1-3pm

If you’ve got the time then join me for a live chat from 1-3pm on the relationship between marketing and Big Data – here

Want to measure your data’s value? Open source it.


As firms scale up the use of information in business using Big Data analytics there is an increasing interest in measuring the value of the data. Not just the ROI of the projects that use it, but the asset value of the data itself.

Pete Swabey recently wrote an article in Information Age about the connection between the value of a firm’s data assets and the market value of that firm. He highlighted how most firms do not formally measure the value of their data assets, so their data’s value is not included in their market value. Commonly, they do not treat their data assets appropriately and, even more worryingly according to his article, their insurers do not recognise the value of their data assets either.

However, measuring the value of data is easier said than done because valuation is ‘in the eye of the beholder’, i.e. value is personal and individual. Value is also dynamic because it is affected by personal experience and events, and it depends on personal context, i.e. the user’s past history and future goals. So valuation of a resource depends on a fit between the resource and each individual user’s ever changing personal needs.

Value is not some frozen and unchanging characteristic, it is ‘value-in-use’, and the value of data depends on what you use it for – which makes things even more complicated because you can use and reuse your data assets many times without wearing them out. (Maybe you only ‘wear out’ data when you tell someone something that they already know.)

Swabey’s article describes a few useful ways to start to calculate the value of your information assets, but measuring the value of your data is not straightforward when there are so many unexplored uses for it.

A solution to this problem needs to start with an understanding of how your data could be used and the individuals that could use it. That is, start by mapping out all your uses of your data (by staff role and function); then do the same for your current business partners (since you are in the same supply chain or ecosystem); then think about opening it up even more.

Sounds risky? Well yes, there needs to be a set of safeguards and frameworks of use. Look what happened to TomTom when it shared its data.

But if the value of data is only really apparent when it is used, and its uses are mostly unknown, then the best way to explore this problem is to ask potential users for ideas, i.e. to open source it.

Conclusion

Opening your data up to new users and infomediaries would let you access new ideas for using it. And the value (and risks) of each new idea could then be assessed. This way also brings in highly engaged customers and partners for services that are based on your data assets.

They are looking at you: Google’s telescope versus Facebook’s telescope.

I love the way that Google+ is an even bigger reason to login and give Google a token to link together all my other interactions with Google products. Each of which tells Google a little bit about what I’m interested in.

Google products are not there to get customers to use the web; they’re there to watch customers use it. I work with loyalty card and customer data firms, and Google’s array of products gives a much better cumulative view of each customer’s interests than any single big retailer’s loyalty scheme, even those of Tesco Clubcard or Boots Advantage Card in the UK.

Think about those lines of small radio telescopes that you see in the desert. Astronomers combine the data from each of the individual dishes’ perspectives into one big view – and the wider the dishes are spaced out, the broader the perspectives they have access to and the more insight they can gain.

In digital marketing and ecommerce, insights are about ‘who’ is interested in ‘what’ products and services, ‘when’ and ‘where’ – even if they do not know it themselves.

Loyalty programmes do a great job of helping to figure this out, but their insights are limited by the actual transactions and relationships that generate the data. For example, an insurance company knows a lot about a customer’s ‘insurance life’, but that’s just like looking through a keyhole at the rest of the customer’s life.

Supermarket chains get much broader perspectives than insurance companies because they sell customers things that help them in more diverse parts of their lives. But even that is a small part of their whole lives. Truly indispensable, personal and timely suggestions need to be made in the context of large parts of who each customer is and what they do (and what they want to do) – especially if they do not know it themselves.

Sure, you could buy in data, but bought-in data is generally more indirect and aggregated than the data that comes from your own relationship with that customer. The more removed it is from the particular relationship that you want to influence, the less relevant and understandable it is – bespoke always fits better than off the shelf.

Helen Taylor’s post on Econsultancy got me thinking about how Google has developed a very broad array of perspectives on each customer’s life and how it is using Google+ to glue them together and to dig deeper. The +1 button is the simplest way to tell Google what is interesting. But all of Google+’s features help to generate deeper insights and each one gives a subtly different perspective on customers’ interests:

Streams – tells Google the different things that the member might be interested in. Because the content sits on a timeline, it enables insights about trends at the person and group levels.

Circles – tells Google which members might be interested in these different things. Members can segment by some preset categories (Friends, Family, Acquaintances, Following) and define more categories themselves. Analysis of these user-defined categories will give valuable insights into how members think about their different interests in terms of interest-to-interest associations and higher-level groupings of interests (like analysing the category structures of folksonomies and socially generated tag clouds).

Brands can segment by global versus local because it’s useful for them. So brand partners can also signal to Google what type of members interest them.

Hangouts – helps Google to get the sort of deep insights that only come from closely monitoring small groups of people talking openly. As Helen said, these are panel sessions. The Hangouts On-Air feature enables panel session content to be broadcast, stored and edited. The members and brand partners who choose to view this content are telling Google about their own interests.

The other features of Google+ (and other Google products) are designed to cumulatively generate live and updating ‘process Interest Graphs’, i.e. very wide arrays of perspectives on each member’s life.

Each perspective is a keyhole on a person’s life, and together they give much more diverse, and deeper, insights for Google’s brand partners than a loyalty programme can – maybe more than Hunch or Gravity can as well. So Google can partner with more brands and do so in more actionable ways.

Facebook, LinkedIn and Twitter have very different ‘arrays of telescopes’ to Google, and these give them very different arrays of perspectives from which to look at their members’ interests. First, each of these social networks focuses its arrays on different general aspects of its members’ lives. Although they do overlap – and Google+ seems to overlap the most:

Facebook – entertainment and social life, short-term issues, life curation

LinkedIn – work life, short-term issues and long-term projects, network curation

Twitter  – all your life, immediate issues, bare bones content

Facebook and LinkedIn have much tighter feedback loops between members – in terms of more levels of connection (ways to directly exchange content) and some features that enable actual two-way conversations.

Twitter is a bare bones way to connect with people who you think might have interesting things to say. Mostly it’s about broadcasting, with some ability for loose two-way communication.

Second, each of these social networks uses different features to get the data that gives them their arrays of perspectives. Google uses product-based features outside of its social network, as well as inside it like the others.

The bottom line is that they all try to be really useful in their chosen aspect of their members’ lives, because they know that being really helpful requires clarification, and clarification leads to much deeper customer knowledge than bare transaction data.

Update: Facebook is looking at combining information across its other services here.

Big data + smart phone app = global as well as local, centralised as well as decentralised.

The huge data centres that run in the cloud can now connect to you via an app on your smart phone. This set-up combines scale and power with precision and personalisation.

Firms are starting to make apps that are really useful helpers for shoppers, like this one for shoppers at the DIY chain Lowe’s, covered on Econsultancy.

These apps are really useful because they can work at ‘global’ as well as ‘local’ scales. They have access to a firm’s worth of data and insights – including those of its partners and customers. But they can also serve up exactly what a specific shopper needs at a specific moment. They combine all-time with real-time.

Of course firms need to get past the data fragmentation barriers of big data, on the one hand, and specific customisation requirements, on the other – because each user is a segment-of-one. But phone apps that are powered by big data technology link large-scale resources to individual service moments.
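As a toy sketch of that link – all names and numbers below are hypothetical – a ‘global’ model computed offline from the whole firm’s data can be combined with the shopper’s ‘local’, real-time context at the moment of the request:

```python
# 'All-time plus real-time': a firm-wide statistic computed offline (global)
# is filtered and ranked by the shopper's immediate context (local).
GLOBAL_POPULARITY = {"paint": 0.9, "drill": 0.7, "ladder": 0.4}  # batch job

def suggest(basket, store_stock):
    # Local, real-time signals: what this shopper holds, what this store has.
    candidates = [item for item in GLOBAL_POPULARITY
                  if item in store_stock and item not in basket]
    # Rank the segment-of-one result using the global statistics.
    return sorted(candidates, key=GLOBAL_POPULARITY.get, reverse=True)

print(suggest(basket={"drill"}, store_stock={"paint", "drill", "ladder"}))
```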

The biggest potential benefits of big data come from its ability to act globally as well as locally – it is centralised as well as decentralised.

The implications of this are huge. Firstly, customers need to receive real-time services, i.e. services when they need them, not just where they need them. But a real-time experience also means that they can play around with ideas.

Real-time means feedback, iteration and experimentation. A properly designed mobile app can give them suggestions that they would never have thought of. Or they can test out their ideas and compare what works best.

But most of all, the local provision of global resources means that apps can be guides – life ‘sat navs’ – that help customers through all sections of the sales funnel. But guides need to accompany the customer, so they need to be immediate as well as continuous; that’s why they need to be real-time services.