Common threads running through three recent Big Data roundtables

[Image: the Round Table of the Holy Grail, detail from BnF MS 120 f.524v, 14th century. Source: Wikimedia Commons]

In the last few weeks I’ve been to three roundtables full of experts on Big Data and Big Data analytics, along with business users of the insights from such analytics. There were also a few Members of Parliament and senior industry regulators.

The first roundtable was at Econsultancy, the second was at the think tank Reform and the third was a Personal Big Data roundtable. These roundtables addressed three very different aspects of Big Data, but some common threads stretched through all of them.

The first common thread was the richness and variety of the topics that we discussed. Big Data is a new and emerging set of technologies, and right now we are at Big Data 1.0, not Big Data 2.0. The discussion at these roundtables, just like Big Data articles on the web, was as unstructured as Big Data itself. When a structure forms we will call it Big Data 2.0.

Some people were focused on the hardware; some liked to talk about the data it handles. There was a huge amount of discussion about ‘data’ and much less about what data or which data. Indeed there was a general thirst for examples, case studies and illustrations of uses of Big Data.

There were also lots and lots of metaphors, like ‘Data is the new oil’ or my own biased favourite, ‘Big Data is like the minute-to-minute personal diary of everyone and everything’. When it is unclear what ‘something’ is – a something that is emerging as we develop how we use these technologies – then metaphors are very useful. They help us to generate potential forms that we can check and test for usefulness.

Even the experts have different points of view and many questions about what form, or forms, Big Data will take. However, there are a few rough characteristics starting to take shape and this is what I hope to describe here.

The second thread that ran through these roundtables was that there was more talk of the hardware and the data themselves than of the actual services that Big Data analytics could create.

It is relatively easy to deconstruct a service after it has proved highly popular. But thinking up that highly popular service in the first place is very hard. Right now we have some new hardware, access to vast amounts of raw material data and a complicated range of analytical tools but it is unclear how to combine all these into specific configurations that produce the ‘killer apps’.

One way around this might be to start out with some commonly valued objectives and work backwards to try and connect them to the outputs that we know that our new analytical techniques can produce.

For example, both government and industry are perennially keen to [1] increase services or sales, and [2] make savings. And we know that a key role of these emerging analytical techniques is to help us accurately understand the needs of people – on a more personal and individual basis.

So we should be looking for analytical techniques that suggest the unmet needs of citizens and customers – because knowing unmet needs helps us to increase services or sales. And more precisely tailoring the services that we already provide could reduce wasted resources and make savings.

These analytical techniques are based on analysing the individual interest graphs and contexts of people’s lives, and they are the foundation of Big Data services.
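To make that concrete, here is a minimal sketch of the kind of analysis this points towards. It is illustrative only: the people, the interests and the catalogue of served interests are all invented, and a real system would infer an interest graph from behavioural data at far greater scale.

```python
# Toy sketch: given a simple "interest graph" (people linked to the
# topics they engage with) and a catalogue of interests our existing
# services already cater for, surface widely shared interests that no
# current service addresses. All names and data are hypothetical.
from collections import Counter

# Hypothetical interest graph: person -> interests inferred from their data.
interest_graph = {
    "alice": {"cycling", "home energy", "local news"},
    "bob": {"cycling", "childcare", "home energy"},
    "carol": {"childcare", "local news", "allotments"},
}

# Interests that existing services already cater for.
served_interests = {"local news", "home energy"}

# Count how many people hold each interest we do not yet serve.
unmet = Counter(
    interest
    for interests in interest_graph.values()
    for interest in interests
    if interest not in served_interests
)

# Rank unmet needs by how widely they are shared: candidates for new
# services (to grow sales) or for retargeting resources (to make savings).
for interest, count in unmet.most_common():
    print(f"{interest}: wanted by {count} people, not yet served")
```

Even at toy scale, the shape of the output – widely shared interests that nothing currently serves – is exactly the raw material for the ‘increase services or sales, make savings’ objectives above.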

The third thread was about balancing the societal and individual privacy aspects of Big Data. Economic growth from new Big Data firms and services depends on consumer trust. But these services depend on organisations sharing consumers’ data between themselves.

Few organisations see enough of a person’s life to understand their needs very deeply. But sharing data, for good or for profit, generates questions like: How do I control my data? How do I share in the value that it is used to create? And how do I fix it when my data is hacked or stolen?

The Personal Big Data roundtable in March brought together some of the leading experts in data analytics, retail, healthcare, financial services and some key industry regulators. These questions were at the top of our agenda but they were also touched on in the other two roundtables.

The point is this: consumer trust depends on regulation, which depends on legislation, which in turn depends on policy. But current regulation, legislation and policy are inadequate for handling the opportunities and dangers that Big Data presents to society – they are not so much out of date as technologically irrelevant.

From my research I am starting to see how the regulation and legislation could be developed in order to support the societal benefits that we hope to gain. To do this we need to help legislators and regulators to start this change process – stories and case studies will help but there are no case studies for some of the more complicated inter-relationships and business models that are yet to emerge.

The fourth thread concerned the people – the citizens and customers – that we are describing increasingly accurately with these new technologies. People do not only vary in their needs for different services, which is why we analyse their data. They also vary in their attitudes to privacy: people sit on a spectrum of sensitivity, with some not caring about data privacy at all and others being highly sensitive.

Also, people rarely read through user agreements – they have neither the time nor the training. For example, when they download an app they may be giving the app firm access to the content of their mobile phone, and to their location in real time, 24 hours a day.

But most interestingly of all – and it is easy to lose sight of this when you spend a lot of time surrounded by experts – most people are unaware of just how the technologies that I talk about here are changing their personal and working lives right now. There is a huge need for education and awareness if people are to get the most out of these new services and use them safely.

There are two main implications from the discussions that I have had the pleasure of being part of in the last few weeks.

The first is that there seem to be some vacant niches in the Big Data ecosystem, to use another metaphor. There are some unfilled roles: a broker that would manage a person’s data and deal with firms on their behalf; a defender of a person against harm; a fixer of such harm; an educator that teaches people what they need to know about our unfolding Big Data society; a new form of regulator to uphold the public interest; or even a third-party ‘dating agency’ for firms and their data.

These roles need not all exist within the same organisation. Indeed, some may be taken up by regulators, and others may fall to multiple competing third parties rather than a single organisation.

The second implication is that there is a huge and complex gap between the raw material Big Data on the one hand and, on the other, the consumer needs that it could be used to satisfy. We know that we have lots of data, and we know that we can buy in, swap or access more data. We know that we have some sexy, fast, new hardware and unbelievably clever analytical software. We even know that we want to hit the same old organisational targets of doing more with less.

But we do not know which particular data to use; which particular software to install and learn to use; which specific way of using that software; which analytical services to produce out of all those that we could; which consumer needs to target; even which consumers to target. The huge and complex gap is made up of all the dependencies in the last sentence, and we are only now starting to come up with Analytical Strategies that can bridge it.
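One way to feel the size of that gap is to treat each of those open questions as a choice and count the configurations. A back-of-the-envelope sketch, with numbers invented purely for illustration:

```python
# Each open question in the chain multiplies the space of possible
# configurations between raw data and satisfied consumer needs.
# All the counts below are invented for illustration.
from math import prod

choices = {
    "datasets to use": 20,
    "software packages to install and learn": 5,
    "ways of using each package": 10,
    "analytical services to produce": 15,
    "consumer needs to target": 25,
    "consumer segments to target": 8,
}

total = prod(choices.values())
print(f"Roughly {total:,} possible configurations to choose between")
```

An Analytical Strategy is, in effect, a principled way of pruning that space rather than searching it exhaustively.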
