
Hand Writing: Data, Data, Everywhere, But Let’s Just Stop And Think

Surely anyone with the slightest awareness of what’s going on in the world has encountered the phrase ‘big data’. Almost every day the newspapers and television refer to it, and it is ubiquitous on the web. In November, a Google search for the phrase ‘big data’ yielded 1.8 billion hits. Google Trends shows that the rate of searches for the phrase is now about ten times what it was at the start of 2011.

The phrase defies an exact definition: one can define it in absolute terms (so many gigabytes, petabytes, etc) or in relative terms (relative to your computational resources), and in other ways. The obvious way for data to be big is by having many units (e.g., stars in an astronomical database), but it could also be big in terms of the number of variables (e.g., genomic data), the number of times something is observed (e.g., high frequency financial data), or by virtue of its complexity (e.g., the number of potential interactions in a social network).


However one defines it, the point about ‘big data’ is the implied promise—of wonderful discoveries concealed within the huge mass, if only one can tease them out. That this is exactly the same promise that data mining made some twenty years ago is no accident. To a large extent, ‘big data’ is merely a media rebranding of ‘data mining’ (and of ‘business analytics’ in commercial contexts), and the media coining of the phrase ‘big data’ goes some way towards explaining the suddenness of the rise in interest.

Broadly speaking, there are two kinds of use of big data. One merely involves searching, sorting, matching, concatenating, and so on. So, for example, we get directions from Google maps, we learn how far away the next bus is, and we find a shop stocking the item we want. But the other use (and my personal feeling is that more problems are of this kind) involves inference. That is, we don’t actually want to know about the data we have but about data we might have had or might have in the future. What will happen tomorrow? Which medicine will make us better? What is the true value of some attribute? What would have happened had things been different? While computational tools are the keys to the first kind of problem, statistical tools are the keys to the second.
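The distinction can be made concrete with a minimal sketch (the purchase data here are simulated, purely for illustration). Finding the largest value in a dataset is the first kind of use: a pure computation, a fact about these data. A confidence interval for the underlying mean is the second kind: it uses a statistical model to say something about data we might have had.

```python
import math
import random
import statistics

random.seed(0)

# Simulated data: 200 hypothetical basket values (illustrative only).
purchases = [random.gauss(50, 10) for _ in range(200)]

# First kind of use: pure computation on the data we have.
# Sorting, matching and finding a maximum need no model of where
# the data came from.
largest = max(purchases)

# Second kind of use: inference about data we might have had.
# A 95% confidence interval for the underlying mean reaches beyond
# the sample via a statistical model (here, a normal approximation).
mean = statistics.mean(purchases)
sem = statistics.stdev(purchases) / math.sqrt(len(purchases))
ci = (mean - 1.96 * sem, mean + 1.96 * sem)

print(f"largest observed purchase: {largest:.2f}")
print(f"95% CI for the true mean:  ({ci[0]:.2f}, {ci[1]:.2f})")
```

Note that the maximum is exact and final, while the interval is a statement of uncertainty: it is the inferential step, not the computation, that requires statistical thinking.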

If big data is another take on data mining (looking at it from the resources end, rather than the tool end) then perhaps we can learn from the data mining experience. We might suspect, for example, that interesting and valuable discoveries will be few and far between, that many discoveries will turn out to be uninteresting, or obvious, or already well-known, and that most will be explainable by data errors. For example, big data sets are often accumulated as a side-effect of some other process—calculating how much to charge for a basket of supermarket purchases, deciding what prescription is appropriate for each patient, marking the exams of individual students—so we must be wary of issues such as selection bias. Statisticians are very aware of such things, but others are not.
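The selection-bias point is easy to demonstrate with a toy simulation (all numbers here are invented for illustration). Suppose supermarket data are accumulated as a side-effect of a loyalty-card scheme, so only cardholders’ baskets are recorded, and cardholders happen to spend more than average:

```python
import random
import statistics

random.seed(1)

# Hypothetical population: 30% hold a loyalty card and spend more on
# average; 70% do not. The figures are invented for illustration.
cardholders = [random.gauss(70, 15) for _ in range(3_000)]
others = [random.gauss(40, 15) for _ in range(7_000)]
population = cardholders + others

# The dataset accumulated as a side-effect of the scheme records
# only cardholders' baskets -- a selection effect, not a random sample.
recorded = cardholders

true_mean = statistics.mean(population)
biased_mean = statistics.mean(recorded)

print(f"true average basket:   {true_mean:.1f}")
print(f"recorded-data average: {biased_mean:.1f}")  # overstates spending
```

However large the recorded dataset grows, the estimate stays biased: more data of the same selected kind does not cure a selection effect.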

As far as errors are concerned, a critical thing about big data is that the computer is a necessary intermediary: the only way you can look at the data is via plots, models, and diagnostics. You cannot examine a massive data set point by point. If data themselves are one step in a mapping from the phenomenon being studied, then looking at those data through the window of the computer is yet another step. No wonder errors and misunderstandings creep in.

Moreover, while there is no doubt that big data opens up new possibilities for discovery, that does not mean that ‘small data’ are redundant. Indeed, I might conjecture an informal theorem: the number of data sets of size n is inversely related to n. There will be vastly more small data sets than big ones, so we should expect proportionately more discoveries to emerge from small data sets.

Neither must we forget that data and information are not the same: it is possible to be data rich but information poor. The manure heap theorem is of relevance here. This mistaken theorem says that the probability of finding a gold coin in a heap of manure tends to 1 as the size of the heap tends towards infinity. Several times, after I’ve given talks about the potential of big data (stressing the need for effective tools, and describing the pitfalls outlined above), I have had people, typically from the commercial world, approach me to say that they’ve employed researchers to study their massive data sets, but to no avail: no useful information has been found.

Finally, the bottom line: to have any hope of extracting anything useful from big data, and to overcome the pitfalls outlined above, effective inferential skills are vital. That is, at the heart of extracting value from big data lies statistics.

By David J Hand

David Hand is Senior Research Investigator and Emeritus Professor of Mathematics at Imperial College, London, and Chief Scientific Advisor to Winton Capital Management. He is a Fellow of the British Academy, and a recipient of the Guy Medal of the Royal Statistical Society. He has served (twice) as President of the Royal Statistical Society, and is on the Board of the UK Statistics Authority. He has published 300 scientific papers and 25 books.

The original post can be seen in the Institute of Mathematical Statistics Bulletin, January/February.

About CloudTweaks

Established in 2009, CloudTweaks is recognized as one of the leading authorities in connected technology information and services.

We deliver thought leadership insights, relevant and timely news stories, and unbiased benchmark reporting, and we offer green/cleantech learning and consultative services around the world.

Our vision is to create awareness and to help find innovative ways to connect our planet in a positive eco-friendly manner.

In the meantime, you may connect with CloudTweaks by following and sharing our resources.
