Friday, May 30, 2014

Crossing the Stream and Reaching the Sky

In the early stages of its economic reform, China chose to "cross a stream by feeling the rocks."

Limited by the expertise and conditions of the time, when China had no statistical infrastructure to provide accurate and reliable measurements, this was the only available path.

In fact, this path was traveled by many nations, including the U.S.
At the beginning of the 20th century, when the field of modern statistics had not yet taken shape, data were neither believable nor reliable even when they existed.  The well-known American writer and humorist Mark Twain once lamented "lies, damned lies, and statistics," pointing out the data quality problem of the time.  Over the past hundred years, statistics developed an international common language and reliable data, establishing a long record of success and broad application in the U.S.  This stage of statistics may be generally called Statistics 1.0.

Feeling the rocks may help one cross a stream, but it would be difficult to land on the moon that way, and more difficult still to create smart cities and an affluent society.  If one could scientifically measure the depth of the stream and build roads and bridges, trial and error might become unnecessary.

The long-term development of society must exit this transitional stage and enter a more scientifically based digital culture, where high-quality data and credible, reliable statistics serve to continuously enhance the efficiency, equity, and sustainability of national policies. At the same time, specialized knowledge must be converted responsibly into practical, useful knowledge that serves the government, enterprises, and the people.

Today, technologies associated with Big Data are advancing rapidly.  A new opportunity has arrived to usher in the Statistics 2.0 era.

Simply stated, Statistics 2.0 elevates the role and technical level of descriptive statistics, extends the theories and methods of mathematical statistics to non-randomly collected data, and expands statistical thinking to include facing the future.

One may observe that in a digital society, whether crossing a stream or reaching the sky, whether governing a nation or living the daily life of ordinary people, what was once "unimaginable" is now "reality."  Driverless cars, drone delivery of packages, and space travel are no longer the stuff of fiction.  Although the data on them that can be analyzed in practical settings are still limited, they are within the realistic vision of Statistics 2.0.

In terms of social development, the U.S. and China are both actively trying to improve people's livelihoods, enhance governance, and protect the environment. A harmonious and prosperous world cannot be achieved without vibrant and sustainable economies in both China and the U.S., and peaceful, mutually beneficial collaborations between the two nations.

Statistics 2.0 can and should play an extremely important role in this evolution.

The WeChat platform Statistics 2.0 will not clog already congested channels with low-quality or duplicative information. Instead, it values new thinking and a shared interest in the study of Statistics 2.0: introducing state-of-the-art developments in the U.S. and China in a simple and timely manner, offering thoughts and discussion on classical issues, exploring innovative applications, and sharing the beauty of the science of data in theory and practice.

WeChat Platform: Statistics 2.0

Not All Data Are Created Equal

Suppose we have data on 60,000 households.  Are they useful for analysis? If we add that the amount of data is very large, like 3 TB or even 30 TB, does it change your answer?

The U.S. government collects monthly data from 60,000 randomly selected households and reports on the national employment situation.  Based on these data, the U.S. unemployment rate is estimated to within a margin of sampling error of about 0.2%.  Important inferences are drawn and policies are made from these statistics about a U.S. economy comprising 120 million households and 310 million individuals.
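
As a rough, back-of-the-envelope illustration of where a margin of error of this size comes from, the sketch below applies the standard formula for the sampling error of a proportion.  The assumptions are mine and purely for simplicity: a simple random sample of about 60,000 respondents and an unemployment rate near 6%; the actual survey uses a more complex household design.

```python
# Back-of-the-envelope check of the ~0.2% margin of sampling error quoted above.
# Assumptions (illustrative only): a simple random sample of about 60,000
# respondents and an unemployment rate near 6%; the real survey design is
# more complex, so this is only a rough approximation.
import math

p_hat = 0.06   # assumed unemployment rate (illustrative)
n = 60_000     # assumed effective sample size (illustrative)

margin_of_error = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"Approximate 95% margin of error: {margin_of_error:.4f}")  # about 0.0019, i.e. ~0.2%
```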

In this case, data for 60,000 households are very useful.

These 60,000 households represent only 0.05% of all households in the U.S.  If they were not randomly selected, the statistics they generate would contain unknown and potentially large biases, and they could not reliably describe the national employment situation.

In this case, data for 60,000 households are not useful at all, regardless of what the file size may be.
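
A minimal simulation sketch can make the contrast between these two cases concrete.  The regions, rates, and population size below are entirely made up (and scaled down from 120 million to 12 million households to keep the example light): a random sample of 60,000 households tracks the true unemployment rate closely, while a convenience sample of the same size drawn from a single low-unemployment region does not.

```python
# Illustrative simulation with made-up numbers: selection matters more than size.
# The population is scaled down to 12 million households to keep the example light.
import numpy as np

rng = np.random.default_rng(0)

n_pop = 12_000_000
# Assign each household to one of three regions with different unemployment rates.
region = rng.choice([0, 1, 2], size=n_pop, p=[0.5, 0.3, 0.2])
rate_by_region = np.array([0.04, 0.07, 0.10])
unemployed = rng.random(n_pop) < rate_by_region[region]

true_rate = unemployed.mean()

# Case 1: 60,000 randomly selected households.
random_idx = rng.choice(n_pop, size=60_000, replace=False)
random_estimate = unemployed[random_idx].mean()

# Case 2: 60,000 households taken only from the low-unemployment region.
convenience_idx = np.flatnonzero(region == 0)[:60_000]
convenience_estimate = unemployed[convenience_idx].mean()

print(f"True rate:              {true_rate:.4f}")             # about 0.061
print(f"Random-sample estimate: {random_estimate:.4f}")       # close to the true rate
print(f"Convenience estimate:   {convenience_estimate:.4f}")  # biased toward 0.04
```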

Suppose further that the 60,000 households are all located in a small city that has only 60,000 households.  In other words, they represent the entire universe of households in that city.  These data are potentially very useful.  Depending on their content and relevance to the question of interest, the usefulness of the data may again range widely between the two extremes.  If the content is relevant and the quality is good, file size may then become an indicator of how useful the data are.

This simple line of reasoning shows that the original question is too incomplete for a direct, satisfactory answer.  We must also consider, for example, how the sample was selected, how well it represents the population under study, and the relevance and quality of the data relative to the specific hypothesis being investigated.

The original question of data usefulness was seldom asked until the Big Data era began around 2000, when electronic data became widely available in massive amounts at relatively low cost.  Before then, data were usually collected because they were needed for a known, specific purpose, such as an exploration to conduct, a hypothesis to test, or a problem to resolve.  Collecting data was costly, so by the time they were collected, they were already considered potentially useful for the intended analysis.

For example, when the nation was mired in the Great Depression, the U.S. government began in the 1930s to collect data from randomly selected households so that it could produce more reliable and timely statistics about unemployment. This practice has continued to this day.

Statisticians initially considered data mining a bad practice.  It was argued that without a prior hypothesis, false or misleading identification of "significant" relationships and patterns is inevitable when one "fishes," "dredges," or "snoops" through data aimlessly.  An analogy is over-interpreting a lottery win: the winner does not necessarily possess any special skill or knowledge about winning, because random chance dictates that someone must eventually win.
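
A small, purely synthetic simulation (a sketch of my own, not from the original argument) shows how aimless fishing produces false positives: when 200 random noise predictors are tested against a random outcome, roughly 5% of them will appear "significant" at the conventional 0.05 level by chance alone.

```python
# Synthetic illustration of data "fishing": every relationship below is pure noise,
# yet roughly 5% of the tests will look "significant" at the 0.05 level by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_obs, n_predictors = 500, 200
outcome = rng.normal(size=n_obs)
predictors = rng.normal(size=(n_obs, n_predictors))

false_positives = 0
for j in range(n_predictors):
    r, p_value = stats.pearsonr(predictors[:, j], outcome)
    if p_value < 0.05:
        false_positives += 1

print(f"'Significant' correlations found by chance: {false_positives} of {n_predictors}")
# Expect roughly 10 (5% of 200), even though no real relationship exists.
```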

Although the argument about false identification remains valid today, it has also been overwhelmed by the abundance of available Big Data, which are frequently collected without design or even structure.  Total dismissal of the data-driven approach forgoes the chance of uncovering hidden, meaningful relationships that have not been, or cannot be, established as a priori hypotheses.  An analogy is the prediction of hereditary disease and the study of potential treatments: after data on the entire human genome are collected, they may be explored and compared for the systematic identification and treatment of specific hereditary diseases.

Not all data are created equal, nor are they equally useful.

Complete and structured data can create dynamic frames that describe an entire population in detail over time, providing valuable information that has never been available in previous statistical systems.  On the other hand, fragmented and unstructured data may not yield any meaningful analysis no matter how large the file size may be.

As problem solving rapidly expands from a hypothesis-driven paradigm to include a data-driven approach, fundamental questions about the usefulness and quality of these data have also grown in importance.  While the question of interest may not be specified a priori, it must still be established a posteriori, after the data are collected, before any analysis is conducted.  We cannot obtain correct answers to questions that do not exist.

How were the samples selected?  How well does the sample represent the universe of inference?  What are the relevance and quality of the data relative to the a posteriori hypothesis of interest?  File size has little to no meaning if the usefulness of the data cannot even be established in the first place.

Ignoring these considerations may lead to the need to update a well-known quote: “Lies, Damned Lies, and Big Data.”