Saturday, August 23, 2014

2014 Workshop on Big Data and Urban Informatics

After more than a year of preparation, the Workshop on Big Data and Urban Informatics was held at the University of Illinois at Chicago on August 11-12, 2014.

More than 150 people from at least 10 countries (Australia, Canada, China, Greece, Israel, Italy, Japan, Portugal, the United Kingdom, and the U.S.) attended the workshop, which was sponsored by the National Science Foundation.

Piyushimita (Vonu) Thakuriah, co-chair of the workshop, reported on the funding of the Urban Big Data Center at the University of Glasgow in Scotland (http://bit.ly/1kXG2Uh).  Its mission is to “support research for improved understanding of urban challenges and to provide data, technology and services to manage, make policy, and innovate in cities.”  The Urban Big Data Center partners with five other universities, including the University of Illinois at Chicago.  Vonu, a transportation expert, is the director of the center.

In the course of two full days, a total of 68 excellent presentations were made, far exceeding the organizers’ expectations of a year ago.  The papers will be posted on the web in the near future.

Two luncheon keynote speakers highlighted the workshop.  

Carlo Ratti presented the state-of-the-art work of the MIT SENSEable City Lab, which specializes in the deployment of sensors and hand-held electronics to study the environment.  Since conventional measures of air quality tend to be collected at stationary locations, they do not always represent the exposure of a mobile individual.  In one project titled “One Country, Two Lungs” (http://bit.ly/1nbSBXi), a team of human probes traveled between Shenzhen and Hong Kong to detect urban air pollution.  The video revealed the divisions in atmospheric quality and individual exposure between the two cities.

Paul Waddell of the University of California at Berkeley presented his work on urban simulation and dynamic 3-D visualization of land use and transportation.  Some impressive images of his work can be found at http://bit.ly/1rn9hmj.  His video and examples reminded me of their potential applicability for creating the “Three Districts and Four Lines” in China’s National Urbanization Plan.  I also learned about a somewhat similar set of products from China’s supermap.com, a Geographic Information System software company based in Beijing.

One of the 68 presentations described the use of smart card data to study commuting patterns and volumes in Beijing subways during rush hours.  Another compared the characteristics of big data and statistics and raised the question of whether big data is a supplement or a substitute for statistics.

The issue of data quality was seldom volunteered in the sessions, but questions about it came up frequently.  Whether they called it editing, filtering, cleaning, scrubbing, imputing, curating, or re-structuring, it was clear that some presenters spent an enormous amount of time and effort just getting the data ready for very basic use.

Perhaps data quality is considered secondary in exploratory work.  However, there are good-quality big data and bad-quality big data.  When other options are available, spending too much time and effort on bad-quality big data seems unwise because it is unlikely to serve any practical purpose in the future.

Few presentations discussed the importance of data structure, whether built in by design or created through metadata.  Structured data contain far more potential information content than unstructured data and allow more efficient information extraction, especially when they can be linked across multiple sources.

For the purpose of governance, I was somewhat surprised that the use of administrative records had not yet caught on at this workshop.  Accessibility and confidentiality appeared to be the barriers.  It would seem helpful for future workshops to include city administrators and public officials, to help bridge the gap between research and the practical needs of day-to-day operations.

Nations and cities share a common goal in urban planning and urban informatics – improving the quality of city life and service delivery to constituents and businesses alike.  On the other hand, there are drastic differences in their current standing and approach.

China is experiencing the largest human migration in history.  It has established goals and direction for urban development, but has little reliable, quantitative research or experience to support and execute its plans.  The West is transitioning from its century-old urban living to a future that is filled with exciting creativity and energy, but does not seem to have as clear a vision or direction.

Confidentiality is an issue that contrasts sharply between China and the West.  The Chinese plans show a strong commitment to collecting and merging linkable individual records extensively.  If implemented successfully, this will generate an unprecedented amount of detailed information that can also be abused and misused.  The same approach would likely face much scrutiny and opposition in the West, which has to consider less reliable but more costly alternatives to meet the same needs.

There is perhaps no absolute right or wrong approach to these issues.  The workshop and the international community being created offer a valuable opportunity to observe, discuss, and make comparisons in many globally common topics. 

Selected papers from the workshop will now undergo additional peer review.  They will be published in an edited volume titled “See Cities Through Big Data – Research, Methods and Applications in Urban Informatics.”

Sunday, August 3, 2014

Smoking Statistics in the U.S. and China

The U.S. Surgeon General released a landmark report on smoking and health in 1964, concluding that smoking caused lung cancer.  At that time, smoking was at its peak in the U.S. – more than half of the men and nearly one-third of the women were reported to be smokers. 

The U.S. Surgeon General released another report [1] in January this year, titled “The Health Consequences of Smoking – 50 Years of Progress.”

A time plot based on the recent report [2] shows the trend of one statistic – adult per capita cigarette consumption – for the period 1900-2012.  It reveals the rise of smoking in the U.S. in the first half of the 20th century, coinciding with the Great Depression and two world wars, when the government supplied cigarettes as rations to soldiers.  There has been a steady decline over the last 50 years.

When the 1964 report was released, the average American adult was smoking more than 4,200 cigarettes a year.  Today it is fewer than 1,300.  About 18% of Americans smoked in 2012, down from 42% overall in 1964.  The difference between male and female smokers is relatively small – men at 20% and women at 16%.  According to a 2013 Gallup poll [3], 95% of the American public believed that smoking is very or somewhat harmful, compared with only 44% of Americans who believed that smoking causes cancer in 1958.

After the release of the 1964 report, Congress required all cigarette packages to carry a health warning label in 1965.  Cigarette advertising on television and radio was banned effective in 1970.  Taxes on cigarettes were raised; treatments for nicotine addiction were introduced; the non-smokers’ rights movement started.  Together, laws, regulations, public education, treatment, taxation, and community efforts have all played an important role in transforming a national habit into a recognized threat to human health and quality of life over the last 50 years.  That this could happen in my lifetime was beyond my wildest imagination.

Statistics has been at the center of this enormous social change from the beginning of the smoking and health issue. 

As early as 1928, statistical data began to appear showing a higher proportion of heavy smokers among lung cancer patients [4].  A 10-member advisory committee prepared the 1964 report, spending over a year, with the help of more than 150 consultants, reviewing more than 7,000 scientific articles.  By design, the committee included five non-smokers and five smokers, representing disciplines in medicine, surgery, pharmacology, and STATISTICS.  The lone statistician was William G. Cochran, a smoker who was also a founding member of the Statistics Department at Harvard University and author of two classic books, “Experimental Designs” and “Sampling Techniques.”

During the past 50 years, an estimated 21 million Americans have died because of smoking, including nearly 2.5 million non-smokers due to second-hand smoke and 100,000 babies due to parental smoking. 

There are still about 42 million adult smokers and 3.5 million middle and high school students who smoke cigarettes in the U.S. today.  Interestingly, Asian Americans have the lowest smoking rate (11%) among all racial groups in the U.S.

China agreed to join the World Health Organization Framework Convention on Tobacco Control in 2003.  It reported [5] 356 million smokers in 2010, about 28% of its adult population and practically unchanged from the 2002 level.  The gender difference was remarkable – 340 million male smokers (96%) and 16 million female smokers (4%).  About 1.2 million people die from smoking in China each year.  Among the more than 900 million non-smokers in China, about 738 million, including 182 million children, are exposed to second-hand smoke.  Only 20% of Chinese adults reportedly believed that smoking causes cancer in 2010 [6].

More detailed historical records on smoking in China are either inconsistent or fragmented.  One source outside of China [7] suggested that there were 281 million Chinese smokers in 2012, an increase of 100 million smokers since 1980.

China has been stumbling in its efforts to control smoking.

According to a 2013 survey by the Chinese Association on Tobacco Control [8], 50.2% of male school teachers were smokers, as were 47.3% of male doctors and 61% of male public servants.  Given such high rates among people in these influential roles, there is concern and skepticism about how effectively tobacco control can be implemented or enforced.

Coupled with the institutional issues of its tobacco industry, China has been criticized for ineffective tobacco control.  China is the world’s largest tobacco producer and consumer, and its state-owned monopoly, China National Tobacco Corporation, is the largest company of its kind in the world.  While some American tobacco companies may be larger, they are not state-owned.

Nonetheless, the Chinese government has enacted a number of measures to restrict smoking in recent years.  The Ministry of Health took the lead in banning smoking in the medical and healthcare systems in 2009.  Smoking in indoor public spaces such as restaurants, hotels, and public transportation was banned beginning in 2011.

According to the Chinese Tobacco Control Program (2012-2015) [9,10], China will ban cigarette advertising, marketing and sponsorship, setting a goal of reducing the smoking rate from 28.1% in 2010 to 25%.

Smoking is a social issue common to both the U.S. and China. 

Statistics facilitates understanding of the status and implications of smoking, and provides advice, assistance, and guidance for governance.  More statistics can certainly be cited about the ill effects of smoking in both nations.  In the end, it is the collective will and wisdom of each nation that will determine the ultimate course of action.

REFERENCES

[1] U.S. Department of Health and Human Services. (2014). The Health Consequences of Smoking – 50 Years of Progress: A Report of the Surgeon General.  Retrieved from http://www.surgeongeneral.gov/library/reports/50-years-of-progress/full-report.pdf.
[2] Ferdman, Roberto. (2014). The young and poor are keeping big American tobacco alive.  The Washington Post.  Retrieved from http://www.washingtonpost.com/blogs/wonkblog/wp/2014/07/16/the-young-and-poor-are-keeping-the-u-s-tobacco-industry-alive/.
[3] Gallup Poll. Tobacco and Smoking. Retrieved from http://www.gallup.com/poll/1717/tobacco-smoking.aspx.
[4] National Library of Medicine. Profiles in Science. The Reports of the Surgeon General.  Retrieved from http://profiles.nlm.nih.gov/ps/retrieve/Narrative/NN/p-nid/58.
[5] The Central People’s Government of the People’s Republic of China. (2011, January 6). Tobacco-using population remains high and is not declining; smokers still number over 300 million.  Retrieved from http://www.gov.cn/jrzg/2011-01/06/content_1779597.htm.
[6] Xinhuanet.com. (2011, May 2). New smoking ban effective in China. Retrieved from  http://news.xinhuanet.com/english2010/video/2011-05/02/c_13855260.htm.
[7] Qin, Amy. (2014, January 9). Smoking Prevalence Steady in China, but Numbers Rise. The New York Times. Retrieved from http://sinosphere.blogs.nytimes.com/2014/01/09/smoking-prevalence-steady-in-china-but-numbers-rise/?_php=true&_type=blogs&_r=0.
[8] China News. (2013, December 31). Survey finds over 60% of male public servants smoke; half never quit.  Retrieved from http://www.chinanews.com/sh/2013/12-31/5680798.shtml.
[9] The Chinese Ministry of Health. 2013 Report on Tobacco Control in China – Total Prohibition of Tobacco Advertising, Marketing and Sponsorship. Retrieved from http://www.moh.gov.cn/ewebeditor/uploadfile/2013/05/20130531132109426.pdf.
[10] China Women’s Federation News. Banning Tobacco Advertising Cannot be Just Paper Planning. Retrieved from http://acwf.people.com.cn/n/2013/0603/c99013-21712571.html.

Friday, May 30, 2014

Crossing the Stream and Reaching the Sky

In the early stages of its economic reform, China chose to "cross a stream by feeling the rocks."

Limited by the expertise and conditions of the time, when China had no statistical infrastructure to provide accurate and reliable measurements, this was the only available path.

In fact, this path was traveled by many nations, including the U.S.

At the beginning of the 20th century, when the field of modern statistics had not yet taken shape, data were not believable or reliable even when they existed.  Well-known American writer and humorist Mark Twain once lamented about “lies, damned lies, and statistics,” pointing out the data quality problem of the time.  During the past hundred years, statistics developed into a common international language built on reliable data, establishing a long record of success with broad areas of application in the U.S.  This stage of statistics may generally be called Statistics 1.0.

Feeling the rocks may help one cross a stream, but it would be difficult to land on the moon that way, and more difficult still to create smart cities and an affluent society.  If one could scientifically measure the depth of the stream and build roads and bridges, trial and error might be unnecessary.

The long-term development of society must exit this transitional stage and enter a more scientifically based digital culture, where high-quality data and credible, reliable statistics serve to continuously enhance the efficiency, equity, and sustainability of national policies.  At the same time, specialized knowledge must be responsibly converted into practical, useful knowledge that serves the government, enterprises, and the people.

Today, technologies associated with Big Data are advancing rapidly.  A new opportunity has arrived to usher in the Statistics 2.0 era.

Simply stated, Statistics 2.0 elevates the role and technical level of descriptive statistics, extends the theories and methods of mathematical statistics to non-randomly collected data, and expands statistical thinking to include facing the future.

One may observe that in a digital society, whether crossing a stream or reaching the sky, and from the governance of a nation to the daily life of ordinary people, what was once “unimaginable” is now “reality.”  Driverless cars, drone delivery of packages, and space travel are no longer the stuff of fiction.  Although the data about them that can be analyzed in practical settings are still limited, they are within the realistic vision of Statistics 2.0.

In terms of social development, the U.S. and China are actively trying to improve people’s livelihood, enhance governance, and improve the environment. A harmonious and prosperous world cannot be achieved without vibrant and sustainable economies in both China and the U.S., and peaceful, mutually beneficial collaborations between the nations.

Statistics 2.0 can and should play an extremely important role in this evolution.

The WeChat platform Statistics 2.0 will not clog already congested channels with low-quality or duplicative information.  Instead, it values new thinking: sharing common interest in the study of Statistics 2.0, introducing state-of-the-art developments in the U.S. and China in a simple and timely manner, offering thoughts and discussion on classical issues, exploring innovative applications, and sharing the beauty of the science of data in theory and practice.

WeChat Platform: Statistics 2.0

Not All Data are Created Equal

Suppose we have data on 60,000 households.  Are they useful for analysis? If we add that the amount of data is very large, like 3 TB or even 30 TB, does it change your answer?

The U.S. government collects monthly data from 60,000 randomly selected households and reports on the national employment situation.  Based on these data, the U.S. unemployment rate is estimated to within a margin of sampling error of about 0.2%.  Important inferences are drawn and policies are made from these statistics about a U.S. economy comprising 120 million households and 310 million individuals.
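As a rough check on that figure, here is a minimal back-of-the-envelope sketch in Python.  It assumes a simple random sample and an unemployment rate near 6%; the actual Current Population Survey uses a more complex design, so this is an approximation rather than the official calculation.

```python
import math

# Back-of-the-envelope 95% margin of error for an estimated proportion
# from a simple random sample (assumed values, not the official method).
n = 60_000                                 # sampled households
p = 0.06                                   # assumed unemployment rate
moe = 1.96 * math.sqrt(p * (1 - p) / n)    # normal-approximation margin
print(f"95% margin of error: {moe:.2%}")   # about 0.19 percentage points
```

The result, roughly 0.19%, is consistent with the 0.2% cited above.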

In this case, data for 60,000 households are very useful.

These 60,000 households represent only 0.05% of all the households in the U.S.  If they were not randomly selected, the statistics they generate would contain unknown and potentially large biases.  They would not be reliable for describing the national employment situation.
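A small simulation can make the danger concrete.  The numbers below – landline ownership and the differing unemployment rates – are entirely hypothetical, chosen only to show how a non-random convenience sample drifts away from the truth while a random sample of the same size does not:

```python
import random

random.seed(7)

# Hypothetical population of 1,000,000 households: unemployment is rarer
# among landline households (4%) than among others (11%), so the overall
# rate is about 0.7 * 4% + 0.3 * 11% = 6.1%.
population = []
for _ in range(1_000_000):
    landline = random.random() < 0.7
    unemployed = random.random() < (0.04 if landline else 0.11)
    population.append((landline, unemployed))

def rate(households):
    return sum(u for _, u in households) / len(households)

srs = random.sample(population, 60_000)                 # random sample
phone_book = [h for h in population if h[0]][:60_000]   # landline-only sample

print(f"True rate:         {rate(population):.3f}")
print(f"Random sample:     {rate(srs):.3f}")          # close to the truth
print(f"Non-random sample: {rate(phone_book):.3f}")   # biased low, near 0.04
```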

In this case, data for 60,000 households are not useful at all, regardless of what the file size may be.

Suppose further that the 60,000 households are all located in a small city that has only 60,000 households.  In other words, they represent the entire universe of households in the city.  These data are potentially very useful.  Depending on their content and relevance to the question of interest, the usefulness of the data may again range widely between the two extremes.  If the content is relevant and the quality is good, file size may then become an indicator of the degree of usefulness of the data.

This simple line of reasoning shows that the original question is too incomplete for a direct, satisfactory answer.  We must also consider, for example, the sample selection method, how well the sample represents the population under study, and the relevance and quality of the data relative to the specific hypothesis being investigated.

The original question of data usefulness was seldom asked until the Big Data era began around 2000, when electronic data became widely available in massive amounts at relatively low cost.  Before then, data were usually collected because they were needed for a known, specific purpose, such as an exploration to conduct, a hypothesis to test, or a problem to resolve.  It was costly to collect data.  By the time they were collected, they were already considered potentially useful for the intended analysis.

For example, when the nation was mired in the Great Depression, the U.S. government began to collect data from randomly selected households in the 1930s so that it could produce more reliable and timely statistics about unemployment. This practice has continued to this date.

Statisticians initially considered data mining to be a bad practice.  It was argued that, without a prior hypothesis, false or misleading identification of “significant” relationships and patterns is inevitable when one aimlessly “fishes,” “dredges,” or “snoops” through data.  An analogy is the over-interpretation of a person winning a lottery: not necessarily because the person possesses any special skill or knowledge about winning, but because random chance dictates that some person(s) must eventually win.
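A short sketch of the “dredging” problem, using simulated noise rather than any real dataset: when 1,000 unrelated predictors are each tested at the 5% level, roughly 50 of them will appear “significant” purely by chance.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)

# Pure noise: an outcome and 1,000 candidate predictors that have no
# real relationship to it.
n_obs, n_features = 100, 1_000
y = rng.normal(size=n_obs)
X = rng.normal(size=(n_obs, n_features))

# Test every predictor at the 5% level; with no prior hypothesis,
# about 5% of the tests "succeed" by chance alone.
false_hits = sum(
    pearsonr(X[:, j], y)[1] < 0.05 for j in range(n_features)
)
print(f"Spurious 'significant' predictors: {false_hits} of {n_features}")
```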

Although the argument about false identification remains valid today, it has been overwhelmed by the abundance of available Big Data, which are frequently collected without design or even structure.  Total dismissal of the data-driven approach forgoes the chance of uncovering hidden, meaningful relationships that have not been or cannot be established as a priori hypotheses.  An analogy is the prediction of hereditary disease and the study of potential treatments: after data on the entire human genome are collected, they may be explored and compared for the systematic identification and treatment of specific hereditary diseases.

Not all data are created equal, nor are they equally useful.

Complete and structured data can create dynamic frames that describe an entire population in detail over time, providing valuable information that has never been available in previous statistical systems.  On the other hand, fragmented and unstructured data may not yield any meaningful analysis no matter how large the file size may be.

As problem solving rapidly expands from a hypothesis-driven paradigm to include a data-driven approach, the fundamental questions about the usefulness and quality of the data have also increased in importance.  While the question of study interest may not be specified a priori, it must still be established after the data are collected and before any analysis is conducted.  We cannot obtain a correct answer to a non-existent question.

How are the samples selected?  How well does the sample represent the universe of inference?  What are the relevance and quality of the data relative to the posterior hypothesis of interest?  File size has little to no meaning if the usefulness of the data cannot be established in the first place.

Ignoring these considerations may lead to the need to update a well-known quote: “Lies, Damned Lies, and Big Data.”

Tuesday, April 8, 2014

Lying with Big Data

About 45 years ago, I spent a whopping $1.95 on a little book titled "How to Lie with Statistics."

Besides the catchy title, its bright orange cover has a comic character sweeping numbers under a rug.  Darrell Huff, a magazine editor and a freelance writer, wrote the book in 1954.  It went on to become the most popular statistics book in the world for more than half a century.  A translated version was published in China around 2002.

The entire book, about 140 pages with 80 pictures, takes only a few hours to read leisurely, but it was a major reason I pursued an education and a professional career in statistics.

The corners of the book are now worn; the pages have turned yellow.  One can identify some of the social changes of the last 60 years from the book.  For example, $25,000 is no longer an enviable annual salary; few of today’s younger generation may know what a “telegram” was; “gay” has a very different meaning now; and “African Americans” has replaced “Negroes” in daily usage.  Indicative of that bygone era, the image of a cigar, a cigarette, or a pipe appeared in at least one out of every five pictures in the book – even babies were puffing away in high chairs.  The word “computer” did not show up once among its 26,000 words.

Huff’s words were simple, but sharp and direct.  He provided example after example showing that the most respected magazines and newspapers of his time lied with statistics, just like the dreadful “advertising man” and politician.

According to Huff, most humans have “a bias to favor, a point to prove, and an axe to grind.”  They tend to over- or under-state the truth in responding to surveys; those who complete surveys are systematically different from those who do not respond; and built-in partiality occurs in the wording of a questionnaire, appearance of an interviewer, or interpretation of the results. 

In Huff’s day, there were no desktop computers or mobile devices; statistical charts and infographics were drawn by hand; and data collection, especially complete counts like a census, was difficult and costly.  Huff conjectured, and the statistics profession has concurred, that the only reliable small sample is one that is random and representative, with all sources of bias removed.

Calling anyone a liar was harsh then, and it still is now.  The dictionary definition of a lie is a false statement made with deliberate intent to deceive.  Huff considered lying to include chicanery, distortion, manipulation, omission, and trickery; ignorance and incompetence were only excuses for not recognizing them as lies.  One may also lie by selectively using a mean, a median, or a mode to mislead readers, even though each of them is a correct average.

No matter how broadly or narrowly lies may be defined, it cannot be denied that people lie with statistics every day.  To some media’s credit, there are now fact-checkers who regularly examine stories and statements, many of them based on numbers, and evaluate their degree of truthfulness.

In the era of Big Data, lies occur in higher velocity with bigger volume and greater variety.

Moore’s law is not a legal, physical, or natural law, but a loosely fitted regression equation on a logarithmic scale.  Each of us has probably “won” the Nigerian lottery or one of its variations via email at least a few times.  While measures of gross domestic product or pollution are becoming more accurate because of Big Data, nations liberally cite either the aggregate or the per capita average, depending on which favors their point of view.
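To see Moore’s law as exactly that kind of loose regression, one can fit a least-squares line to transistor counts on a logarithmic scale.  The counts below are rounded public figures for a few well-known Intel chips, used here as an illustrative rather than authoritative dataset:

```python
import numpy as np

# Approximate transistor counts for a few well-known Intel CPUs
# (rounded public figures, for illustration only).
chips = {
    1971: 2_300,          # 4004
    1978: 29_000,         # 8086
    1985: 275_000,        # 386
    1993: 3_100_000,      # Pentium
    2000: 42_000_000,     # Pentium 4
    2006: 291_000_000,    # Core 2 Duo
}

years = np.array(sorted(chips))
log2_counts = np.log2([chips[y] for y in sorted(chips)])

# Least-squares fit of log2(transistors) on year: the slope is the
# number of doublings per year, so its reciprocal is the doubling time.
slope, intercept = np.polyfit(years, log2_counts, 1)
print(f"Implied doubling time: about {1 / slope:.1f} years")
```

The fitted slope implies a doubling time of roughly two years – the usual statement of the law, recovered as nothing more than a regression line.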

Heavy mining of satellite, radar, audio, sensor, and other Big Data may one day solve the tragic mystery of Malaysia Airlines Flight MH370, but the many pure speculations, conspiracy theories, accusations of wrongdoing, and irresponsible lies quoting these data have mercilessly added anguish and misery to the families of the passengers and crew.  No one seems to be tracking the velocity, volume, and variety of the false positives generated for this event, or for other data mining efforts with Big Data.

The responsibility is of course not on the data; it is on the people.  There is an old saying that “figures don’t lie, but liars figure.”  Big Data – in terms of advancing technology and the availability of massive amounts of randomly and non-randomly collected electronic data – will undoubtedly expand the study of statistics and bring our understanding and governance to new heights.

Huff observed that “without writers who use the words with honesty and understanding and readers who know what they mean, the result can only be semantic nonsense.”  Today many statisticians are still using terms like “Type I error” and “Type II error” in promoting statistical understanding, while these concepts and underlying pitfalls are seldom mentioned in Big Data discussions.
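For readers meeting these terms for the first time, a brief simulated example may help: a Type I error is a false alarm when no real difference exists, and a Type II error is a miss when one does.  The sample size and effect size below are arbitrary choices for illustration:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
trials, n, alpha = 2_000, 30, 0.05

# Type I error: both groups come from the same distribution,
# yet some tests reject the (true) null hypothesis anyway.
type1 = np.mean([
    ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue < alpha
    for _ in range(trials)
])

# Type II error: a real difference of half a standard deviation
# that the test sometimes fails to detect at this sample size.
type2 = np.mean([
    ttest_ind(rng.normal(0, 1, n), rng.normal(0.5, 1, n)).pvalue >= alpha
    for _ in range(trials)
])

print(f"Type I error rate:  ~{type1:.2f} (near alpha = {alpha})")
print(f"Type II error rate: ~{type2:.2f} (shrinks as n grows)")
```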

At the end of his book, Huff suggested that one can try to recognize sound and usable data in the wilderness of fraud by asking five questions: Who says so? How does he know? What’s missing? Did somebody change the subject? Does it make sense?  They are not perfect, but they are worth asking.  On the other hand, healthy skepticism should not become overzealous in discrediting truly sound and innovative findings.

Faced with the self-raised question of why he wrote the book, especially with the title and content that provides ideas to use statistics to deceive and swindle, Huff responded that “[t]he crooks already know these tricks; honest men must learn them in defense.”

How I wish there were a book about how to lie with Big Data now!  In the meantime, Huff’s book remains as enlightening as it was 45 years ago, although its price has gone up to $5.98, almost matched by the shipping cost.

Jeremy S. Wu, Ph. D.