Beware the Rise of the Digital Oligarchy

The Guardian’s media network published a short article I wrote for them on 5th March. They commissioned a piece of about 600 words for the Guardian’s site, but the original version I wrote was around 1,400 words. I agreed a week’s exclusivity with the Guardian; now that’s up, so the longer version is below.

On a recent visit to Genova, during a walk through the town with my colleague Lorenzo, he pointed out what he said was the site of the world’s first commercial bank. The Bank of St George, located just outside the city’s old port, grew to be one of the most powerful institutions in Europe: it bankrolled Charles V and governed many of Genova’s possessions on the republic’s behalf. The trust that its clients placed in the bank is shown in the records of its account holders. There are letters from Christopher Columbus to the bank instructing it in the handling of his affairs. The influence of the bank was based on the power of accumulated capital, capital it could accumulate through the trust of a wealthy client base. The bank was so important in the medieval world that Machiavelli wrote that “if even more power was ceded by the Genovan republic to the bank, Genova would even outshine Venice amongst the Italian city states.” The Bank of St George was once one of the most influential private institutions in Europe.

Today the power wielded by accumulated capital can still dominate international affairs, but a new form of power is emerging: that of accumulated data. Like Hansel and Gretel trailing breadcrumbs into the forest, we now leave a trail of data-crumbs wherever we travel: supermarket loyalty cards, text messages, credit card transactions, web browsing and social networking. The power of this data emerges, like that of capital, when it’s accumulated. Data is the new currency.

Where does this power come from? Cross-linking of different data sources can give deep insights into personality, health, commercial intent and risk. The aim is now to understand and characterize the population, perhaps down to the individual level. Personalization is the watchword for your search results, your social network news feed, your movie recommendations and even your friends. This is not a new phenomenon: psychologists and social scientists have always attempted to characterize the population, to better understand how to govern or whom to employ. They acquired their data through carefully constructed questionnaires designed to probe personality and intelligence. The difference is the granularity with which these characterizations are now made: instead of understanding groups and sub-groups in the population, the aim is to understand each person. There are wonderful possibilities: we should better understand health, give earlier diagnoses for diseases such as dementia and provide better support to the elderly and otherwise incapacitated people. But there are also major ethical questions, and they don’t seem to be adequately addressed by our current legal frameworks.

For Columbus it was clear: he was the owner of the money in his accounts. His instructions to the bank tell them how to distribute it to friends and relations. The bank only held his capital under license, a convenient storage facility. Ownership of data is less clear. Historically, acquiring data was expensive: questionnaires were painstakingly compiled and manually distributed. When answering, the risk of revealing too much of ourselves was small because the data never accumulated. Today we leave digital footprints in our wake, and acquisition of this data is relatively cheap. It is the processing of the data that is more difficult.

I’m a professor of machine learning. Machine learning is the main technique at the heart of the current revolution in artificial intelligence. A major aim of our field is to develop algorithms that better understand data: that can reveal the underlying intent or state of health behind the information flow. Machine learning techniques are already used to recognise faces and make recommendations, and as we develop better algorithms that better aggregate data, our understanding of the individual also improves.

What do we lose by revealing so much of ourselves? How are we exposed when so much of our digital soul is laid bare? Have we engaged in a Faustian pact with the internet giants? Like Faust, we might agree to the pact in moments of levity, or despair, perhaps weakened by poor health. My father died last year, but there are still echoes of him online. Through his account on Facebook I can be reminded of his birthday or told of common friends. Our digital souls may not be immortal, but they certainly outlive us. What we choose to share also affects our families: my wife and I may be happy to share information about our genetics, perhaps for altruistic reasons, or just out of curiosity. But by doing so we are also sharing information about our children’s genomes. Using a supermarket loyalty card gains us discounts on our weekly shop, but also gives the supermarket detailed information about our family diet. In this way we’d expose both the nature and the nurture of our children’s upbringing. Will our decisions to make this information available haunt our children in the future? Are we equipped to understand the trade-offs we make by this sharing?

There have been calls from Elon Musk, Stephen Hawking and others to regulate artificial intelligence research. They cite fears about autonomous and sentient artificial intelligence that could self-replicate beyond our control. Most of my colleagues believe that such breakthroughs are beyond the horizon of current research; sentient intelligence is still not at all well understood, a point my friend and colleague Ryan Adams, based at Harvard, has also made on Twitter.

Personally, I worry less about the machines and more about the humans with enhanced powers of data access. After all, most of our historic problems seem to have come from humans wielding too much power, either individually or through institutions of government or business. Whilst sentient AI does seem beyond our horizons, one aspect of it is closer to our grasp. Part of sentient intelligence is ‘knowing yourself’: being able to predict your own behaviour. It seems plausible to me that through the accumulation of data computers may start to ‘know us’ even better than we know ourselves. I think one concern of Musk and Hawking is that computers would act autonomously on this knowledge. My more immediate concern is that our fellow humans, through the modern equivalents of the Bank of St George, will exploit this knowledge, leading to a form of data-oligarchy. And in the manner of oligarchies, the power will be in the hands of very few but wielded to affect the many.

How do we control all this? Firstly, we need to consider how to regulate the storage of data. We need better models of data ownership. There was no question that Columbus was the owner of the money in his accounts: he gave it under license, and he could withdraw it at his pleasure. For the data repositories we interact with we have no right of deletion. We can withdraw from the relationship, and in Europe data protection legislation gives us the right to examine what is stored about us, but we don’t have any right of removal. We cannot withdraw access to our historic data if we become concerned about the way it might be used. Secondly, we need to increase transparency. If an algorithm makes a recommendation for us, can we know on what information in our historic data that prediction was based? In other words, can we know how it arrived at that prediction? The first challenge is a legislative one; the second is both technical and social. It involves increasing people’s understanding of how data is processed and what the capabilities and limitations of our algorithms are.
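To make the transparency point concrete, here is a minimal sketch, in Python, of the kind of explanation that is easy to provide for a simple linear recommender: the score decomposes exactly into per-feature contributions that could be shown back to the user. All feature names, values and weights below are invented for illustration; real personalization systems use far more complex models, which is precisely where the technical challenge lies.

```python
import numpy as np

# Invented summary of one user's historic data (illustrative only).
feature_names = ["films_watched_in_genre", "friends_who_liked_it", "viewing_time_match"]
user_features = np.array([12.0, 3.0, 0.7])

# Invented weights of a hypothetical linear recommender.
weights = np.array([0.15, 0.40, 1.10])

score = weights @ user_features            # overall recommendation score
contributions = weights * user_features    # how much each piece of historic data contributed

for name, c in sorted(zip(feature_names, contributions), key=lambda pair: -pair[1]):
    print(f"{name}: {c:+.2f} (of total {score:.2f})")
```

For linear models this decomposition is exact; for the non-linear models in widespread use, providing comparable attributions is an open research problem rather than a routine feature.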

There are opportunities and risks with the accumulation of data, just as there were (and still are) with the accumulation of capital. I think there are many open questions, and we should be wary of anyone who claims to have all the answers. However, two directions seem clear: we need both to increase the power of the people and to develop their understanding of the processes. It is likely to be a fraught process, but we need to form a data-democracy: data governance for the people, by the people and with the people’s consent.

Neil Lawrence is a Professor of Machine Learning at the University of Sheffield. He is an advocate of “Open Data Science” and an advisor to a London-based startup, CitizenMe, that aims to allow users to “reclaim their digital soul”.


Open Data Science

I’m not sure if this is really a blog post; it’s more of a ‘position paper’ or a proposal. But it’s something I’d be very happy to have comments on, so publishing it in the form of a blog seems most appropriate.

We are in the midst of the information revolution, and it is being driven by our increasing ability to monitor, store, interconnect and analyse large interacting sets of data. Industrial mechanisation required the combination of coal and the heat engine. Informational mechanisation requires the combination of data and data engines. By analogy with a heat engine, which takes high-entropy heat energy and converts it to low-entropy, actionable, kinetic energy, a data engine is powered by large unstructured data sources and converts them to actionable knowledge. This can be achieved through a combination of mathematical and computational modelling, and the required skill sets fall across traditional academic boundaries.

Outlook for Companies

From a commercial perspective, companies are looking to characterise consumers and users in unprecedented detail. They need to characterise their users’ behaviour in detail to

  1. provide better service to retain users,
  2. target those users with commercial opportunities.

These firms are competing for global dominance, to be the data repository. They are excited by the power of interconnected data, but nervous about the next stage in the process: they view the current era as analogous to the early days of ‘microcomputers’, with competing platforms looking to dominate the market, and they foresee the natural monopoly that the interconnectedness of data implies. They are pursuing that monopoly with the vigour of a young Microsoft. They are paying very large fees to acquire potential competitors to ensure that they retain access to the data (e.g. Facebook’s purchase of WhatsApp for $19 billion), and they are acquiring expertise in the analysis of data from academia, either through direct hires (Yann LeCun from NYU to Facebook, Andrew Ng from Stanford to found a $300 million research lab for Baidu) or by purchasing academic start-ups (Geoff Hinton’s DNNResearch from Toronto to Google, the purchase of DeepMind by Google for $400 million). The interest of these leading internet firms in machine learning is exciting and a sign of the major successes of the field, but it leaves a major challenge for firms that want to enter the market and either provide competing services or introduce new ones. They are debilitated by

  1. lack of access to data,
  2. lack of access to expertise.

 

Science

Science is far more evolved than the commercial world from the perspective of data sharing. Whilst its merits may not be universally accepted by individual scientists, communities and funding agencies encourage widespread sharing. One of the most significant endeavours was the human genome project, now nearly 25 years old. In computational biology there is now widespread sharing of data and methodologies: measurement technology moves so quickly that an efficient pipeline for development and sharing is vital to ensure that analysis adapts to the rapidly evolving nature of the data (e.g. cDNA arrays to Affymetrix arrays to RNAseq). There are also large-scale modelling and sharing challenges at the core of other disciplines such as astronomy (e.g. Sarah Bridle’s GREAT08 challenge for Cosmic Lensing) and climate science. However, for many scientists access to these methodologies is restricted not by a lack of better methods, but by their technical inaccessibility. A major challenge in science is bridging the gap between the data analyst and the scientist: equipping the scientist with the fundamental concepts that will allow them to explore their own systems with a complete mathematical and computational toolbox, rather than being constrained by the provisions of a commercial ‘analysis toolbox’ software provider.
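As a small illustration of what an open, inspectable toolbox can look like, here is a minimal from-scratch sketch of Gaussian process regression in Python (numpy only), the kind of method we teach at our summer schools and distribute as open source software. The toy data and fixed hyperparameters are placeholders; in practice one would use a maintained library and optimise the hyperparameters rather than hard-code them.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared exponential covariance between two sets of one-dimensional inputs."""
    sqdist = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale ** 2)

def gp_posterior(X_train, y_train, X_test, noise=0.1):
    """Posterior mean and variance of a GP regressor with an RBF kernel."""
    K = rbf_kernel(X_train, X_train) + noise ** 2 * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)                       # stable inversion via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                            # posterior mean at the test inputs
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)    # posterior marginal variances
    return mean, var

# Toy measurements, e.g. an expression level sampled over time (placeholder data).
X_train = np.array([0.0, 1.0, 2.5, 4.0])
y_train = np.sin(X_train)
X_test = np.linspace(0.0, 5.0, 50)
mean, var = gp_posterior(X_train, y_train, X_test)
print(mean[:5], var[:5])
```

The point is not the model itself but that every step, from the covariance function to the numerical linear algebra, is visible and modifiable by the scientist using it.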

Health

Historically, in health, scientists have worked closely with clinicians to establish the causes of disease and, ideally, eradicate them at source. Antibiotics and vaccinations have had major successes in this area. The diseases that remain are those that are

  1. the result of a large range of initial causes, and as a consequence have no discernible target for a ‘magic bullet’ cure (e.g. heart disease, cancers);
  2. difficult to diagnose at an early stage, leading to identification only when progress is irreversible (e.g. dementias); or
  3. coevolving with our clinical developments to subvert our solutions (e.g. C. difficile, multiple drug resistant tuberculosis).

Access to large-scale interconnected data sources again promises a route to resolution. It will give us the ability to better characterize the cause of a given disease; the tools to monitor patients and form an early diagnosis; and the biological understanding of how disease agents manage to subvert our existing cures. Modern data allows us to obtain a very high resolution, multifaceted perspective on the patient. We now have the ability to characterise their genotype (through high resolution sequencing) and their phenotype (through gene and protein expression, clinical measurements, shopping behaviour, social networks, music listening behaviour). A major challenge in health is ensuring that the privacy of patients is respected whilst leveraging this data for wider societal benefit in understanding human disease. This requires the development of new methodologies that are capable of assimilating these information resources on population-wide scales. Due to the complexity of the underlying system, the methodologies required are also more complex than the relatively simple approaches currently used to, for example, understand commercial intent. We need more sophisticated and more efficient data engines.
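As a toy illustration of the assimilation step described above, the sketch below joins two invented per-patient data modalities on a common patient identifier before fitting a deliberately simple baseline classifier. Everything here (patient ids, feature names, labels) is synthetic; real genomic and clinical data would demand privacy safeguards, consent and far richer models than a logistic regression.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
patients = [f"p{i:03d}" for i in range(200)]

# Invented stand-ins for two measurement modalities, keyed by patient id.
genotype = pd.DataFrame(rng.integers(0, 3, size=(200, 5)),
                        index=patients, columns=[f"variant_{j}" for j in range(5)])
phenotype = pd.DataFrame(rng.normal(size=(200, 3)),
                         index=patients, columns=["bmi", "blood_pressure", "activity"])
labels = pd.Series(rng.integers(0, 2, size=200), index=patients, name="diagnosis")

# Assimilate the modalities on patient id before modelling.
features = genotype.join(phenotype, how="inner")
common = features.index.intersection(labels.index)
X, y = features.loc[common], labels.loc[common]

# A deliberately simple baseline; population-scale health models need far richer structure.
model = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```

The interesting questions begin where this sketch stops: how to perform the join without centralising identifiable data, and how to scale the modelling to the complexity of real disease.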

International Development

The wide availability of mobile telephones in many developing countries provides the opportunity for modes of development that differ considerably from the traditional paths of the past (e.g. canals, railways, roads and fixed-line telecommunications). If countries take advantage of these new approaches, it is likely that the resulting societies will be very different from those that arose through the industrial revolution. The rapid adoption of mobile money, which arguably places parts of the financial system in many sub-Saharan African countries ahead of their apparently ‘more developed’ counterparts, illustrates what is possible. These developments are facilitated by the low capital cost of deployment: they rely on the mobile telecommunications architecture and the widespread availability of handsets. The ease of deployment and development of mobile phone apps, and the rapidly increasing availability of affordable smartphone handsets, present opportunities that exploit the particular advantages of the new telecommunications ecosystem. A key strand of our thinking is that these developments can be pursued by local entrepreneurs and software developers (to see this in action check out the work of the AI-DEV group here). The two main challenges in enabling this are mechanisms for data sharing that retain the individual’s control over their data, and the education of local researchers and students. Both aims are facilitated by the open data science agenda.

Common Strands to these Challenges

The challenges described above share strands that can be summarised in three areas:

  1. Access to data, whilst balancing the individual’s right to privacy against society’s need to advance.
  2. Advancing methodologies: developing the methodologies needed to characterise large, interconnected, complex data sets.
  3. Analysis empowerment: giving scientists, clinicians, students, commercial and academic partners the ability to analyse their own data using the latest methodological advances.

The Open Data Science Idea

It seems absurd to posit a ‘magic bullet’ cure for the challenges described above across such diverse fields; indeed, the underlying circumstances of each challenge are sufficiently nuanced that any such sledgehammer would be brittle. However, we can describe a philosophical approach that, when combined with the appropriate domain expertise (whether cultural, societal or technical), aims to address these issues in the long term.

Microsoft’s quasi-monopoly on desktop computing was broken by open source software. It has been estimated that the development cost of a full Linux system would be $10.8 billion. Regardless of the veracity of this figure, we know that several leading modern operating systems are based on open source (Android is based on Linux, OSX is based on FreeBSD). If it weren’t for open source software, these markets would have been closed to Microsoft’s competitors due to entry costs. We can do much to celebrate the competition provided by OSX and Android and the contributions of Apple and Google in bringing them to market, but the enablers were the open source software community. Similarly, at launch both Google’s and Facebook’s architectures, for web search and social networking respectively, were entirely based on open source software, and both companies have contributed to its development both informally and formally.

Open data science aims to assimilate the same community resource, capitalizing on the social driver underlying this phenomenon: many talented people would like to see their ideas and work applied for the widest possible benefit. The modern internet provides tools such as github, IPython notebook and reddit for easy distribution of, and comment on, this material. In Sheffield we have started making our ideas available through these mechanisms. As academics in open data science, part of our role should be to:

  1. Make new analysis methodologies available as widely and rapidly as possible, with as few conditions on their use as possible.
  2. Educate our commercial, scientific and medical partners in the use of these latest methodologies.
  3. Act to achieve a balance between data sharing for societal benefit and the right of an individual to own their data.

We can achieve 1) through widespread distribution of our ideas under flexible BSD-like licenses that give commercial, scientific and medical partners as much flexibility as possible to adapt our methods and analyses to their own circumstances. We will achieve 2) through undergraduate courses, postgraduate courses, summer schools and the widespread distribution of teaching materials. We will host projects from all departments across the University. We will develop new programmes of study that address the gaps in current expertise. Our actions regarding 3) will be to support and advise initiatives that look to return to individuals more control over their own data. We should do this while engaging with the public on what the technologies behind data sharing are and how they will benefit society.

Summary

Open data science should be an inclusive movement that operates across the traditional boundaries between companies and academia. It could bridge the technological gap between ‘data science’ and science. It could address the barriers to large-scale analysis of health data, and it will build bridges between academia and companies to ease access to methodologies and data. It will make our ideas publicly available for consumption by individuals, developing countries, commercial organisations and public institutes.

In Sheffield we have already been actively pursuing this agenda through different strands: we have been making software available for over a decade, and now do so with extremely liberal licenses. We run a series of Gaussian process summer schools, which have included roadshows at UTP in Colombia (hosted by Mauricio Alvarez) and Makerere University in Uganda (hosted by John Quinn). We have organised workshops targeted at Big Data, and we are making our analysis approaches freely available. We have organised local courses in Sheffield on programming targeted at biologists (taught by Marta Milo) and have begun a series of meetings on Data Science (speakers have included Fernando Perez, Fabian Pedregosa, Michael Betancourt and Mike Croucher). We have taught on the ML Summer School and at EBI Summer Schools focused on Computational Systems Biology. Almost all of these activities have led to ongoing research collaborations, both for us and for other attendees. Open Data Science brings all these strands together and expands our remit to communicate, using the latest tools, with a wider cross-section of clinicians and scientists. Driven by this agenda we will also expand our interaction with commercial partners, as collaborators, consultants and educators. We welcome other groups, both in the UK and internationally, joining us in achieving these aims.