DALI Meeting + Views on Machine Learning and Artificial Intelligence

I just got back from the first DALI meeting, held in La Palma. I was a co-organiser with Zoubin Ghahramani, Thomas Hofmann and Bernhard Schoelkopf. The original vision was mainly driven by Bernhard, and the meeting is an attempt to recapture the spirit of some of the early NIPS conferences and the Snowbird meeting: a smaller meeting with some focus and a lot of informal debate, with a schedule designed to encourage discussion and to get people engaging across different fields and sub-fields.

The meeting was run as a day of workshops, followed by a day of plenary sessions and a further day of workshops. Zoubin organised the workshop schedule, and Thomas the plenary sessions. For the workshops we decided on topics and invited organisers, who themselves invited the attendees; we heard about Probabilistic Programming, Networks and Causality, Deep Learning for Vision, Probabilistic Numerics and Statistical Learning Theory. We had plenaries from experts in machine learning as well as one by Metin Sitti on Mini/Micro/Nanorobotics. The plenary sessions ended with a panel discussion, chaired by Thomas, with Alex Graves, Ralf Herbrich, Yann LeCun, Bernhard Schoelkopf, Zoubin Ghahramani and myself.

Thomas seeded the panel discussion by asking us to make three-minute statements. He asked about several things, but the one that caught my eye was machine learning and artificial intelligence. Everyone had interesting things to say, and I don’t want to paraphrase them too much, but being asked to summarise in three minutes distilled some of my own thinking, so I wanted to reflect that here.

I will only mention others’ views briefly, because I don’t want to misrepresent what they might have said, and that’s easy to do. But I’m happy for any of them to comment on what follows. They had many interesting things to say about these topics too (probably much more so than me!).

I only had two ‘notes’ for the discussion which I spoke to off the cuff, so I’ll split the thoughts into those two sections. Those who know me know I can talk for a long time, and I was trying to limit this tendency!

Note 1: Perception and Self Perception

By this note I meant that perception is an area where we’ve been successful, but self-perception less so. I’ll try and clarify.

I’m probably using these terms too loosely, so let me define what I mean by ‘perception’: the sensing of objects and our environment. The particular recent success of deep learning has been in sensing the environment: categorising objects, locating pedestrians. I’ve always felt the mathematical theory of how we should aim to do this was fairly clear: it’s summarised by Bayes’ rule, which is widely used in robotics, vision, speech etc. The big recent change from the deep learning community has been the complexity of the mappings that we use to form this perception and our ability to learn them from data. So I see this as a success.
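As a toy illustration of this Bayesian view of perception (a minimal sketch with invented numbers, not any particular system): given a prior over object categories and a likelihood of a sensor reading under each category, Bayes’ rule combines the two into a posterior belief about what is being sensed.

```python
# Toy illustration of perception as Bayesian inference.
# All categories and numbers here are invented for illustration.

# Prior beliefs over what an object might be, before sensing:
prior = {"pedestrian": 0.1, "cyclist": 0.05, "car": 0.85}

# Likelihood of the observed sensor reading under each category
# (in a real system this mapping might be a learned model, e.g. a deep network):
likelihood = {"pedestrian": 0.7, "cyclist": 0.6, "car": 0.05}

# Bayes' rule: posterior is proportional to likelihood times prior.
unnormalised = {k: likelihood[k] * prior[k] for k in prior}
evidence = sum(unnormalised.values())
posterior = {k: v / evidence for k, v in unnormalised.items()}

print(posterior)
```

With these made-up numbers the sensor reading overturns the prior: despite ‘car’ being far more probable a priori, ‘pedestrian’ ends up with the highest posterior. The deep learning advance is in how the likelihood mapping is learned, not in this underlying rule.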

By self-perception I mean the sensing of our selves: our prediction of our own interactions with the environment, how what we might do could affect the environment, and how we will react to those effects. This has an interesting flavour of infinite regress. If we try to model ourselves and the environment, we need a model that is larger than ourselves and the environment. However, that model is part of us, so we need another model on top of that. This is the infinite regress, and it’s non-convergent. It strikes me that the only way we can get around it is to use a ‘compression’ of ourselves, i.e. have a model within our model in order to predict our interactions with the environment. This compressed model of ourselves will not be entirely accurate, and may mis-predict our own behaviour, but it is necessary to make the problem tractable.

A further complication is that our environment also contains other complex intelligent entities that try to second-guess our behaviour. We also need to model them. I think one way we do this is by projecting our own model of ourselves onto them, i.e. using our own model of our own motivations, with appropriate modifications, to incorporate other people in our predictions. I see this as some form of ‘self-sensing’ and also sensing of others. I think doing it well may lead naturally to good planning algorithms, and planning was something that Yann mentioned we do badly. I don’t think we’re very good at this yet, and I think we would benefit from more open interaction with cognitive scientists and neuroscientists in understanding how humans do this. I know there’s a lot of research in this area in those fields, but I’m not an expert. Having a mathematical framework which shows how we can avoid this infinite regress through compression would be great.

These first thoughts were very much my thoughts about challenges for AI. The next thought tries to address AI in society.

Note 2: Creeping and Creepy AI

I think what we are seeing with successful AI is that it is emerging slowly, and without most people noticing. Much of our interaction with computers is dictated by machine learning algorithms. We were lucky to have Lars Backstrom at the meeting, who leads the team at Facebook that decides how to rank our news feeds on the site. This is done by machine learning, but most people would be unaware that there is some ‘Artificial Intelligence’ underpinning it. Similarly, the ads we view across all sites are ranked by AI. Machine learning also recommends products on Amazon. Machine learning is becoming a core computational technique. I was sitting next to Ralf when Amazon launched their new machine learning services on AWS. Driverless cars are another good example: they are underpinned by a lot of machine learning ideas, but those technologies are also already appearing in normal cars. ‘Creeping AI’ is enhancing human abilities, improving us rather than replacing us, allowing a seamless transition between what is human and what is computer. It demands better interaction between the human and computer, and better understanding between them.

However, this leads to another effect that could be seen as ‘creepy AI’. When the transition between computer and human is done well, it can be difficult to see where the human stops and the machine learning starts. Learning systems are already very capable of understanding our personalities and desires. They do this in very different ways to how humans do it (see self-perception above!): they use large amounts of data about our previous behaviour, and that of other humans, to make predictions about our future behaviour. This can be seen as creepy. How do we avoid this? We need to improve people’s understanding of when AI is being used and what it is doing, and to improve their ability to control it. Improving our control of our own data and developing legislation to protect us are things I think we need to do to address that.

We can avoid AI being creepy by remaining open to debate, understanding what users want, but also giving them what they need. In the long term they need a better understanding of our methodologies and their implications, as well as better control of how their data is being used. This is one of the motivations of our open data science agenda.

Questions from the Audience

There were several questions from the audience, but the two that stuck out most for me were from Uli von Luxburg and Chris Watkins. Uli asked if we had a responsibility to worry about the moral side when developing these methods. I believe she phrased her question as how much we should be worrying about ‘creepy AI’. I didn’t get my answer in initially, and before I could there was a follow-up question from Chris about how we deal with the natural data monopoly. I’ve addressed these ideas before in the digital oligarchies post. Uli’s question is coming up more often, and a common answer to it is “this is something for society to decide”. I want to react strongly against that answer. Society is made up of people, who include experts. Those experts have a deeper understanding of the issues and implications than the general population. It’s true that there are philosophers and social scientists who can make important contributions to the debate, but it’s also true that amongst those with the best understanding of the implications of technology are those who are expert in it. If some of us don’t engage in the debate, then others will fill the vacuum. Uli’s question was probably more about whether an individual researcher should worry about these issues, rather than whether we should engage in debate. However, even if we don’t choose to contribute to the debate, I feel there is an obligation on us to be considering these issues in our research. In particular, the challenges we are creating by developing and sharing these technologies will require technological solutions as well as legislative change. These go hand in hand. Certainly those of us who are academics, and funded by the public, would not be doing our job well if we weren’t anticipating these needs and driving the technology towards answering them.

The good news is that meetings like DALI are excellent for having such debates and engaging with different communities. I think when Bernhard initially envisaged the meeting, this atmosphere was what he was hoping for. It is also what got Thomas, Zoubin and myself excited about it. I think the meeting really achieved it.

The Meeting as a Whole

I haven’t mentioned too many of the thoughts of others, because they were offered informally, and often as a means of developing debate, but if I’ve misrepresented anything above please feel free to comment below. I also apologise for omitting all the interesting ideas others spoke about, but again I didn’t want to endanger the open atmosphere of the meeting by mistakenly misrepresenting someone else’s point of view (which may also have been presented in the spirit of devil’s advocate). I think the meeting was a great success, and we were already talking about a venue for next year.


Legislation for Personal Data: Magna Carta or Highway Code?

Karl Popper is perhaps one of the most important thinkers of the 20th century. Not purely for his philosophy of science, but for giving a definitive answer to a common conundrum: “Which comes first, the chicken or the egg?”. He says that they were simply preceded by an ‘earlier type of egg’. I take this to mean that the answer is neither: they actually co-evolved. What do I mean by co-evolved? Well, broadly speaking, there once were two primordial entities which weren’t very chicken-like or egg-like at all; over time small changes occurred, supported by natural selection, transforming those entities, unrecognisably from their origins, into two of our most familiar foodstuffs of today.

I find the process of co-evolution remarkable, and to some extent unimaginable, or certainly it seems to me difficult to visualise the intermediate steps. Evolution occurs by natural selection: selection by the ‘environment’, but when we refer to co-evolution we are clarifying that this is a complex interaction. The primordial entities affect the environment around them, therefore changing the ‘rules of the game’ as far as survival is concerned. In such a co-evolving system certainties about the right action disappear very quickly.

What use are chickens and eggs when talking about personal data? Well, Popper used the question to illustrate a point about scientific endeavour. He was talking about science and reflecting on how scientific theories co-evolve with experiments. However, that’s not the point I’d like to make here. Co-evolution is very general; one area it arises is when technological advance changes society to such an extent that existing legislative frameworks become inappropriate. Tim Berners-Lee has called for a Magna Carta for the digital age, and I think this is a worthy idea, but is it the right idea? A digital bill of rights may be the right idea in the longer run, but I don’t think we are ready to draft it yet. My own research is in machine learning, the main technology underpinning the current AI revolution. A combination of machine learning, fast computers, and interconnected data means that the technological landscape is changing so fast that it is affecting society around us in ways that no one envisaged twenty years ago.

Even if we were to start with the primordial entities that presaged the chicken and the egg, and we knew all about the process of natural selection, could we have predicted or controlled the animal of the future that would emerge? We couldn’t have done. The chicken exists today as the product of its environmental experience, an experience that was unique to it. The end point we see is one that is highly sensitive to very small perturbations that could have occurred at the beginning.

So should we be writing legislation today which ties down the behaviour of future generations? There is precedent for this from the past. Before the printing press was introduced, no one would have begrudged the monks’ right to laboriously transcribe the books of the day. Printing meant it was necessary to protect the “copy rights” of the originator of the material. No one could have envisaged that those copyright laws would also be used to protect software, or digital music. In the industrial revolution the legal mechanism of ‘letters patent’ evolved to protect creative insight. Patents became protection of intellectual property, ensuring that inventors’ ideas could be shared under license. These mechanisms also protect innovation in the digital world. In some jurisdictions they are now applied to software and even user interface designs. Of course even this legislation is stretched in the face of digital technology and may need to evolve, as it has done in the past.

The new legislative challenge is not in protecting what is innovative about people, but what is commonplace about them. The new value is in knowing the nature of people: predicting their needs and fulfilling them. This is the value of interconnection of personal data. It allows us to make predictions about an individual by comparing him or her to others. It is the mainstay of the modern internet economy: targeted advertising and recommendation systems. It underpins my own research ideas in personalisation of health treatments and early diagnosis of disease. But it leads to potential dangers, particularly where the uncontrolled storage and flow of an individual’s personal information is concerned. We are reaching the point where some studies are showing that computer prediction of our personality is more accurate than that of our friends and relatives. How long before an objective computer prediction of our personality can outperform our own subjective assessment of ourselves? Some argue those times are already upon us. It feels dangerous for such power to be wielded unregulated by a few powerful groups. So what is the answer? New legislation? But how should it come about?
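The paragraph above describes prediction by comparing an individual to others. A minimal sketch of that idea, assuming a toy nearest-neighbour scheme with invented names, items and ratings (real recommendation systems are far more sophisticated):

```python
# A minimal sketch of prediction-by-comparison, in the spirit of a
# recommendation system. All users, items and ratings are invented.
ratings = {
    "alice": {"news": 5, "sport": 1, "music": 4},
    "bob":   {"news": 4, "sport": 2, "music": 5, "films": 5},
    "carol": {"news": 1, "sport": 5, "music": 2, "films": 1},
}

def similarity(a, b):
    """Crude similarity: negated mean absolute difference on shared items."""
    shared = set(a) & set(b)
    return -sum(abs(a[i] - b[i]) for i in shared) / len(shared)

def predict(user, item):
    """Borrow the rating of the most similar user who has rated the item."""
    candidates = [u for u in ratings if u != user and item in ratings[u]]
    best = max(candidates, key=lambda u: similarity(ratings[user], ratings[u]))
    return ratings[best][item]

print(predict("alice", "films"))
```

Here a prediction for alice is simply borrowed from her most similar neighbour. Scaled up to millions of users and items, this kind of comparison is what makes such predictions so accurate, and potentially so creepy.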

In the long term, I think we need to develop a set of rules and legislation that includes principles protecting our digital rights. I think we need new models of ownership that allow us to control our private data. One idea that appeals to me is extending data protection legislation with the right not only to view data held about us, but also to ask for it to be deleted. However, I can envisage many practical problems with that idea, and these need to be resolved so we can also enjoy the benefits of these personalised predictions.

As wonderful as some of the principles in the Magna Carta are, I don’t think it provides a good model for the introduction of modern legislation. It was actually signed under duress: under a threat of violent revolution. The revolution was threatened by a landed gentry, although the consequences would have been felt by all. Revolutions don’t always end well. They occur because people can become deadlocked: they envisage different futures for themselves and there is no way to agree on a shared path to different end points. The Magna Carta was also a deal between the king and his barons. Those barons were asking for rights that they had no intention of extending within their fiefdoms. These two characteristics, redistribution of power amongst a powerful minority, with significant potential consequences for a disenfranchised majority, make the Magna Carta, for me, a poor analogy for how we would like things to proceed.

The chicken and the egg remind us that the actual future will likely be more remarkable than any of us can currently imagine. Even if we all seek a particular version of the future, it is unlikely ever to exist in the form that we imagine. Open, receptive and ongoing dialogue between the interested and informed parties is more likely to bring about a societal consensus. But can this happen in practice? Could we really evolve a set of rights and legislative principles which lets us achieve all our goals? I’d like to propose that rather than taking as our example a mediaeval document, written on vellum, we look to more recent changes in society and how they have been handled. In England, the Victorians may have done more than anyone to promote our romantic notion of the Magna Carta, but I think we can learn more by looking at how they dealt with their own legislative challenges.

I live in Sheffield, and cycle regularly in the Peak District national park. Enjoyment of the Peak Park is not restricted to our era. At 10:30 on Easter Monday in 1882 a Landau carriage, rented by a local cutler, was heading on a day trip from Sheffield to the village of Tideswell, in the White Peak. They’d left Sheffield via Ecclesall Road, and as they began to descend the road beneath Froggatt Edge, just before the Grouse Inn they encountered a large traction engine towing two trucks of coal. The Landau carriage had two horses and had been moving at a brisk pace of four and a half miles an hour. They had already passed several engines on the way out of Sheffield. However, as they moved out to pass this one, it let out a continuous blast of steam and began to turn across their path into the entrance of the inn. One of the horses took fright, pulling the carriage up a bank and throwing Ben Deakin Littlewood and Mary Coke Smith from the carriage and under the wheels of the traction engine. I cycle to work past their graves every day. The event was remarkable at the time, so much so that it is chiselled into the inscription on Ben’s grave.

The traction engine was preceded, as legislation since 1865 had dictated, by a boy waving a red flag. It was restricted to two and a half miles an hour. However, the boy’s role was to warn oncoming traffic. The traction engine driver had turned without checking whether the road was clear of overtaking traffic. It’s difficult to blame the driver though. I imagine that there was quite a lot involved in driving a traction engine in 1882. It turned out that the driver was also preoccupied with a broken wheel on one of his carriages. He was turning into the Grouse to check the wheel before descending the road.

This example shows how legislation can sometimes be extremely restrictive, but still not achieve the desired outcome. Codification of the manner in which a vehicle should be overtaken came later, at a time when vehicles were travelling much faster. The Landau carriage was overtaking about 100 metres after a bend. The driver of the traction engine didn’t check over his shoulder immediately before turning, although he claimed he’d looked earlier. Today both drivers’ responsibilities are laid out in the “Highway Code”. There was no “Mirror, Signal, Manoeuvre” in 1882. That came later, alongside other regulations such as road markings and turn indicators.

The shared use of our road network, and the development of the right legislative framework for it, might be a good analogy for how we should develop legislation for protecting our personal privacy. No analogy is ever perfect, but it is clear that our society both gained and lost through the introduction of motorised travel. Similarly, the digital revolution will bring advantages but new challenges. We need to have mechanisms that allow for negotiated solutions. We need to be able to argue about the balance of current legislation and how it should evolve. Those arguments will be driven by our own personal perspectives. Our modern rules of the road are in the Highway Code. It lists responsibilities of drivers, motorcyclists, cyclists, mobility scooters, pedestrians and even animals. It gives legal requirements and standards of expected behaviour. The Highway Code co-evolved with transport technology: it has undergone 15 editions and is currently being rewritten to accommodate driverless cars. Even today we still argue about the balance of this document.

In the long term, when technologies have stabilised, I hope we will be able to distill our thinking to a bill of rights for the internet. But such a document has a finality about it which seems inappropriate in the face of technological uncertainty. Calls for a Magna Carta provide soundbites that resonate and provide rallying points. But they can polarise, presaging unhelpful battles. Between the Magna Carta and the foundation of the United States the balance between the English monarch and his subjects was reassessed through the English Civil War and the American Revolution. I don’t think we can afford such discord when drafting the rights of the digital age. We need mechanisms that allow for open debate, rather than open battle. Before a bill of rights for the internet, I think we need a different document. I’d like to sound the less resonant call for a document that allows for dialogue, reflecting concerns as they emerge. It could summarise current law and express expected standards of behaviour. With regular updating it would provide an evolving social contract between all the users of the information highway: people, governments, businesses, hospitals, scientists, aid organisations. Perhaps instead of a Magna Carta for the internet we should start with something more humble: the rules of the digital road.

This blog post is an extended version of an article written for the Guardian’s media network: “Let’s learn the rules of the digital road before talking about a web Magna Carta”