DALI Meeting + Views on Machine Learning and Artificial Intelligence

I just got back from the first DALI meeting, held in La Palma. I was a co-organiser with Zoubin Ghahramani, Thomas Hofmann and Bernhard Schoelkopf. The original vision was mainly driven by Bernhard, and the meeting is an attempt to recapture the spirit of some of the early NIPS conferences and the Snowbird meeting: a smaller meeting with some focus and a lot of informal debate, with a schedule designed to encourage discussion and to get people engaging across different fields and sub-fields.

The meeting was run as a day of workshops, followed by a day of plenary sessions and a further day of workshops. Zoubin organised the workshop schedule and Thomas the plenary sessions. For the workshops we decided on topics and invited organisers, who themselves invited the attendees; we heard about Probabilistic Programming, Networks and Causality, Deep Learning for Vision, Probabilistic Numerics and Statistical Learning Theory. We had plenaries from experts in machine learning as well as one by Metin Sitti on Mini/Micro/Nanorobotics. Thomas ended the plenary sessions with a panel discussion, which he chaired, featuring Alex Graves, Ralf Herbrich, Yann LeCun, Bernhard Schoelkopf, Zoubin Ghahramani and myself.

Thomas seeded the panel discussion by asking us to make three-minute statements. He asked about several things, but the one that caught my eye was machine learning and artificial intelligence. Everyone had interesting things to say, and I don’t want to paraphrase them too much, but being asked to summarise in three minutes distilled some of my thinking, so I wanted to reflect that here.

I will only mention others’ views briefly, because I don’t want to misrepresent what they might have said, and that’s easy to do. But I’m happy for any of them to comment on the below. They also had many interesting things to say about these topics (probably much more so than me!).

I only had two ‘notes’ for the discussion which I spoke to off the cuff, so I’ll split the thoughts into those two sections. Those who know me know I can talk for a long time, and I was trying to limit this tendency!

Note 1: Perception and Self-Perception

By this note I meant that perception is an area where we’ve been successful, but self-perception much less so. I’ll try and clarify.

I’m probably using these terms too loosely, so let me define what I mean by ‘perception’: the sensing of objects and our environment. The particular recent success of deep learning has been in sensing the environment, categorising objects and locating pedestrians. I’ve always felt the mathematical theory of how we should aim to do this was fairly clear: it’s summarised by Bayes’ rule, which is widely used in robotics, vision, speech etc. The big recent change from the deep learning community has been the complexity of the mappings that we use to form this perception and our ability to learn from data. So I see this as a success.
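To make that concrete (this is just the standard formulation in my own notation, not anything specific from the panel), perception as inference writes the posterior over the state of the world given sensory data as

```latex
p(\text{state} \mid \text{data}) \;=\; \frac{p(\text{data} \mid \text{state})\,p(\text{state})}{p(\text{data})} \;\propto\; p(\text{data} \mid \text{state})\,p(\text{state}).
```

On this reading, the recent deep learning advances are mainly about using far richer, learned mappings inside that inference, rather than about changing the underlying rule.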

By self-perception I mean the sensing of ourselves: our prediction of our own interactions with the environment, how what we might do could affect the environment, and how we will react to those effects. This has an interesting flavour of infinite regress. If we try to model ourselves and the environment, we need a model that is larger than ourselves and the environment. However, that model is part of us, so we need another model on top of that. This is the infinite regress, and it’s non-convergent. It strikes me that the only way we can get around that is to use a ‘compression’ of ourselves, i.e. have a model within our model in order to predict our interactions with the environment. This compressed model of ourselves will not be entirely accurate, and may mis-predict our own behaviour, but it is necessary to make the problem tractable.

A further complication is that our environment also contains other complex intelligent entities that try to second guess our behaviour. We need to also model them. I think one way we do this is by projecting our own model of ourselves onto them, i.e. using our own model of our own motivations, with appropriate modifications, to incorporate other people in our predictions. I see this as some form of ‘self-sensing’ and also sensing of others. I think doing it well may lead naturally to good planning algorithms, and planning was something that Yann mentioned we do badly. I don’t think we’re very good at this yet, and I think we would benefit from more open interaction with cognitive scientists and neuroscientists in understanding how humans do this. I know there’s a lot of research in this area in those fields, but I’m not an expert. Having a mathematical framework which shows how we can avoid this infinite regress through compression would be great.

These first thoughts were very much my thoughts about challenges for AI. The next thought tries to address AI in society.

Note 2: Creeping and Creepy AI

I think what we are seeing with successful AI is that it is emerging slowly, and without most people noticing. Large parts of our interactions with computers are dictated by machine learning algorithms. We were lucky to have Lars Backstrom at the meeting, who leads the team at Facebook that decides how to rank our news feed on the site. This is done by machine learning, but most people would be unaware that there is some ‘Artificial Intelligence’ underpinning it. Similarly, the ads we view across all sites are ranked by AI, and machine learning recommends products on Amazon. Machine learning is becoming a core computational technique; I was sitting next to Ralf when Amazon launched their new machine learning services on AWS. Driverless cars are another good example: they are underpinned by a lot of machine learning ideas, and those technologies are already appearing in normal cars. ‘Creeping AI’ is enhancing human abilities, improving us rather than replacing us, and allowing a seamless transition between what is human and what is computer. It demands better interaction between the human and the computer, and better understanding between them.

However, this leads to another effect that could be seen as ‘creepy AI’. When the transition between computer and human is done well, it can be difficult to see where the human stops and the machine learning starts. Learning systems are already very capable of understanding our personalities and desires. They do this in very different ways to how humans do it (see self-perception above!): they use large amounts of data about our previous behaviour, and that of other humans, to make predictions about our future behaviour. This can be seen as creepy. How do we avoid this? We need to improve people’s understanding of when AI is being used and what it is doing, and improve their ability to control it. Better control over our own data, and legislation to protect us, are both things I think we need in order to address that.

We can avoid AI being creepy by remaining open to debate, understanding what users want, but also giving them what they need. In the long term, users need a better understanding of our methodologies and their implications, as well as better control of how their data is being used. This is one of the motivations behind our open data science agenda.

Questions from the Audience

There were several questions from the audience, but the two that stuck out most for me were from Uli von Luxburg and Chris Watkins. Uli asked whether we had a responsibility to worry about the moral side when developing these methods; I believe she phrased it as how much we should be worrying about ‘creepy AI’. I didn’t get my answer in initially, and before I could there was a follow-up question from Chris about how we deal with the natural data monopoly. I’ve addressed these ideas before in the digital oligarchies post. Uli’s question is coming up more often, and a common answer to it is “this is something for society to decide”. I want to react strongly against that answer. Society is made up of people, and those people include experts. Those experts have a deeper understanding of the issues and implications than the general population. It’s true that there are philosophers and social scientists who can make important contributions to the debate, but it’s also true that amongst those with the best understanding of the implications of technology are those who are expert in it. If some of us don’t engage in the debate, then others will fill the vacuum.

Uli’s question was probably more about whether an individual researcher should worry about these issues, rather than whether we should engage in debate. However, even if we don’t choose to contribute to the debate, I feel there is an obligation on us to consider these issues in our research. In particular, the challenges we are creating by developing and sharing these technologies will require technological solutions as well as legislative change: the two go hand in hand. Certainly those of us who are academics, and funded by the public, would not be doing our job well if we weren’t anticipating these needs and driving the technology towards answering them.

The good news is that meetings like DALI are excellent for having such debates and engaging with different communities. I think when Bernhard initially envisaged the meeting, this atmosphere was what he was hoping for. That is also what got Thomas, Zoubin and myself excited about it. I think the meeting really achieved it.

The Meeting as a Whole

I haven’t mentioned too many of the thoughts of others, because they were offered informally, and often as a means of developing debate, but if I’ve misrepresented anything above please feel free to comment below. I also apologise for omitting all the interesting ideas others spoke about, but again I didn’t want to endanger the open atmosphere of the meeting by mistakenly misrepresenting someone else’s point of view (which may also have been presented in the spirit of devil’s advocate). I think the meeting was a great success, and we were already talking about the venue for next year.

Can you select for ‘robustness’?

[Photo: My mum and son preparing the ground for non-robust seeds]

I was at the allotment the other day, and my son Frederick asked how the seeds we plant could ever survive when it took so much work and preparation to plant and support them. I said it was because they’ve been selected (by breeding) to produce high yield, and that tends to make them less robust (in comparison to e.g. weeds). So he asked why we don’t breed in robustness. I instinctively said that you can’t do that, because breeding involves selecting for a characteristic, whereas (I think) robustness implies performance under a range of different conditions, some of which will not even be known to us. Of course, I agree you can breed in resistance to a particular circumstance, but I think robustness is about resistance to many circumstances. I think a robust population will include wide variation in characteristics, whereas selection by breeding tends to refine the characteristics, reducing variation. My reply was instinctive, but I think it’s broadly speaking correct, although it would be nice to find some counter-examples!

Questions on Deep Gaussian Processes

I was recently contacted by Chris Edwards, who is putting together an article for Communications of the ACM on deep learning and had a few questions on deep Gaussian processes. He kindly agreed to let me use his questions and my answers in a blog post.
1) Are there applications that suit Gaussian processes well? Would they typically replace the neural network layers in a deep learning system or would they possibly be mixed and matched with neural layers, perhaps as preprocessors or using the neural layers for stuff like feature extraction (assuming that training algorithms allow for this)?
Yes, I think there are applications that suit Gaussian processes very well, in particular applications where data is scarce (this doesn’t necessarily mean small data sets, but data that is scarce relative to the complexity of the system being modelled). In these scenarios, handling uncertainty in the model appropriately becomes very important. Two examples which have exploited this characteristic in practice are GaussianFace by Lu & Tang, and Bayesian optimization (e.g. Snoek, Larochelle and Adams). Almost all my own group’s work also exploits this characteristic. A further manifestation of this effect is what I call “massively missing data”. Although we are getting a lot of data at the moment, when you think about it you realise that almost all the things we would like to know are still missing almost all of the time. Deep models have performed well in situations where data sets are very well characterised and labelled. However, one of the domains that inspires me is clinical data, where this isn’t the case. In clinical data most people haven’t had most clinical tests applied to them most of the time. Also, the nature of clinical tests evolves (as do the diseases that affect patients). This is an example of massively missing data. I think Gaussian processes provide a very promising approach to handling this kind of data.
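As a rough illustration of the ‘uncertainty where data is scarce’ point, here is a minimal sketch of standard Gaussian process regression (my own toy example, not code from any of the papers mentioned; the data and hyperparameters are made up):

```python
# Minimal Gaussian process regression sketch: the posterior variance grows
# away from the observations, so the model 'knows what it doesn't know'.
import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    """Exponentiated quadratic covariance between two sets of inputs."""
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

# A handful of noisy observations: data scarce relative to the function.
X = np.array([[-4.0], [-2.5], [0.0], [1.0], [4.0]])
y = np.sin(X).ravel() + 0.1 * np.random.randn(len(X))
noise = 0.01

# Standard GP predictive equations at a grid of test inputs.
Xstar = np.linspace(-6, 6, 200)[:, None]
K = rbf(X, X) + noise * np.eye(len(X))
Ks = rbf(X, Xstar)
mean = Ks.T @ np.linalg.solve(K, y)
cov = rbf(Xstar, Xstar) - Ks.T @ np.linalg.solve(K, Ks)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))  # wide far from the data
```

The `std` term is the part that matters in the scarce-data setting: downstream decisions can take the model’s own uncertainty into account.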
With regard to whether they are a replacement for deep neural networks, I think in the end they may well be mixed and matched. From a Gaussian process perspective, the neural network layers could be seen as a type of ‘mean function’ (a Gaussian process is defined by its mean function and its covariance function), so they can be seen as part of the deep GP framework: deep Gaussian processes enhance the toolkit available. There is no conceptual reason why they shouldn’t be mixed and matched. You may well be right that the low-level feature extraction will still be done by parametric models like neural networks, but it’s certainly important that we use the right techniques in the right domains, and being able to interchange ideas enables that.
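In symbols (my sketch of the decomposition, not any specific published model), the ‘neural network as mean function’ view is just

```latex
f(\mathbf{x}) = m_{\theta}(\mathbf{x}) + g(\mathbf{x}), \qquad g \sim \mathcal{GP}\big(0,\, k(\mathbf{x}, \mathbf{x}')\big),
```

where $m_{\theta}$ could be a parametric (for example neural network) mapping whose parameters are learned alongside the covariance parameters, and the GP component $g$ models, with calibrated uncertainty, whatever the parametric part misses.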
2) Are there training algorithms that allow Gaussian processes to be used today for deep-learning type applications or is this where work needs to be done?
There are algorithms, yes: we have three different approaches right now, and it’s also clear that work in doubly stochastic variational inference (see for example Kingma and Welling, or Rezende, Mohamed and Wierstra) could also be applicable. But more work still needs to be done. In particular, a lot of the success of deep learning has been down to the engineering of the system: how to implement these models on GPUs and scale them to billions of data points. We’ve been starting to look at this (Dai, Damianou, Hensman and Lawrence) but there’s no doubt we are far behind and it’s a steep learning curve! We also don’t have quite the same computational resources as Facebook, Microsoft and Google!
3) Is the computational load similar to that of deep-learning neural networks or are the applications sufficiently different that a comparison is meaningless?
We carry an additional algorithmic burden, that of propagating uncertainty around the network. This is where the algorithmic problems begin, but it is also where we’ve had most of the breakthroughs. Propagating this uncertainty will always come with an additional load for a particular network, but it has particular advantages, like dealing with the massively missing data I mentioned above and automatic regularisation of the system. This has allowed us to automatically determine aspects like the number of layers in the network and the number of hidden nodes in each layer. This type of structural learning is very exciting and was one of the original motivations for considering these models. It has also enabled us to develop variants of Gaussian processes that can be used for multiview learning (Damianou, Ek, Titsias and Lawrence), and we intend to apply these ideas to deep GPs too.
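To give a feel for what ‘layers’ means here, the following is a minimal sketch of my own that draws a function from a two-layer deep GP prior (rather than fitting one; the kernel settings are arbitrary):

```python
# Sample a function from a two-layer deep GP prior: the output of one GP layer
# becomes the input to the next, which is why uncertainty has to be propagated
# between layers when these models are trained.
import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 150)[:, None]
jitter = 1e-6 * np.eye(len(X))

# Layer 1: hidden function h(x) drawn from a GP over the inputs.
h = rng.multivariate_normal(np.zeros(len(X)), rbf(X, X) + jitter)[:, None]

# Layer 2: output f drawn from a GP whose inputs are the hidden layer values.
f = rng.multivariate_normal(np.zeros(len(X)), rbf(h, h, lengthscale=0.5) + jitter)

# f is a draw from the composed prior: typically less smooth and more
# non-stationary than a single-layer GP draw over X.
```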
4) I think I saw a suggestion that GPs are reasonably robust when trained with small datasets – do they represent a way in for smaller organisation without bags of data? Is access to data a key problem when dealing with these data science techniques?
I think it’s a very good question; it’s an area we’re particularly interested in addressing. How can we bring data science to smaller organisations? I think it might relate to our ‘open data science’ initiative (see this blog post here). I refer to this idea as ‘analysis empowerment’. I hadn’t particularly thought of deep GPs in this way before, but can I hazard a possible yes to that? Certainly with GaussianFace we saw they could outperform DeepFace (from Facebook) with a small fraction of the data. For us it wasn’t the main motivation for developing deep GPs, but I’d like to think it might be a characteristic of the models. The motivating examples we have are more in the domain of applications that the current generation of supervised deep learning algorithms can’t address, like interconnection of data sets in health. Many of my group’s papers are about interconnecting different views of the patient (genotype, environmental background, clinical data, survival information … with luck even information from social networks and loyalty cards). We approach this through Gaussian process frameworks to ensure that we can build models that will be fully interconnected in application. We call this approach “deep health”. We aren’t there yet, but I feel there’s a lot of evidence so far that we’re working with a class of models that will do the job. My larger concern is the ethical implications of pulling this scale and diversity of information together. I find the idea of a world where we have computer models outperforming humans in predicting their own behaviour (perhaps down to the individual) quite disturbing. It seems to me that now the technology is coming within reach, we need to work hard to also address these ethical questions. And it’s important that this debate is informed by people who actually understand the technology.
5) On a more general point that I think can be explored within this feature, are techniques such as Gaussian processes at a disadvantage in computer science because of their heavy mathematical basis? (I’ve had interviews with people like Donald Knuth and Erol Gelenbe in the past where the idea has come up that computer science and maths should, if not merge, interact a lot more).
Yes, and no. It is true that people seem to have some difficulty with the concept of Gaussian processes. But it’s not that the mathematics is more complex than people are using (at the cutting edge) for deep neural networks. Any of the researchers leading the deep revolution could easily turn their hands to Gaussian processes if they chose to do so. Perhaps at ‘entry’ the concepts seem simpler in deep neural networks, but as you peer ‘deeper’ (forgive the pun) into those models it actually becomes a lot harder to understand what’s going on. The leading people (Hinton, Bengio, LeCun, etc.) seem to have really good intuitions, but these are not always easy to teach. Certainly when Geoff Hinton explains something to me I always feel I’ve got a very good grasp of it at the time, but later, when I try and explain the same concept to someone else, I find I can’t always do it (i.e., he’s got better intuitions than me, and he’s better at explaining than I am). There may be similar issues for explaining deep GPs, but my hope is that once the conceptual hurdle of a GP is surmounted, the resulting models are much easier to analyse. Such analysis should also feed back into the wider deep learning community, and I’m pleased that this is already starting to happen (see Duvenaud, Rippel, Adams and Ghahramani). Gaussian processes also generalise many different approaches to learning and signal processing (including neural networks), so understanding Gaussian processes well gives you an ‘in’ for many different areas. I agree, though, that the perception in the wider community matches your analysis. This is a major reason for the program of summer schools we’ve developed in Gaussian processes. So far we’ve taught over 200 students, and we have two further schools planned for 2015 with a developing program for 2016. We’ve made material freely available online, including lectures (on YouTube) and lab notes. So I hope we are doing something to address the perception that these models are harder mathematically!
I totally agree on the Maths/CS interface. It is, however, slightly frustrating (and perhaps inevitable) how much different academic disciplines become dominated by a particular culture of research. This can create barriers, particularly when it comes to formal publication (e.g. in the ‘leading’ journals). My group’s been working very hard over the last decade to combat this through organisation of workshops and summer schools that bridge the domains. It always seems to me that meeting people face to face helps us gain a shared understanding. For example, a lot of confusion can be generated by the slightly different ways we use technical terminology; it leads to a surprising number of misunderstandings that do take time to work through. However, through these meetings I’ve learned an enormous amount, particularly from the statistics community. Unfortunately, formal outlets and funding for this interface are still surprisingly difficult to find. This is not helped by the fact that the traditional professional societies don’t necessarily bridge the intellectual ground and sometimes engage in their own fights for territory. These cultural barriers also spill over into the organisation of funding. For example, in the UK it’s rare that my grant proposals are refereed by colleagues from the Maths/Stats community or that their grant proposals are refereed by me. They actually go to two totally separate parts of the relevant UK funding body. As a result both sets of proposals can be lost in the wider Maths and CS communities, which is not always conducive to expanding the interface. In the UK I’m hoping that the recent founding of the Alan Turing Institute will cause a bit of a shake-up in this area, and that some of these artificial barriers will fall away. But in summary, I totally agree with the point, while also recognising that on both sides of the divide we have created communities which can make collaboration harder.

Open Collaborative Grant Writing

Thanks to an introduction to the Sage Math team by Fernando Perez, I just had the pleasure of participating in a large-scale collaborative grant proposal construction exercise, co-ordinated by Nicolas Thiéry. I’ve collaborated on grants before, but for me this was a unique experience because the grant writing was carried out in the open, on github.

The proposal, ‘OpenDreamKit’, is principally about doing as much as possible to smooth collaboration between mathematicians so that advances in maths can be delivered as rapidly as possible to teachers, researchers, technologists etc. Although, of course, I don’t have to tell you that, because you can read it on github.

It was a wonderful social experiment, and I think it really worked, although a lot of the credit for that surely goes to the people involved (most of whom were there before I came aboard). I really hope this is funded, because collaborating with these people is going to be great.

For the first time on a proposal, I wasn’t the one who was most concerned about the LaTeX template (actually the second time … I’ve worked on a grant once with Wolfgang Huber). But this took things to another level: as soon as a feature was required, the LaTeX template seemed to be updated almost in real time, I think mainly by Michael Kohlhase.

Socially it was very interesting, because the etiquette of how to interact (on the editing side) was not necessarily clear at the outset. For example, at one point I was tasked with proof-reading a section, but ended up doing a lot of rephrasing. I was worried about whether people would be upset that their text had been changed, but actually there was a positive reaction (at least from Nicolas and Hans Fangohr!), which emboldened me to try more edits. As the deadline approached I think others went through a similar transition, because the proposal really came together in the last few days. It was a little like a school dance, where at the start we were all standing at the edge of the room, eyeing each other up, but as DJ Nicolas ramped things up and the music became a little more hardcore (as dawn drew near), barriers broke down and everyone went a little wild. Nicolas produced a YouTube video visualising the github commits.

As Alex Konovalov pointed out, we look like bees pollinating each other’s flowers!

I also discovered great new (for me) tools like appear.in that we used for brainstorming on ‘Excellence’ with Nicolas and Hans: much more convenient than Skype or Hangouts.

Many thanks to Nicolas, and all of the collaborators. I think it takes an impressive bunch of people to pull off such a thing, and regardless of outcome, which I very much hope will be positive, I look forward to further collaborations within this grouping.

Alan Turing Institute: Critical Mass or Incubated Lungs?

On Wednesday last week I attended an “Open Meeting” organised by the UK’s EPSRC Research Council on the Alan Turing Institute. The Turing Institute is a new government initiative that stems from a letter from our Chief Scientific Adviser to the Prime Minister about the “age of algorithms”. It aims to provide an international centre of excellence in data science.

The government has provided 42 million pounds of funding (about 60-70 million dollars) and Universities interested in partnering in the Turing Institute are expected to bring 5 million pounds (8 million dollars) to the initiative, to be spent over 5 years.

It seemed clear that the EPSRC will require that the institute is located in one place, and there was much talk of ‘critical mass’, which made me think about what ‘critical mass’ means in data science. After all, we aren’t building a Large Hadron Collider, and one of the most interesting challenges of the new age of data is its distributed nature. I asked a question about this and was given the answers you might expect: flagship international centre of excellence, stimulating environment, attracting the best of the best, etc. Nothing was particularly specific to data science.

In my own area of machine learning the UK has a lot of international recognition, but one of the features I’ve always enjoyed is the distributed nature of the expertise. The groups that spring first to mind are Cambridge (Engineering), Edinburgh (Informatics), UCL (Computer Science and Gatsby) and recently Oxford has expanded significantly (CS, Engineering and Statistics). I’ve always enjoyed the robustness that such a network of leading groups brings. It’s evolved over a period of 20 years, and those of us that have watched it grow are incredibly proud of what the UK has been able to achieve with relatively few people.

Data science requires strong interactions between statisticians and computer scientists. It requires knowledge of classical techniques and modern computational capabilities. The pool of expertise is currently rather small relative to the demand. As a result I find myself constantly in demand within my own University, mainly to advise on the capabilities of current approaches to analysis. A recent xkcd comic cleverly reminded us of how hard it can be to explain the gap between those things that are easy and those things that are virtually impossible. Although in many cases where advice is needed it’s not the full explanation that’s required, just the knowledge: many expensive errors can be avoided by just a little access to it. Back in July I posted a position paper that was targeting exactly this problem, and in Sheffield we are pursuing the “Open Data Science” agenda I proposed with vigour. Indeed, I sometimes wonder if my group is not more useful for this advice (which rarely involves any intellectual novelty) than for the ideas we push forward in our research. However, our utility as advisors is much more difficult to quantify, particularly because it often won’t lead to a formal collaboration.

I like analogies, but I think ‘critical mass’ is the wrong one here. To give better access to expertise, what is required is a higher surface-area-to-volume ratio, not a greater mass. Communication between experts is important, but we are fortunate in the UK to have a geographically close network of well connected Universities: many international visitors take the time to visit two or three of the leading groups when they are here. So I think the analogy of a lung is a far better one for describing what is required for UK data science. I’m pleased the government has recognised the importance of data science, I just hope that in their rush to create a flagship institute, with a large headline-grabbing investment figure associated, they don’t switch off the incubator that sustains our developing lungs.

Gaussian Process Summer School

Yesterday we finished our third Sheffield school. As with the previous events, we ended with a one-day workshop focussed on Gaussian processes, this time on using them for feature extraction. With such a busy summer it was pretty intimidating to take on the school so shortly after we had sent out decisions on NIPS. As ever, the group came through with the organisation. This time out Zhenwen Dai was the main organiser, but once again he could never have done it without the rest of the group chipping in. It’s another reminder that when you are working with great people, great things can happen.

The school always gives me a special kind of energy, the kind you can only get from seeing people enthuse about the things you care about. We were very lucky to have such a great group of speakers: Carl Rasmussen, Dan Cornford, Mike Osborne, Rich Turner, Joaquin Quinonero Candela, and then at the workshop Carl Henrik Ek, Andreas Damianou, Victor Prisacariu and Chaochao Lu. It always feels partly like a family reunion (we had brief overlaps between Carl, Joaquin (Sheffield Tap!), Lehel Csato and Magnus Rattray, all four of whom were in Sheffield for the 2005 GPRT) and partly like a welcoming event for new researchers. We covered important new developments in probabilistic numerics (Mike Osborne), time series processing (Rich Turner) and control (Carl Rasmussen). Joaquin also gave us insights into the evidence, and then presented to a University-wide audience on machine learning at Facebook.

In the workshop we also saw how GPs can be used for multiview learning (Carl Henrik Ek), audio processing (Rich Turner), deep learning (Andreas Damianou), shape representation (Victor Prisacariu) and face identification (Chaochao Lu).

We’ve now taught around 140 students through the schools in Sheffield and a further 60 through roadshows to Uganda and Colombia. Perhaps the best bit was watching everyone head for the Devonshire Cat after the last lecture to continue the debate. I think we all probably remember summer schools from our early times in research that were influential (for me the NATO ASI on Machine Learning and Generalisation; for many it will be the regular MLSS events). It’s nice to hope that this series of events may have also done something to influence others. The next scheduled events will be roadshows in Australia in February with Trevor Cohn and Kenya in June with Ciira wa Maina and John Quinn (although the Kenyan event will be more data science focussed than GP focussed).

Thanks to all in the group for organising!

EPSRC College of Reviewers

Yesterday, I resigned from the EPSRC college of reviewers.

The EPSRC is the national funding body in the UK for Engineering and Physical Sciences. The college of reviewers is responsible for reading grant proposals and making recommendations to panels with regard to the quality, feasibility and utility of the underlying work.

The EPSRC aims to fund international quality science, but the college of reviewers is a national body of researchers. Allocation of proposals to reviewers is done within the EPSRC.

In 2012 I was asked to review only one proposal, and in 2013 so far I have received none. The average number of review requests per college member in 2012 was 2.7.

It’s not that I haven’t been doing any proposal reviewing over the last 18 months, I’ve reviewed for the Dutch research councils, the EU, the Academy of Finland, the National Science Foundation (USA), BBSRC, MRC and I’m contracted as part of a team to provide a major review for the Canadian Institute for Advanced Research. I’d estimate that I’ve reviewed around 20 international applications in the area of machine learning and computational biology across this period.

I resigned from the EPSRC College of Reviewers because I don’t wish people to read the list of names in the college and assume that, as a member of the college, I am active in assessing the quality of the work the EPSRC is funding. Looking back over the last ten years, all the proposals I have reviewed have come from a very small body of researchers, all of whom, I know, nominate me as a reviewer.

Each submitted proposal nominates a number of reviewers who the proposers consider to be appropriate. The EPSRC chooses one of these nominated reviewers, and selects the remainder from the wider college.

Over a 12 year period as an academic, I have never been selected to review an EPSRC proposal unless I’ve been nominated by the proposers to do so.

So in many senses this resignation changes nothing, but by resigning from the college I’m highlighting the fact that if you do think I am appropriate for reviewing your proposal, then the only way it will happen is if you nominate me.