One issue that can arise for a conference is differences in interpretation of the reviewing scale. For a number of years (dating back to at least NIPS 2002) mis-calibration between reviewers has been corrected for with a model. Area chairs see not just the actual scores of the paper, but also ‘corrected scores’. Both are used in the decision-making process.

Reviewer calibration at NIPS dates back to a model first implemented in 2002 by John Platt when he was an area chair. It’s a regularized least squares model that Chris Burges and John wrote up in 2012. They’ve kindly made their write-up available here.

Calibrated scores are used alongside original scores to help in judging the quality of papers.

We also knew that Zoubin and Max had modified the model last year, along with their program manager Hong Ge. Before going through the previous work, however, we first approached the question independently. The model we came up with turned out to be pretty much identical to that of Hong, Zoubin and Max, and the approach we are using to compute the probability of accept was also identical. The model is a probabilistic reinterpretation of the Platt and Burges model: one that treats the bias parameters and quality parameters as latent variables that are normally distributed. Marginalizing out the latent variables leads to an ANOVA-style description of the data.

### The Model

Our assumption is that the score from the $j$th reviewer for the $i$th paper is given by

$$y_{i,j} = f_i + b_j + \epsilon_{i,j},$$

where $f_i$ is the *objective quality* of paper $i$ and $b_j$ is an *offset* associated with reviewer $j$. $\epsilon_{i,j}$ is a *subjective quality* estimate which reflects how a specific reviewer’s opinion differs from other reviewers (such differences in opinion may be due to differing expertise or perspective). The underlying ‘objective quality’ of the paper is assumed to be the same for all reviewers and the reviewer offset is assumed to be the same for all papers.

If we have $n$ papers and $m$ reviewers then this implies $n + m + nm$ values need to be estimated. Of course, in practice, the matrix is sparse, and we have no way of estimating the subjective quality for paper-reviewer pairs where no assignment was made. However, we can firstly assume that the subjective quality is drawn from a normal density with variance $\sigma^2$,

$$\epsilon_{i,j} \sim N\left(0, \sigma^2\right),$$

which reduces us to $n + m + 1$ parameters. The Platt-Burges model then estimated these parameters by regularized least squares. Instead, we follow Zoubin, Max and Hong’s approach of treating these values as latent variables. We assume that the objective quality, $f_i$, is also normally distributed with mean $\mu$ and variance $\alpha_f$,

$$f_i \sim N\left(\mu, \alpha_f\right),$$

this now reduces us to $m+3$ parameters. However, we only have approximately $4m$ observations (4 papers per reviewer), so these parameters may still not be that well determined (particularly for those reviewers that have only one review). We therefore also assume that the reviewer offset is a zero mean normally distributed latent variable,

$$b_j \sim N\left(0, \alpha_b\right),$$

leaving us with only four parameters: $\mu$, $\alpha_f$, $\alpha_b$ and $\sigma^2$. When we combine these assumptions together we see that our model assumes that any given review score is a combination of 3 normally distributed factors: the objective quality of the paper (variance $\alpha_f$), the subjective quality of the paper (variance $\sigma^2$) and the reviewer offset (variance $\alpha_b$). The *a priori* marginal variance of a reviewer-paper assignment’s score is the sum of these three components, $\alpha_f + \alpha_b + \sigma^2$. Cross-correlations between reviewer-paper assignments occur if either the reviewer is the same (when the cross covariance is given by $\alpha_b$) or the paper is the same (when the cross covariance is given by $\alpha_f$). With a constant mean coming from the mean of the ‘objective quality’, this gives us a joint model for reviewer scores as follows:

$$\mathbf{y} \sim N\left(\mu \mathbf{1}, \mathbf{K}\right),$$

where $\mathbf{y}$ is a vector of stacked scores, $\mathbf{1}$ is the vector of ones and the elements of the covariance function are given by

$$k(i, j; i', j') = \alpha_f \delta_{i, i'} + \alpha_b \delta_{j, j'} + \sigma^2 \delta_{i, i'} \delta_{j, j'},$$

where $i$ and $j$ are the index of the paper and reviewer in the rows of $\mathbf{K}$ and $i'$ and $j'$ are the index of the paper and reviewer in the columns of $\mathbf{K}$.
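As a numerical sketch of this covariance, the following Python builds $\mathbf{K}$ from paper and reviewer index vectors and checks the *a priori* marginal variance by sampling. All variance values here are illustrative, not fitted conference values.

```python
import numpy as np

def score_covariance(paper, reviewer, alpha_f, alpha_b, sigma2):
    """K gets alpha_f when two reviews share a paper, alpha_b when they share a
    reviewer, and an extra sigma2 when they share both (the diagonal)."""
    p = np.asarray(paper)[:, None]
    r = np.asarray(reviewer)[:, None]
    same_paper, same_reviewer = p == p.T, r == r.T
    return (alpha_f * same_paper
            + alpha_b * same_reviewer
            + sigma2 * (same_paper & same_reviewer))

# Three reviews: paper 0 seen by reviewers 0 and 1, paper 1 seen by reviewer 0.
K = score_covariance([0, 0, 1], [0, 1, 0], alpha_f=1.0, alpha_b=0.5, sigma2=0.25)
print(K)  # diagonal 1.75, same-paper entries 1.0, same-reviewer entries 0.5

# Sampling y ~ N(mu 1, K) for a larger random assignment approximately recovers
# the marginal variance alpha_f + alpha_b + sigma2 = 1.75.
rng = np.random.default_rng(0)
paper = np.repeat(np.arange(200), 3)              # 200 papers, 3 reviews each
reviewer = rng.integers(0, 150, size=paper.size)  # 150 reviewers, random assignment
K_big = score_covariance(paper, reviewer, 1.0, 0.5, 0.25)
y = rng.multivariate_normal(6.0 * np.ones(paper.size), K_big)
print(y.var(), y.mean())
```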

It can be convenient to reparameterize slightly into an overall scale $\alpha_f$ and normalized variance parameters,

$$k(i, j; i', j') = \alpha_f \left(\delta_{i, i'} + \frac{\alpha_b}{\alpha_f} \delta_{j, j'} + \frac{\sigma^2}{\alpha_f} \delta_{i, i'} \delta_{j, j'}\right),$$

which we rewrite to give two ratios: the offset/objective quality ratio, $\hat{\alpha}_b = \alpha_b/\alpha_f$, and the subjective/objective quality ratio, $\hat{\sigma}^2 = \sigma^2/\alpha_f$.

The advantage of this parameterization is that it allows us to optimize $\alpha_f$ directly through maximum likelihood (with a fixed point equation). This leaves us with only two free parameters, $\hat{\alpha}_b$ and $\hat{\sigma}^2$, that we might explore on a grid.

We expect both $\mu$ and $\alpha_f$ to be very well determined due to the number of observations in the data. The negative log likelihood is

$$E\left(\hat{\alpha}_b, \hat{\sigma}^2\right) = \frac{N}{2}\log \alpha_f + \frac{1}{2}\log \left|\hat{\mathbf{K}}\right| + \frac{1}{2\alpha_f}\left(\mathbf{y} - \mu\mathbf{1}\right)^\top \hat{\mathbf{K}}^{-1} \left(\mathbf{y} - \mu\mathbf{1}\right),$$

where $N$ is the length of $\mathbf{y}$ (i.e. the number of reviews) and $\hat{\mathbf{K}} = \alpha_f^{-1}\mathbf{K}$ is the scale normalised covariance. This negative log likelihood is easily minimized to recover

$$\alpha_f = \frac{1}{N} \left(\mathbf{y} - \mu\mathbf{1}\right)^\top \hat{\mathbf{K}}^{-1} \left(\mathbf{y} - \mu\mathbf{1}\right).$$

A Bayesian analysis of the $\alpha_f$ parameter is possible with gamma priors, but it would merely show that this parameter is extremely well determined (the degrees of freedom parameter of the associated Student-$t$ marginal likelihood scales with the number of reviews, which will be in the thousands in our case).
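The maximum likelihood step above can be sketched numerically as follows. The function names are mine, and `K_hat` stands for the scale-normalised covariance $\hat{\mathbf{K}}$ from the text:

```python
import numpy as np

def alpha_f_mle(y, mu, K_hat):
    """Closed-form ML estimate: alpha_f = (1/N) (y - mu 1)^T K_hat^{-1} (y - mu 1)."""
    r = y - mu
    return r @ np.linalg.solve(K_hat, r) / len(y)

def neg_log_likelihood(y, mu, alpha_f, K_hat):
    """E = (N/2) log alpha_f + (1/2) log|K_hat| + (1/(2 alpha_f)) r^T K_hat^{-1} r."""
    r = y - mu
    _, logdet = np.linalg.slogdet(K_hat)
    quad = r @ np.linalg.solve(K_hat, r)
    return 0.5 * (len(y) * np.log(alpha_f) + logdet + quad / alpha_f)
```

Since $\hat{\mathbf{K}}$ depends on the two ratio parameters, in practice one would evaluate this profile likelihood on a grid over $(\hat{\alpha}_b, \hat{\sigma}^2)$, plugging the closed-form $\alpha_f$ in at each grid point.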

We can set these parameters by maximum likelihood and then we can remove the offset from the model by computing the conditional distribution over the paper scores with the bias removed, $s_{i,j} = f_i + \epsilon_{i,j}$. This conditional distribution is found as

$$\mathbf{s} \mid \mathbf{y}, \alpha_f, \alpha_b, \sigma^2 \sim N\left(\boldsymbol{\mu}_s, \boldsymbol{\Sigma}_s\right),$$

where

$$\boldsymbol{\mu}_s = \mathbf{K}_s \mathbf{K}^{-1} \left(\mathbf{y} - \mu\mathbf{1}\right) + \mu\mathbf{1}$$

and

$$\boldsymbol{\Sigma}_s = \mathbf{K}_s - \mathbf{K}_s \mathbf{K}^{-1} \mathbf{K}_s,$$

and $\mathbf{K}_s$ is the covariance associated with the quality terms only, with elements given by

$$k_s(i, j; i', j') = \alpha_f \delta_{i, i'} + \sigma^2 \delta_{i, i'} \delta_{j, j'}.$$

We now use $\boldsymbol{\mu}_s$ (which is both the mode and the mean of the posterior over $\mathbf{s}$) as the calibrated quality score.
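The calibration step can be sketched in Python on a toy setting with two papers and two reviewers, where reviewer 1 scores systematically high. All variance values are illustrative:

```python
import numpy as np

def calibrated_scores(y, paper, reviewer, mu, alpha_f, alpha_b, sigma2):
    """Posterior mean of s = f + eps given y: mu_s = K_s K^{-1} (y - mu 1) + mu 1."""
    p = np.asarray(paper)[:, None]
    r = np.asarray(reviewer)[:, None]
    same_p, same_r = p == p.T, r == r.T
    K = alpha_f * same_p + alpha_b * same_r + sigma2 * (same_p & same_r)
    K_s = alpha_f * same_p + sigma2 * (same_p & same_r)  # quality terms only
    return K_s @ np.linalg.solve(K, y - mu) + mu

# (paper, reviewer) pairs: (0,0), (0,1), (1,0), (1,1).
y = np.array([6.0, 8.0, 5.0, 7.0])
s = calibrated_scores(y, [0, 0, 1, 1], [0, 1, 0, 1],
                      mu=6.5, alpha_f=1.0, alpha_b=1.0, sigma2=0.25)
print(s.round(3))  # prints [6.889 7.111 5.889 6.111]
```

Reviewer 1’s scores (8 and 7) are pulled down and reviewer 0’s scores (6 and 5) are pulled up, which is exactly the bias removal the conditional distribution is designed to achieve.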

### Analysis of Variance

The model above is a type of Gaussian process model with a specific covariance function (or kernel). The variances are highly interpretable though, because the covariance function is made up of a sum of effects. Studying these variances is known as analysis of variance (ANOVA) in statistics, and such models are commonly used to account for batch effects. It is easy to extend this model to include batch effects such as whether or not the reviewer is a student or whether or not the reviewer has published at NIPS before. We will conduct these analyses in due course. Last year, Zoubin, Max and Hong explored whether the reviewer confidence could be included in the model, but they found it did not help with performance on hold out data.
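A batch-effect extension of this kind can be sketched by adding one more additive term to the covariance. Here the `is_student` grouping and its variance `alpha_s` are hypothetical illustrations, not part of the fitted NIPS model:

```python
import numpy as np

def covariance_with_batch(paper, reviewer, is_student,
                          alpha_f, alpha_b, sigma2, alpha_s):
    """K with an extra additive variance alpha_s shared by reviews whose
    reviewers fall in the same batch (here: student vs non-student)."""
    p = np.asarray(paper)[:, None]
    r = np.asarray(reviewer)[:, None]
    g = np.asarray(is_student)[:, None]
    same_p, same_r, same_g = p == p.T, r == r.T, g == g.T
    return (alpha_f * same_p + alpha_b * same_r
            + sigma2 * (same_p & same_r) + alpha_s * same_g)

# Three reviews; reviewer 1 (a student, say) handles the last two.
K = covariance_with_batch(paper=[0, 0, 1], reviewer=[0, 1, 1],
                          is_student=[0, 1, 1],
                          alpha_f=1.0, alpha_b=0.5, sigma2=0.25, alpha_s=0.1)
print(K)
```

The size of the fitted `alpha_s` relative to the other variances would indicate how much of the score variation the batch effect explains.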

### Probability of Acceptance

To predict the probability of acceptance of any given paper, we sample from the multivariate normal that gives the posterior over $\mathbf{s}$. Each sample is sorted according to the values of $\mathbf{s}$, and the top scoring papers are considered to be accepts. We draw 1000 such samples, and the probability of acceptance for each paper is the proportion of samples in which it received a positive outcome.
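This Monte Carlo procedure can be sketched as follows. The per-paper averaging of the sampled review-level scores is my assumption about how the samples are turned into a paper ranking; `acceptance_probability` and its arguments are illustrative names:

```python
import numpy as np

def acceptance_probability(mu_s, Sigma_s, paper, n_accept,
                           n_samples=1000, seed=0):
    """Sample calibrated scores from N(mu_s, Sigma_s), rank papers by their
    mean sampled score, accept the top n_accept in each sample, and return
    the fraction of samples in which each paper is accepted."""
    rng = np.random.default_rng(seed)
    paper = np.asarray(paper)
    papers = np.unique(paper)
    wins = np.zeros(papers.size)
    for s in rng.multivariate_normal(mu_s, Sigma_s, size=n_samples):
        means = np.array([s[paper == q].mean() for q in papers])
        accepted = np.argsort(means)[::-1][:n_accept]
        wins[accepted] += 1
    return wins / n_samples

# Toy example: paper 0's reviews are clearly stronger and one slot is available.
p = acceptance_probability(np.array([8.0, 8.0, 4.0, 4.0]), 0.01 * np.eye(4),
                           paper=[0, 0, 1, 1], n_accept=1)
print(p)  # prints [1. 0.]
```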

I’d like to see how well your probability of acceptance predicts actual acceptance, as this seems to be the only metric of importance. The scores are only there to aid in this decision. Are the calibrated scores any better at predicting acceptance than the uncalibrated scores? The scatter plot indicates that generally the calibrated scores are pulled closer to the center of the scale (low scores tend to be lifted and high scores tend to be lowered), which makes the acceptance/rejection decision more difficult.

As you say, accept/reject decisions are based on what the area chair, reviewers and program chairs think about the paper. Accept/reject decisions follow a teleconference between area chairs and program chairs, as well as discussion between reviewers. The probability of acceptance is based on a model, and the simplifications of the model are outlined above. We are in the fortunate position of having a reviewing body who understand the limitations of models whilst appreciating the benefits they bring. One of the advantages of making the model public is that it allows them to do this in an informed way.