1 About

In this topic we discuss how to compute predictions using parts of the linear predictor. We use a general approach that should be applicable to all prediction scenarios.

The goal is to learn how to build up simulations for model fit and predictions. There are more convenient approaches if you only want to summarise or plot part of the posterior.

We recommend reading btopic102 first, as this explains the data and the model.

1.1 Initialise R session

We load the required packages and set global options. You may need to install these libraries (Installation and general troubleshooting).

library(INLA)
rm(list=ls())
options(width=70, digits=2)
set.seed(2017)

2 Load data, rename, rescale

data(Seeds)
df = data.frame(y = Seeds$r, Ntrials = Seeds$n, Seeds[, 3:5])

3 Model

3.1 Observation Likelihood

family1 = "binomial"
control.family1 = list(control.link=list(model="logit"))
# number of trials is df$Ntrials

3.2 Formula

hyper1 = list(theta = list(prior="pc.prec", param=c(1,0.01)))
formula1 = y ~ x1 + x2 + f(plate, model="iid", hyper=hyper1)

4 Call INLA

res1 = inla(formula=formula1, data=df, 
            family=family1, Ntrials=Ntrials, 
            control.family=control.family1,
            control.predictor=list(compute=T),
            control.compute=list(config=T))

4.1 Look at results

summary(res1)
## 
## Call:
##    c("inla.core(formula = formula, family = family, contrasts = 
##    contrasts, ", " data = data, quantiles = quantiles, E = E, 
##    offset = offset, ", " scale = scale, weights = weights, 
##    Ntrials = Ntrials, strata = strata, ", " lp.scale = lp.scale, 
##    link.covariates = link.covariates, verbose = verbose, ", " 
##    lincomb = lincomb, selection = selection, control.compute = 
##    control.compute, ", " control.predictor = control.predictor, 
##    control.family = control.family, ", " control.inla = 
##    control.inla, control.fixed = control.fixed, ", " control.mode 
##    = control.mode, control.expert = control.expert, ", " 
##    control.hazard = control.hazard, control.lincomb = 
##    control.lincomb, ", " control.update = control.update, 
##    control.lp.scale = control.lp.scale, ", " control.pardiso = 
##    control.pardiso, only.hyperparam = only.hyperparam, ", " 
##    inla.call = inla.call, inla.arg = inla.arg, num.threads = 
##    num.threads, ", " blas.num.threads = blas.num.threads, keep = 
##    keep, working.directory = working.directory, ", " silent = 
##    silent, inla.mode = inla.mode, safe = FALSE, debug = debug, ", 
##    " .parent.frame = .parent.frame)") 
## Time used:
##     Pre = 2.82, Running = 0.466, Post = 0.0489, Total = 3.33 
## Fixed effects:
##              mean   sd 0.025quant 0.5quant 0.975quant  mode kld
## (Intercept) -0.39 0.18      -0.74    -0.39    -0.031 -0.40   0
## x1          -0.36 0.23      -0.84    -0.35     0.059 -0.34   0
## x2           1.03 0.22       0.59     1.03     1.452  1.04   0
## 
## Random effects:
##   Name     Model
##     plate IID model
## 
## Model hyperparameters:
##                      mean    sd 0.025quant 0.5quant 0.975quant mode
## Precision for plate 21.39 41.32       2.96    10.62     88.71 6.06
## 
## Marginal log-Likelihood:  -68.44 
##  is computed 
## Posterior summaries for the linear predictor and the fitted values are computed
## (Posterior marginals needs also 'control.compute=list(return.marginals.predictor=TRUE)')

5 Draw posterior samples

n.samples = 1000
samples = inla.posterior.sample(n.samples, result = res1)

This function draws posterior samples of the linear predictor and all its components. The samples of the hyper-parameters are drawn only from the integration grid (the grid of hyper-parameter values at which the posteriors of all the latent variables were computed).

5.1 The structure of the samples

The object samples is a list of posterior samples; let us look at the first sample.

str(samples[[1]])
## List of 3
##  $ hyperpar: Named num 1.75
##   ..- attr(*, "names")= chr "Precision for plate"
##  $ latent  : num [1:45, 1] -0.826 -0.402 -0.744 -0.303 -0.33 ...
##   ..- attr(*, "dimnames")=List of 2
##   .. ..$ : chr [1:45] "Predictor:1" "Predictor:2" "Predictor:3" "Predictor:4" ...
##   .. ..$ : NULL
##  $ logdens :List of 3
##   ..$ hyperpar: num -4.76
##   ..$ latent  : num -4.84
##   ..$ joint   : num -9.6

Here, $hyperpar is the part of the first sample representing the hyper-parameters, $latent represents all the latent variables, and $logdens contains the log probability densities (of the hyper-parameter part, the latent part, and the joint sample). The $latent part, however, is quite complicated.

t(samples[[1]]$latent)
##      Predictor:1 Predictor:2 Predictor:3 Predictor:4 Predictor:5
## [1,]       -0.83        -0.4       -0.74        -0.3       -0.33
##      Predictor:6 Predictor:7 Predictor:8 Predictor:9 Predictor:10
## [1,]        0.91        0.95        0.64        0.55         0.37
##      Predictor:11 Predictor:12 Predictor:13 Predictor:14 Predictor:15
## [1,]         0.21         0.42        -0.64        -0.67        -0.45
##      Predictor:16 Predictor:17 Predictor:18 Predictor:19 Predictor:20
## [1,]         -1.2        -0.41         0.27        -0.38         0.44
##      Predictor:21 plate:1 plate:2 plate:3 plate:4 plate:5 plate:6
## [1,]        -0.55    -0.7   -0.28   -0.62   -0.18    -0.2    0.85
##      plate:7 plate:8 plate:9 plate:10 plate:11 plate:12 plate:13
## [1,]    0.89    0.58    0.49     0.31     0.14     0.89    -0.18
##      plate:14 plate:15 plate:16 plate:17 plate:18 plate:19 plate:20
## [1,]     -0.2    0.019    -0.72    -0.13     0.55     -0.1     0.72
##      plate:21 (Intercept):1  x1:1 x2:1
## [1,]    -0.27         -0.13 -0.34 0.19
# - transposed for shorter printing
# - this is a column vector

Here, the names show which part of the latent field these sample values are from. First comes the Predictor, which is the linear predictor \(\eta\); then comes the random effect named plate, with its values at each plate level. Then come the fixed effects (one of them is called (Intercept)).

5.2 How to access the right part of $latent

The difficulty now is writing generic code that will work for very complex models. E.g. in a spatial model, it would be horrific to have to read through all the names and figure out which was which.

I will now create generic code through the use of a hidden part of the INLA result object.

res1$misc$configs$contents
## $tag
## [1] "Predictor"   "plate"       "(Intercept)" "x1"         
## [5] "x2"         
## 
## $start
## [1]  1 22 43 44 45
## 
## $length
## [1] 21 21  1  1  1

Here, you see the names of the latent components, and their start index and their length. Using this, we can find the indices for plate in the following way.

contents = res1$misc$configs$contents
effect = "plate"
id.effect = which(contents$tag==effect)
# - the numerical id of the effect
ind.effect = contents$start[id.effect]-1 + (1:contents$length[id.effect])
# - all the indices for the effect
# - these are the indexes in the sample[[1]]$latent !
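
If you need this lookup for several effects, it can be wrapped in a small helper function. This is only a convenience sketch; get.effect.indices is our own name, not an INLA function (and it requires config=T in control.compute, as we used above).

get.effect.indices = function(result, effect) {
  contents = result$misc$configs$contents
  id = which(contents$tag == effect)
  contents$start[id] - 1 + (1:contents$length[id])
}
# - get.effect.indices(res1, "plate") equals ind.effect above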

The plate part of the first sample, and of every sample, can then be extracted.

## See an example for the first sample
samples[[1]]$latent[ind.effect, , drop=F]
##            [,1]
## plate:1  -0.700
## plate:2  -0.276
## plate:3  -0.618
## plate:4  -0.177
## plate:5  -0.204
## plate:6   0.846
## plate:7   0.888
## plate:8   0.580
## plate:9   0.491
## plate:10  0.310
## plate:11  0.144
## plate:12  0.888
## plate:13 -0.178
## plate:14 -0.204
## plate:15  0.019
## plate:16 -0.715
## plate:17 -0.128
## plate:18  0.553
## plate:19 -0.102
## plate:20  0.722
## plate:21 -0.272
## Draw this part of every sample
samples.effect = lapply(samples, function(x) x$latent[ind.effect])

5.3 Matrix output

We create a more readable version of these samples.

s.eff = matrix(unlist(samples.effect), byrow = T, nrow = length(samples.effect))
# - s.eff means samples.effect, just as a matrix
colnames(s.eff) = rownames(samples[[1]]$latent)[ind.effect]
# - retrieve names from original object
summary(s.eff)
##     plate:1         plate:2         plate:3         plate:4     
##  Min.   :-1.60   Min.   :-0.96   Min.   :-1.37   Min.   :-0.68  
##  1st Qu.:-0.49   1st Qu.:-0.23   1st Qu.:-0.50   1st Qu.: 0.07  
##  Median :-0.28   Median :-0.07   Median :-0.32   Median : 0.23  
##  Mean   :-0.30   Mean   :-0.07   Mean   :-0.34   Mean   : 0.24  
##  3rd Qu.:-0.10   3rd Qu.: 0.09   3rd Qu.:-0.15   3rd Qu.: 0.40  
##  Max.   : 0.51   Max.   : 0.61   Max.   : 0.47   Max.   : 1.11  
##     plate:5         plate:6         plate:7         plate:8     
##  Min.   :-0.82   Min.   :-1.11   Min.   :-0.54   Min.   :-0.37  
##  1st Qu.:-0.11   1st Qu.:-0.08   1st Qu.: 0.03   1st Qu.: 0.12  
##  Median : 0.03   Median : 0.09   Median : 0.17   Median : 0.30  
##  Mean   : 0.04   Mean   : 0.14   Mean   : 0.18   Mean   : 0.31  
##  3rd Qu.: 0.19   3rd Qu.: 0.33   3rd Qu.: 0.33   3rd Qu.: 0.47  
##  Max.   : 0.98   Max.   : 1.52   Max.   : 1.06   Max.   : 1.22  
##     plate:9         plate:10        plate:11        plate:12    
##  Min.   :-0.84   Min.   :-0.93   Min.   :-0.83   Min.   :-0.72  
##  1st Qu.:-0.21   1st Qu.:-0.35   1st Qu.:-0.05   1st Qu.: 0.00  
##  Median :-0.06   Median :-0.18   Median : 0.11   Median : 0.18  
##  Mean   :-0.06   Mean   :-0.19   Mean   : 0.14   Mean   : 0.22  
##  3rd Qu.: 0.10   3rd Qu.:-0.03   3rd Qu.: 0.32   3rd Qu.: 0.40  
##  Max.   : 1.20   Max.   : 0.44   Max.   : 1.51   Max.   : 1.35  
##     plate:13        plate:14        plate:15        plate:16    
##  Min.   :-0.95   Min.   :-1.02   Min.   :-0.39   Min.   :-1.43  
##  1st Qu.:-0.18   1st Qu.:-0.25   1st Qu.: 0.20   1st Qu.:-0.35  
##  Median : 0.00   Median :-0.07   Median : 0.39   Median :-0.11  
##  Mean   : 0.01   Mean   :-0.08   Mean   : 0.42   Mean   :-0.15  
##  3rd Qu.: 0.17   3rd Qu.: 0.09   3rd Qu.: 0.60   3rd Qu.: 0.07  
##  Max.   : 1.15   Max.   : 0.81   Max.   : 1.60   Max.   : 1.11  
##     plate:17        plate:18        plate:19        plate:20    
##  Min.   :-1.68   Min.   :-1.17   Min.   :-1.31   Min.   :-0.65  
##  1st Qu.:-0.56   1st Qu.:-0.22   1st Qu.:-0.29   1st Qu.:-0.02  
##  Median :-0.30   Median :-0.05   Median :-0.11   Median : 0.13  
##  Mean   :-0.34   Mean   :-0.06   Mean   :-0.12   Mean   : 0.15  
##  3rd Qu.:-0.09   3rd Qu.: 0.11   3rd Qu.: 0.06   3rd Qu.: 0.31  
##  Max.   : 0.60   Max.   : 0.70   Max.   : 0.86   Max.   : 1.17  
##     plate:21    
##  Min.   :-1.20  
##  1st Qu.:-0.28  
##  Median :-0.08  
##  Mean   :-0.09  
##  3rd Qu.: 0.10  
##  Max.   : 1.16

5.4 Verifying output

We compare these samples to the usual posterior summaries.

cbind(colMeans(s.eff), res1$summary.random$plate$mean)
##            [,1]   [,2]
## plate:1  -0.304 -0.309
## plate:2  -0.073 -0.087
## plate:3  -0.337 -0.333
## plate:4   0.241  0.227
## plate:5   0.040  0.056
## plate:6   0.141  0.111
## plate:7   0.178  0.167
## plate:8   0.309  0.300
## plate:9  -0.062 -0.063
## plate:10 -0.194 -0.195
## plate:11  0.142  0.128
## plate:12  0.215  0.222
## plate:13  0.010  0.021
## plate:14 -0.079 -0.065
## plate:15  0.419  0.410
## plate:16 -0.146 -0.141
## plate:17 -0.335 -0.316
## plate:18 -0.060 -0.060
## plate:19 -0.118 -0.114
## plate:20  0.151  0.135
## plate:21 -0.090 -0.091
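
The same check can be made for the posterior standard deviations (a quick sketch):

cbind(apply(s.eff, 2, sd), res1$summary.random$plate$sd)
# - sampled vs. computed posterior standard deviations for plate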

6 Prediction where we have data

We want to compute a prediction where we already have a data point. The goal is to predict observation 7, which is:

df[7, ]
##    y Ntrials x1 x2 plate
## 7 53      74  0  1     7

6.1 Using the linear predictor

The easiest approach is to use the linear predictor. Since this is the 7th row of df, we pick out the 7th predictor. Note: the value of plate is not important here, as that is just a covariate value (which happens to be the same as the row number). The linear predictor always starts at the first element of $latent, so we do not need to look up its indices.

## see one example value
samples[[1]]$latent[7, , drop=F]
##             [,1]
## Predictor:7 0.95
## Draw this part of every sample
samples.pred7 = lapply(samples, function(x) x$latent[7])
samples.pred7 = unlist(samples.pred7)

We plot a density of these samples, with a reference line at the naive estimate logit(y/N). You could also draw a histogram with truehist (from the MASS package).

plot(density(samples.pred7))
## Naive estimate of the predictor, logit(y/N):
abline(v = qlogis(df$y[7]/df$Ntrials[7]), col="blue")

6.2 Transforming the predictor samples to data-value samples

We now need to transform through the link function and the data sampling. We know we used the logit link, but we can double-check what is stored in the result:

res1$.args$control.family$control.link$model
## NULL

(A NULL here only means that nothing is stored at this exact location; the default link for the binomial family is the logit, which is also what we specified.) You can set up the inverse logit function yourself, or use inla.link.invlogit(x).
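
Writing the inverse logit yourself is a one-liner (the name invlogit is our own); it is mathematically identical to inla.link.invlogit:

invlogit = function(x) 1/(1 + exp(-x))
# - maps the linear predictor scale to the probability scale

We transform the samples through the link.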

samples.link7 = inla.link.invlogit(samples.pred7)

plot(density(samples.link7))

## Add the naive estimate of binomial probability y/N:
abline(v = df$y[7]/df$Ntrials[7], col="blue")

6.3 Adding observation likelihood randomness

The following code is specific to discrete likelihoods, where we can compute all the probabilities exactly. What we do here is discrete integration: for each sampled success probability we compute the exact binomial probabilities, and then average over the samples.

## What range of observations do we want to compute
discrete.range = 0:100
## Initialize the probabilities 
probs = rep(0, length(discrete.range))

for (i in 1:length(samples.link7)) {
  probs = probs + dbinom(discrete.range, size=df$Ntrials[7], prob = samples.link7[i])
}
probs = probs / length(samples.link7)
# - average the exact binomial probabilities over the posterior samples
names(probs) = discrete.range
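
The same discrete integration can be written without the explicit loop; this sketch with sapply produces the same probs vector:

probs.alt = rowMeans(sapply(samples.link7, function(p)
  dbinom(discrete.range, size = df$Ntrials[7], prob = p)))
# - a matrix of exact binomial probabilities (one column per sample),
#   averaged over the samples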

Let us now plot the results, together with the observed value.

plot(discrete.range, probs, xlim=c(37, 67))
# - plot against the observation values rather than the vector index
abline(v=df$y[7], col="blue")

6.4 Alternative way: Building up the samples model component by model component

It is important to know how to build up a sample of the (linear) predictor by adding together the model components (fixed effects and random effects), as this enables you to do anything! Knowing how to do this also lets us predict at unsampled locations.

## Remember the covariate values
df[7, ]
##    y Ntrials x1 x2 plate
## 7 53      74  0  1     7
## Remember the formula
res1$.args$formula
## y ~ x1 + x2 + f(plate, model = "iid", hyper = hyper1)
## NULL

For sample number 57 (as an example) we add it all up:

nr = 57
s = samples[[nr]]$latent

## beta1 * x1, with x1 = 0
f.x1.0 = s[44, , drop=F] * 0
## beta2 * x2, with x2 = 1
f.x2.1 = s[45, , drop=F] * 1
## f(plate = 7)
f.plate.7 = s[28, , drop=F]
## The intercept
int = s[43, , drop=F]

pred.sum = drop(f.x1.0 + f.x2.1 + f.plate.7 + int)
# - avoid the name sum, which would mask base::sum
list(model.component.sum.7 = pred.sum, linear.predictor.7 = samples.pred7[nr])
## $model.component.sum.7
## x1:1 
##    1 
## 
## $linear.predictor.7
## [1] 1

These two numbers are equal (up to numerical precision), as they represent the exact same quantity. To do this for all the samples, wrap the computation in a loop over nr, as sketched below.
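
A minimal sketch of that loop, using the indices we found earlier (43 is the intercept, 44 is x1, 45 is x2, and 28 is plate:7); the name pred7.sum is ours:

pred7.sum = numeric(length(samples))
for (nr in 1:length(samples)) {
  s = samples[[nr]]$latent
  pred7.sum[nr] = s[43] + s[44]*0 + s[45]*1 + s[28]
}
## all.equal(pred7.sum, samples.pred7) should be TRUE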

7 Predicting for a different experiment on an old plate

Using the same method as above, we can define a new experiment and predict its outcome.

our.experiment = list(x1 = 0, x2 = 0, plate = 7)

nr = 57
s = samples[[nr]]$latent

## beta1 * x1
f.x1 = s[44, , drop=F] * our.experiment$x1
## beta2 * x2
f.x2 = s[45, , drop=F] * our.experiment$x2
## f(plate = 7)
# - the same plate as before
f.plate.7 = s[28, , drop=F]
## The intercept
int = s[43, , drop=F]

predictor.our.experiment = drop(f.x1 + f.x2 + f.plate.7 + int)

Again, to get all the samples, wrap the computation in a for loop, as sketched below. Then go through the link and the likelihood as we did in the predictor example.
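
A minimal sketch of that loop, including the link transform (the object names are ours):

pred.our.exp = numeric(length(samples))
for (nr in 1:length(samples)) {
  s = samples[[nr]]$latent
  pred.our.exp[nr] = s[43] + s[44]*our.experiment$x1 +
    s[45]*our.experiment$x2 + s[28]
}
## Transform to probability samples through the link
p.our.exp = inla.link.invlogit(pred.our.exp)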

8 Predicting for a different experiment on a new plate

Now we cannot use the sample from plate number 7. We want a new plate; let us call it plate number 50. The result of our INLA model does not include a plate number 50.

Even though there is no information in our data about what happens on plate number 50, there is information about what happens on plates “in general”. This information lies in the pooling parameter, the sigma (equivalently, the precision) of the plate effect. Let us find our sample of this parameter.

## As before:
our.experiment = list(x1 = 0, x2 = 0, plate = 50)
nr = 57

## Want a hyper-parameter
plate.precision = samples[[nr]]$hyperpar[1]
plate.sigma = plate.precision^-0.5
names(plate.sigma) = "sigma"

print(plate.sigma)
## sigma 
##  0.59

Our model for plate was a Gaussian IID effect with standard deviation sigma, so we can draw a sample for the new plate ourselves:

sample.plate50 = rnorm(1, sd=plate.sigma)

And then we can complete our sample number nr:

## As before
s = samples[[nr]]$latent
## beta1 * x1 and beta2 * x2
f.x1 = s[44, , drop=F] * our.experiment$x1
f.x2 = s[45, , drop=F] * our.experiment$x2

## f(plate = 50)
f.plate.50 = sample.plate50
## The intercept
int = s[43, , drop=F]

predictor.experiment.newplate = drop(f.x1 + f.x2 + f.plate.50 + int)

A for loop is needed to produce all the samples; we also need to transform through the link and add the observation likelihood, as sketched below.
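
A minimal sketch of that loop; note that a fresh plate effect is drawn for every sample, using that sample's own precision (the object names are ours):

pred.newplate = numeric(length(samples))
for (nr in 1:length(samples)) {
  s = samples[[nr]]$latent
  plate.sigma = samples[[nr]]$hyperpar[1]^-0.5
  pred.newplate[nr] = s[43] + s[44]*our.experiment$x1 +
    s[45]*our.experiment$x2 + rnorm(1, sd = plate.sigma)
}
## Transform to probability samples through the link
p.newplate = inla.link.invlogit(pred.newplate)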

9 Comments