Introduction to Mixed Modelling first introduces the criterion of REstricted Maximum Likelihood (REML) for fitting a mixed model to data. It then illustrates how to apply mixed model analysis to a wide range of situations, how to estimate the variance due to each random-effect term in the model, and how to obtain and interpret Best Linear Unbiased Predictors (BLUPs), estimates of individual effects that take account of their random nature. The book is intended as an introductory guide to a relatively advanced and specialised topic, and aims to convince the reader that mixed modelling is neither so specialised nor so difficult as it may at first appear.
This edition presents new material in the following areas:
- Use of mixed models for meta-analysis of a set of experiments, especially clinical trials.
- The Bayesian interpretation of best linear unbiased predictors (BLUPs).
- The multiple-testing problem and the shrinkage of BLUPs as a defence against the ‘Winner’s Curse’.
- The implementation of mixed models in the statistical software SAS.
- Increasing the accuracy of significance tests in mixed models by estimation of the denominator degrees of freedom (the Kenward-Roger method).
- Tests for comparison of non-nested mixed models, using the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC).
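The shrinkage of BLUPs mentioned above can be illustrated with a minimal sketch for the one-way random-effects model y_ij = mu + b_i + e_ij. The function name, the example data, and the variance components sigma2_b and sigma2_e below are purely illustrative assumptions; in practice the variance components would be estimated by REML, as the book describes.

```python
# Minimal sketch of BLUP shrinkage for group effects in a one-way
# random-effects model, y_ij = mu + b_i + e_ij.  The variance
# components sigma2_b and sigma2_e are assumed known here for
# illustration; in practice they are estimated by REML.
import numpy as np

def blup_shrinkage(group_means, n_per_group, grand_mean, sigma2_b, sigma2_e):
    """Return BLUPs of the group effects b_i.

    Each group's raw deviation from the grand mean is multiplied by
    the shrinkage factor sigma2_b / (sigma2_b + sigma2_e / n_i) < 1.
    This pulling of extreme raw means back toward the overall mean is
    the defence against the 'Winner's Curse'.
    """
    group_means = np.asarray(group_means, dtype=float)
    n = np.asarray(n_per_group, dtype=float)
    shrink = sigma2_b / (sigma2_b + sigma2_e / n)
    return shrink * (group_means - grand_mean)

raw = np.array([10.0, 12.0, 20.0])   # observed group means (illustrative)
n = np.array([5, 5, 2])              # observations per group
blups = blup_shrinkage(raw, n, grand_mean=14.0, sigma2_b=4.0, sigma2_e=8.0)
# Each BLUP is smaller in magnitude than the corresponding raw
# deviation from 14.0, and the smallest group (n = 2) is shrunk hardest.
```

Note that the group with the most extreme raw mean and the fewest observations receives the strongest shrinkage, which is exactly the behaviour that protects against selecting a "winner" on the strength of a noisy estimate.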
Preface
1. The need for more than one random-effect term when fitting a regression line
2. The need for more than one random-effect term in a designed experiment
3. Estimation of the variances of random-effect terms
4. Interval estimates for fixed-effect terms in mixed models
5. Estimation of random effects in mixed models: Best Linear Unbiased Predictors (BLUPs)
6. More advanced mixed models for more elaborate data sets
7. Three case studies
8. Meta-analysis and the multiple testing problem
9. The use of mixed models for the analysis of unbalanced experimental designs
10. Beyond mixed modelling
11. Why is the criterion for fitting mixed models called REsidual Maximum Likelihood?
Index