
Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan

Handbook / Manual
By: John K. Kruschke (Author)
759 pages, ~175 colour illustrations
Publisher: Academic Press
ISBN: 9780124058880
Edition: 2, Hardback, December 2014
Availability: Not in stock; usually dispatched within 1-2 weeks
Price: £70.99
Product code: #219583

About this book

There is an explosion of interest in Bayesian statistics, primarily because recently developed computational methods have finally made Bayesian analysis accessible to a wide audience. Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan provides an accessible approach to Bayesian data analysis: the material is explained clearly, with concrete examples throughout. The book begins with the basics, including essential concepts of probability and random sampling, and gradually progresses to advanced hierarchical modeling methods for realistic data. Included are step-by-step instructions on how to conduct Bayesian data analyses in the popular and free software R, JAGS, and Stan.
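For readers wondering what the simplest such analysis looks like, here is a minimal sketch in base R, written for this listing rather than taken from the book: updating a beta prior on a coin's bias after observing some flips.

    # Bayesian updating of a coin's bias via beta-binomial conjugacy.
    # A Beta(a, b) prior plus z heads in N flips gives a Beta(a + z, b + N - z) posterior.
    a <- 2; b <- 2                                         # mildly informative prior
    z <- 7; N <- 10                                        # data: 7 heads in 10 flips
    post_mean <- (a + z) / (a + b + N)                     # posterior mean
    cred_int  <- qbeta(c(0.025, 0.975), a + z, b + N - z)  # central 95% credible interval
    post_mean; cred_int

The book develops this same kind of example far beyond the conjugate case, turning to MCMC when no closed-form posterior exists.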

Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan is intended for first-year graduate students or advanced undergraduates. It provides a bridge between undergraduate training and modern Bayesian methods for data analysis, which are becoming the accepted research standard. Knowledge of algebra and basic calculus is a prerequisite.

Contents

New to this Edition (partial list):

- All the programs are new, written in JAGS and Stan, and designed to be much easier to use than the scripts in the first edition. In particular, there are now compact high-level scripts that make it easy to run the programs on your own data sets. This new programming was a major undertaking in itself.
- The introductory Chapter 2, regarding the basic ideas of how Bayesian inference re-allocates credibility across possibilities, is completely rewritten and greatly expanded.
- There are completely new chapters on the programming languages R (Ch. 3), JAGS (Ch. 8), and Stan (Ch. 14). The lengthy new chapter on R includes explanations of data files and structures such as lists and data frames, along with several utility functions. (It also has a new poem that I am particularly pleased with.) The new chapter on JAGS includes an explanation of the runjags package, which executes JAGS on parallel computer cores. The new chapter on Stan provides a novel explanation of the concepts of Hamiltonian Monte Carlo, and explains conceptual differences in program flow between Stan and JAGS.
- Chapter 5 on Bayes’ rule is greatly revised, with a new emphasis on how Bayes’ rule re-allocates credibility across parameter values from prior to posterior (the rule is restated after this list). The material on model comparison has been removed from all the early chapters and integrated into a compact presentation in Chapter 10.
- What were two separate chapters on the Metropolis algorithm and Gibbs sampling have been consolidated into a single chapter on MCMC methods (now Chapter 7); a minimal Metropolis sketch follows this list.
- There is extensive new material on MCMC convergence diagnostics in Chapters 7 and 8, with explanations of autocorrelation and effective sample size, and an exploration of the stability of the estimates of the HDI limits. New computer programs display these diagnostics as well (a small example follows this list).
- Chapter 9 on hierarchical models includes extensive new and unique material on the crucial concept of shrinkage, along with new examples (a toy illustration follows this list).
- All the material on model comparison, which was spread across various chapters in the first edition, is now consolidated into a single focused chapter (Ch. 10) that emphasizes its conceptualization as a case of hierarchical modeling.
- Chapter 11 on null hypothesis significance testing is extensively revised. It has new material introducing the concept of a sampling distribution, and new illustrations of sampling distributions for various stopping rules and for multiple tests.
- Chapter 12, regarding Bayesian approaches to null value assessment, has new material about the region of practical equivalence (ROPE), new examples of accepting the null value by Bayes factors, and a new explanation of the Bayes factor in terms of the Savage-Dickey method (a ROPE decision sketch follows this list).
- Chapter 13, regarding statistical power and sample size, has an extensive new section on sequential testing, and on making the research goal precision of estimation rather than rejection or acceptance of a particular value.
- Chapter 15, which introduces the generalized linear model, is fully revised, with more complete tables showing combinations of predicted and predictor variable types.
- Chapter 16, regarding estimation of means, now includes extensive discussion of comparing two groups, along with explicit estimates of effect size.
- Chapter 17, regarding regression on a single metric predictor, now includes extensive examples of robust regression in JAGS and Stan. New examples of hierarchical regression, including quadratic trend, graphically illustrate shrinkage in estimates of individual slopes and curvatures. The use of weighted data is also illustrated.
- Chapter 18, on multiple linear regression, includes a new section on Bayesian variable selection, in which various candidate predictors are probabilistically included in the regression model.
- Chapter 19, on one-factor ANOVA-like analysis, has all new examples, including a completely worked out example analogous to analysis of covariance (ANCOVA), and a new example involving heterogeneous variances.
- Chapter 20, on multi-factor ANOVA-like analysis, has all new examples, including a completely worked out example of a split-plot design that involves a combination of a within-subjects factor and a between-subjects factor.
- Chapter 21, on logistic regression, is expanded to include examples of robust logistic regression, and examples with nominal predictors.
- There is a completely new chapter (Ch. 22) on multinomial logistic regression. This chapter fills in a case of the generalized linear model (namely, a nominal predicted variable) that was missing from the first edition.
- Chapter 23, regarding ordinal data, is greatly expanded. New examples illustrate single-group and two-group analyses, and demonstrate how interpretations differ from treating ordinal data as if they were metric.
- There is a new section (25.4) that explains how to model censored data in JAGS.
- Many exercises are new or revised.
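For reference, the rule that Chapter 5 revolves around, in standard notation (not a quotation from the book):

    p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{p(D)},
    \qquad p(D) = \int p(D \mid \theta)\, p(\theta)\, d\theta

Because the denominator p(D) is fixed by the data, credibility lost by parameter values that fit the data poorly is necessarily gained by values that fit it well; this is the re-allocation the chapter emphasizes.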
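The consolidated MCMC chapter centres on samplers like the following bare-bones random-walk Metropolis sketch for the coin-bias posterior (illustrative base R, not the book's code; the uniform prior and the proposal width are assumptions made here):

    # Random-walk Metropolis sampling of p(theta | z, N) for a coin's bias,
    # under a uniform prior on (0, 1).
    set.seed(1)
    z <- 7; N <- 10
    log_post <- function(theta) {
      if (theta <= 0 || theta >= 1) return(-Inf)   # outside the support
      z * log(theta) + (N - z) * log(1 - theta)    # unnormalized log posterior
    }
    n_steps <- 10000
    chain <- numeric(n_steps); chain[1] <- 0.5
    for (i in 2:n_steps) {
      proposal  <- chain[i - 1] + rnorm(1, 0, 0.2)           # symmetric proposal
      log_ratio <- log_post(proposal) - log_post(chain[i - 1])
      chain[i]  <- if (log(runif(1)) < log_ratio) proposal else chain[i - 1]
    }
    mean(chain[-(1:1000)])   # posterior mean after discarding burn-in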
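The convergence diagnostics named above are available in the coda package, the standard companion to JAGS output. A sketch, reusing the chain from the Metropolis example:

    # Convergence diagnostics for an MCMC chain, using the coda package.
    library(coda)
    mc <- mcmc(chain[-(1:1000)])   # drop burn-in, wrap as an mcmc object
    effectiveSize(mc)              # effective sample size, discounted for autocorrelation
    autocorr.plot(mc)              # autocorrelation at increasing lags
    HPDinterval(mc, prob = 0.95)   # 95% highest posterior density interval

The book's own utility programs display similar diagnostics; the calls above are the generic coda equivalents.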
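Shrinkage can be previewed without any MCMC: when several coins share one beta prior, each coin's posterior mean is pulled from its raw proportion toward the group. A toy sketch with the shared prior fixed by hand (the book instead estimates the group-level parameters hierarchically):

    # Toy illustration of shrinkage under a shared Beta(a, b) prior.
    z <- c(1, 5, 9); N <- c(10, 10, 10)   # heads and flips for three coins
    a <- 4; b <- 4                        # shared prior, fixed for illustration
    raw    <- z / N                       # unpooled estimates: 0.1, 0.5, 0.9
    shrunk <- (z + a) / (N + a + b)       # posterior means, pulled toward 0.5
    rbind(raw, shrunk)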
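The ROPE decision rule itself is mechanical once a posterior sample exists: reject the null value if the 95% HDI falls entirely outside the ROPE, accept it if the HDI falls entirely inside, and withhold judgement otherwise. A sketch reusing the Metropolis chain and coda from the examples above, with a ROPE of [0.45, 0.55] around a null value of 0.5 (the ROPE width is an assumption chosen for illustration):

    # ROPE decision for the null value 0.5 on the coin-bias posterior.
    rope <- c(0.45, 0.55)
    hdi  <- HPDinterval(mcmc(chain[-(1:1000)]), prob = 0.95)
    decision <- if (hdi[2] < rope[1] || hdi[1] > rope[2]) {
      "reject the null value"      # 95% HDI entirely outside the ROPE
    } else if (hdi[1] > rope[1] && hdi[2] < rope[2]) {
      "accept the null value"      # 95% HDI entirely inside the ROPE
    } else {
      "withhold judgement"         # HDI and ROPE overlap
    }
    decision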


Table of contents:

1.) This Book’s Organization: Read Me First!
1.1 Real People Can Read This Book
1.2 Prerequisites
1.3 The Organization of This Book
1.3.1 What Are the Essential Chapters?
1.3.2 Where’s the Equivalent of Traditional Test X in This Book?
1.4 Gimme Feedback (Be Polite)
1.5 Acknowledgments

Part 1.) The Basics: Parameters, Probability, Bayes’ Rule, and R
2.) Introduction: Models We Believe In
2.1 Models of Observations and Models of Beliefs
2.1.1 Prior and Posterior Beliefs
2.2 Three Goals for Inference from Data
2.2.1 Estimation of Parameter Values
2.2.2 Prediction of Data Values
2.2.3 Model Comparison
2.3 The R Programming Language
2.3.1 Getting and Installing R
2.3.2 Invoking R and Using the Command Line
2.3.3 A Simple Example of R in Action
2.3.4 Getting Help in R
2.3.5 Programming in R
2.4 Exercises

3.) What Is This Stuff Called Probability?
3.1 The Set of All Possible Events
3.1.1 Coin Flips: Why You Should Care
3.2 Probability: Outside or Inside the Head
3.2.1 Outside the Head: Long-Run Relative Frequency
3.2.2 Inside the Head: Subjective Belief
3.2.3 Probabilities Assign Numbers to Possibilities
3.3 Probability Distributions
3.3.1 Discrete Distributions: Probability Mass
3.3.2 Continuous Distributions: Rendezvous with Density
3.3.3 Mean and Variance of a Distribution
3.3.4 Variance as Uncertainty in Beliefs
3.3.5 Highest Density Interval (HDI)
3.4 Two-Way Distributions
3.4.1 Marginal Probability
3.4.2 Conditional Probability
3.4.3 Independence of Attributes
3.5 R Code
3.5.1 R Code for Figure 3.1
3.5.2 R Code for Figure 3.3
3.6 Exercises

4.) Bayes’ Rule
4.1 Bayes’ Rule
4.1.1 Derived from Definitions of Conditional Probability
4.1.2 Intuited from a Two-Way Discrete Table
4.1.3 The Denominator as an Integral over Continuous Values
4.2 Applied to Models and Data
4.2.1 Data Order Invariance
4.2.2 An Example with Coin Flipping
4.3 The Three Goals of Inference
4.3.1 Estimation of Parameter Values
4.3.2 Prediction of Data Values
4.3.3 Model Comparison
4.3.4 Why Bayesian Inference Can Be Difficult
4.3.5 Bayesian Reasoning in Everyday Life
4.4 R Code
4.4.1 R Code for Figure 4.1
4.5 Exercises

Part 2.) All the Fundamentals Applied to Inferring a Binomial Proportion
5.) Inferring a Binomial Proportion via Exact Mathematical Analysis
5.1 The Likelihood Function: Bernoulli Distribution
5.2 A Description of Beliefs: The Beta Distribution
5.2.1 Specifying a Beta Prior
5.2.2 The Posterior Beta
5.3 Three Inferential Goals
5.3.1 Estimating the Binomial Proportion
5.3.2 Predicting Data
5.3.3 Model Comparison
5.4 Summary: How to Do Bayesian Inference
5.5 R Code
5.5.1 R Code for Figure 5.2
5.6 Exercises

6.) Inferring a Binomial Proportion via Grid Approximation
6.1 Bayes’ Rule for Discrete Values of θ
6.2 Discretizing a Continuous Prior Density
6.2.1 Examples Using Discretized Priors
6.3 Estimation
6.4 Prediction of Subsequent Data
6.5 Model Comparison
6.6 Summary
6.7 R Code
6.7.1 R Code for Figure 6.2 and the Like
6.8 Exercises

7.) Inferring a Binomial Proportion via the Metropolis Algorithm
7.1 A Simple Case of the Metropolis Algorithm
7.1.1 A Politician Stumbles on the Metropolis Algorithm
7.1.2 A Random Walk
7.1.3 General Properties of a Random Walk
7.1.4 Why We Care
7.1.5 Why It Works
7.2 The Metropolis Algorithm More Generally
7.2.1 "Burn-in," Efficiency, and Convergence
7.2.2 Terminology: Markov Chain Monte Carlo
7.3 From the Sampled Posterior to the Three Goals
7.3.1 Estimation
7.3.2 Prediction
7.3.3 Model Comparison: Estimation of p(D)
7.4 MCMC in BUGS
7.4.1 Parameter Estimation with BUGS
7.4.2 BUGS for Prediction
7.4.3 BUGS for Model Comparison
7.5 Conclusion
7.6 R Code
7.6.1 R Code for a Home-Grown Metropolis Algorithm
7.7 Exercises

8.) Inferring Two Binomial Proportions via Gibbs Sampling
8.1 Prior, Likelihood, and Posterior for Two Proportions
8.2 The Posterior via Exact Formal Analysis
8.3 The Posterior via Grid Approximation
8.4 The Posterior via Markov Chain Monte Carlo
8.4.1 Metropolis Algorithm
8.4.2 Gibbs Sampling
8.5 Doing It with BUGS
8.5.1 Sampling the Prior in BUGS
8.6 How Different Are the Underlying Biases?
8.7 Summary
8.8 R Code
8.8.1 R Code for Grid Approximation (Figures 8.1 and 8.2)
8.8.2 R Code for Metropolis Sampler (Figure 8.3)
8.8.3 R Code for BUGS Sampler (Figure 8.6)
8.8.4 R Code for Plotting a Posterior Histogram
8.9 Exercises

9.) Bernoulli Likelihood with Hierarchical Prior
9.1 A Single Coin from a Single Mint
9.2 Multiple Coins from a Single Mint
9.2.1 Posterior via Grid Approximation
9.2.2 Posterior via Monte Carlo Sampling
9.2.3 Outliers and Shrinkage of Individual Estimates
9.2.4 Case Study: Therapeutic Touch
9.2.5 Number of Coins and Flips per Coin
9.3 Multiple Coins from Multiple Mints
9.3.1 Independent Mints
9.3.2 Dependent Mints
9.3.3 Individual Differences and Meta-Analysis
9.4 Summary
9.5 R Code
9.5.1 Code for Analysis of Therapeutic-Touch Experiment
9.5.2 Code for Analysis of Filtration-Condensation Experiment
9.6 Exercises

10.) Hierarchical Modeling and Model Comparison
10.1 Model Comparison as Hierarchical Modeling
10.2 Model Comparison in BUGS
10.2.1 A Simple Example
10.2.2 A Realistic Example with "Pseudopriors"
10.2.3 Some Practical Advice When Using Transdimensional MCMC with Pseudopriors
10.3 Model Comparison and Nested Models
10.4 Review of Hierarchical Framework for Model Comparison
10.4.1 Comparing Methods for MCMC Model Comparison
10.4.2 Summary and Caveats
10.5 Exercises

11.) Null Hypothesis Significance Testing
11.1 NHST for the Bias of a Coin
11.1.1 When the Experimenter Intends to Fix N
11.1.2 When the Experimenter Intends to Fix z
11.1.3 Soul Searching
11.1.4 Bayesian Analysis
11.2 Prior Knowledge about the Coin
11.2.1 NHST Analysis
11.2.2 Bayesian Analysis
11.3 Confidence Interval and Highest Density Interval
11.3.1 NHST Confidence Interval
11.3.2 Bayesian HDI
11.4 Multiple Comparisons
11.4.1 NHST Correction for Experimentwise Error
11.4.2 Just One Bayesian Posterior No Matter How You Look at It
11.4.3 How Bayesian Analysis Mitigates False Alarms
11.5 What a Sampling Distribution Is Good For
11.5.1 Planning an Experiment
11.5.2 Exploring Model Predictions (Posterior Predictive Check)
11.6 Exercises

12.) Bayesian Approaches to Testing a Point ("Null") Hypothesis
12.1 The Estimation (Single Prior) Approach
12.1.1 Is a Null Value of a Parameter among the Credible Values?
12.1.2 Is a Null Value of a Difference among the Credible Values?
12.1.3 Region of Practical Equivalence (ROPE)
12.2 The Model-Comparison (Two-Prior) Approach
12.2.1 Are the Biases of Two Coins Equal?
12.2.2 Are Different Groups Equal?
12.3 Estimation or Model Comparison?
12.3.1 What Is the Probability That the Null Value Is True?
12.3.2 Recommendations
12.4 R Code
12.4.1 R Code for Figure 12.5
12.5 Exercises

13.) Goals, Power, and Sample Size
13.1 The Will to Power
13.1.1 Goals and Obstacles
13.1.2 Power
13.1.3 Sample Size
13.1.4 Other Expressions of Goals
13.2 Sample Size for a Single Coin
13.2.1 When the Goal Is to Exclude a Null Value
13.2.2 When the Goal Is Precision
13.3 Sample Size for Multiple Mints
13.4 Power: Prospective, Retrospective, and Replication
13.4.1 Power Analysis Requires Verisimilitude of Simulated Data
13.5 The Importance of Planning
13.6 R Code
13.6.1 Sample Size for a Single Coin
13.6.2 Power and Sample Size for Multiple Mints
13.7 Exercises

Part 3.) Applied to the Generalized Linear Model
14.) Overview of the Generalized Linear Model
14.1 The Generalized Linear Model (GLM)
14.1.2 Scale Types: Metric, Ordinal, Nominal
14.1.3 Linear Function of a Single Metric Predictor
14.1.4 Additive Combination of Metric Predictors
14.1.5 Nonadditive Interaction of Metric Predictors
14.1.6 Nominal Predictors
14.1.7 Linking Combined Predictors to the Predicted
14.1.8 Probabilistic Prediction
14.1.9 Formal Expression of the GLM
14.1.10 Two or More Nominal Variables Predicting Frequency
14.2 Cases of the GLM
14.3 Exercises

15.) Metric Predicted Variable on a Single Group
15.1 Estimating the Mean and Precision of a Normal Likelihood
15.1.1 Solution by Mathematical Analysis
15.1.2 Approximation by MCMC in BUGS
15.1.3 Outliers and Robust Estimation: The t Distribution
15.1.4 When the Data Are Non-normal: Transformations
15.2 Repeated Measures and Individual Differences
15.2.1 Hierarchical Model
15.2.2 Implementation in BUGS
15.3 Summary
15.4 R Code
15.4.1 Estimating the Mean and Precision of a Normal Likelihood
15.4.2 Repeated Measures: Normal Across and Normal Within
15.5 Exercises

16.) Metric Predicted Variable with One Metric Predictor
16.1 Simple Linear Regression
16.1.1 The Hierarchical Model and BUGS Code
16.1.2 The Posterior: How Big Is the Slope?
16.1.3 Posterior Prediction
16.2 Outliers and Robust Regression
16.3 Simple Linear Regression with Repeated Measures
16.4 Summary
16.5 R Code
16.5.1 Data Generator for Height and Weight
16.5.2 BRugs: Robust Linear Regression
16.5.3 BRugs: Simple Linear Regression with Repeated Measures
16.6 Exercises

17.) Metric Predicted Variable with Multiple Metric Predictors
17.1 Multiple Linear Regression
17.1.1 The Perils of Correlated Predictors
17.1.2 The Model and BUGS Program
17.1.3 The Posterior: How Big Are the Slopes?
17.1.4 Posterior Prediction
17.2 Hyperpriors and Shrinkage of Regression Coefficients
17.2.1 Informative Priors, Sparse Data, and Correlated Predictors
17.3 Multiplicative Interaction of Metric Predictors
17.3.1 The Hierarchical Model and BUGS Code
17.3.2 Interpreting the Posterior
17.4 Which Predictors Should Be Included?
17.5 R Code
17.5.1 Multiple Linear Regression
17.5.2 Multiple Linear Regression with Hyperprior on Coefficients
17.6 Exercises

18.) Metric Predicted Variable with One Nominal Predictor
18.1 Bayesian Oneway ANOVA
18.1.1 The Hierarchical Prior
18.1.2 Doing It with R and BUGS
18.1.3 A Worked Example
18.2 Multiple Comparisons
18.3 Two-Group Bayesian ANOVA and the NHST t Test
18.4 R Code
18.4.1 Bayesian Oneway ANOVA
18.5 Exercises

19.) Metric Predicted Variable with Multiple Nominal Predictors
19.1 Bayesian Multifactor ANOVA
19.1.2 The Hierarchical Prior
19.1.3 An Example in R and BUGS
19.1.4 Interpreting the Posterior
19.1.5 Noncrossover Interactions, Rescaling, and Homogeneous Variances
19.2 Repeated Measures, a.k.a. Within-Subject Designs
19.2.1 Why Use a Within-Subject Design? And Why Not?
19.3 R Code
19.3.1 Bayesian Two-Factor ANOVA
19.4 Exercises

20.) Dichotomous Predicted Variable
20.1 Logistic Regression
20.1.1 The Model
20.1.2 Doing It in R and BUGS
20.1.3 Interpreting the Posterior
20.1.4 Perils of Correlated Predictors
20.1.5 When There Are Few 1’s in the Data
20.1.6 Hyperprior Across Regression Coefficients
20.2 Interaction of Predictors in Logistic Regression
20.3 Logistic ANOVA
20.3.1 Within-Subject Designs
20.4 Summary
20.5 R Code
20.5.1 Logistic Regression Code
20.5.2 Logistic ANOVA Code
20.6 Exercises

21.) Ordinal Predicted Variable
21.1 Ordinal Probit Regression
21.1.1 What the Data Look Like
21.1.2 The Mapping from Metric x to Ordinal y
21.1.3 The Parameters and Their Priors
21.1.4 Standardizing for MCMC Efficiency
21.1.5 Posterior Prediction
21.2 Some Examples
21.2.1 Why Are Some Thresholds Outside the Data?
21.3 Interaction
21.4 Relation to Linear and Logistic Regression
21.5 R Code
21.6 Exercises

22.) Contingency Table Analysis
22.1 Poisson Exponential ANOVA
22.1.1 What the Data Look Like
22.1.2 The Exponential Link Function
22.1.3 The Poisson Likelihood
22.1.4 The Parameters and the Hierarchical Prior
22.2 Examples
22.2.1 Credible Intervals on Cell Probabilities
22.3 Log Linear Models for Contingency Tables
22.4 R Code for the Poisson Exponential Model
22.5 Exercises

23.) Tools in the Trunk
23.1 Reporting a Bayesian Analysis
23.1.1 Essential Points
23.1.2 Optional Points
23.1.3 Helpful Points
23.2 MCMC Burn-in and Thinning
23.3 Functions for Approximating Highest Density Intervals
23.3.1 R Code for Computing HDI of a Grid Approximation
23.3.2 R Code for Computing HDI of an MCMC Sample
23.3.3 R Code for Computing HDI of a Function
23.4 Reparameterization of Probability Distributions
23.4.1 Examples
23.4.2 Reparameterization of Two Parameters

REFERENCES
INDEX

Media reviews

"fills a gaping hole in what is currently available, and will serve to create its own market"
– Prof. Michael Lee, U. of Cal., Irvine; pres. Society for Mathematical Psych.

"has the potential to change the way most cognitive scientists and experimental psychologists approach the planning and analysis of their experiments"
– Prof. Geoffrey Iverson, U. of Cal., Irvine; past pres. Society for Mathematical Psych.

"better than others for reasons stylistic [...] buy it – it's truly amazin'!"
– James L. (Jay) McClelland, Lucie Stern Prof. & Chair, Dept. of Psych., Stanford U.

"the best introductory textbook on Bayesian MCMC techniques"
– J. of Mathematical Psych.

"potential to change the methodological toolbox of a new generation of social scientists"
– J. of Economic Psych.

"revolutionary"
– British J. of Mathematical and Statistical Psych.

"writing for real people with real data. From the very first chapter, the engaging writing style will get readers excited about this topic"
– PsycCritiques
