Applied Statistical Modelling for Ecologists
A Practical Guide to Bayesian and Likelihood Inference Using R, JAGS, NIMBLE, Stan and TMB
- 1st Edition - July 18, 2024
- Authors: Marc Kéry, Kenneth F. Kellner
- Language: English
- Paperback ISBN: 978-0-443-13715-0
- eBook ISBN: 978-0-443-13716-7
Applied Statistical Modelling for Ecologists provides a gentle introduction to the essential models of applied statistics: linear models, generalized linear models, mixed and hierarchical models. All models are fit with both a likelihood and a Bayesian approach, using several powerful software packages widely used in research publications: JAGS, NIMBLE, Stan, and TMB. In addition, the foundational method of maximum likelihood is explained in a manner that ecologists can really understand.
This book is the successor of the widely used Introduction to WinBUGS for Ecologists (Kéry, Academic Press, 2010). Like its parent, it is extremely effective for both classroom use and self-study, allowing students and researchers alike to quickly learn, understand, and carry out a very wide range of statistical modelling tasks.
The examples in Applied Statistical Modelling for Ecologists come from ecology and the environmental sciences, but the underlying statistical models are very widely used by scientists across many disciplines. This book will be useful for anybody who needs to learn and quickly become proficient in statistical modelling, with either a likelihood or a Bayesian focus, and in the model-fitting engines covered, including the three latest packages NIMBLE, Stan, and TMB.
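To make the book's central workflow concrete, here is a minimal sketch (our illustration, not code from the book) of the data simulation/model fitting dualism it advertises, applied to the simplest possible case, the "model of the mean" of Chapter 4: simulate data with known parameters, fit the model with a canned R function, then recover the same estimates with a do-it-yourself maximum likelihood routine.

```r
# Minimal sketch (not from the book): simulate data under the "model of the mean",
# then recover the known parameters by canned least squares and by DIY maximum likelihood.
set.seed(1)
y <- rnorm(100, mean = 60, sd = 10)   # e.g., 100 simulated body masses

# Canned analysis in R
summary(lm(y ~ 1))

# Do-it-yourself MLE: minimize the negative log-likelihood with optim()
nll <- function(par) -sum(dnorm(y, mean = par[1], sd = exp(par[2]), log = TRUE))
fit <- optim(c(50, log(5)), nll)
c(mu = fit$par[1], sigma = exp(fit$par[2]))   # estimates close to the true 60 and 10
```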
- Contains a concise and gentle introduction to probability and applied statistics as needed in ecology and the environmental sciences
- Covers the foundations of modern applied statistical modelling
- Gives a comprehensive, applied introduction to the most widely used and most exciting cutting-edge model-fitting software packages currently available: JAGS, NIMBLE, Stan, and TMB
- Provides a highly accessible applied introduction to the two dominant methods of fitting parametric statistical models: maximum likelihood and Bayesian posterior inference
- Details the principles of model building, model checking and model selection
- Adopts a “Rosetta Stone” approach, wherein understanding of one software package and its associated language is greatly enhanced by seeing the analogous code in the other engines (a minimal illustration follows this list)
- Makes all code available for download at https://www.elsevier.com/books-and-journals/book-companion/9780443137150
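As a taste of the “Rosetta Stone” approach, the same model of the mean can also be fit with MCMC in JAGS. The snippet below is our sketch (it assumes JAGS and the R package jagsUI are installed, and reuses the `y` simulated above), not code reproduced from the book.

```r
# Sketch only (assumes JAGS and the jagsUI R package are installed):
# the model of the mean, written in the BUGS language and fit by MCMC.
cat("
model {
  for (i in 1:n) {
    y[i] ~ dnorm(mu, tau)       # likelihood
  }
  mu ~ dnorm(0, 0.0001)         # vague prior for the mean
  tau <- pow(sigma, -2)         # JAGS parameterizes the normal by precision
  sigma ~ dunif(0, 100)         # vague prior for the SD
}
", file = "model_of_the_mean.txt")

library(jagsUI)
out <- jags(data = list(y = y, n = length(y)),
            parameters.to.save = c("mu", "sigma"),
            model.file = "model_of_the_mean.txt",
            n.chains = 3, n.iter = 2000, n.burnin = 500)
print(out)   # posterior means should again be near the true 60 and 10
```

NIMBLE accepts essentially the same BUGS-language model code, while Stan and TMB express the model in their own languages; seeing the analogous code side by side is the point of the book's Rosetta Stone layout.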
- Cover image
- Title page
- Table of Contents
- Copyright
- Dedication
- Foreword
- Acknowledgments
- Chapter 1. Introduction
- Abstract
- Chapter outline
- 1.1 Statistical models and why you need them
- 1.2 Why linear statistical models?
- 1.3 Why go beyond the linear model?
- 1.4 Random effects and why you need them
- 1.5 Why do you need both Bayesian and non-Bayesian statistical inference?
- 1.6 More reasons for why you should really understand maximum likelihood
- 1.7 The data simulation/model fitting dualism
- 1.8 The “experimental approach” to statistics
- 1.9 The first principle of modeling: start simple!
- 1.10 Overview of the ASM book and additional resources
- 1.11 How to use the ASM book for self-study and in courses
- 1.12 Summary and outlook
- Chapter 2. Introduction to statistical inference
- Abstract
- Chapter outline
- 2.1 Probability as the basis for statistical inference
- 2.2 Random variables and their description by probability distributions
- 2.3 Statistical models and their usages
- 2.4 The likelihood function
- 2.5 Classical inference by maximum likelihood and its application to a single-parameter model
- 2.6 Maximum likelihood estimation in a two-parameter model
- 2.7 Bayesian inference using posterior distributions
- 2.8 Bayesian computation by Markov chain Monte Carlo (MCMC)
- 2.9 So should you now be a Bayesian or a frequentist?
- 2.10 Summary and outlook
- Further reading
- Chapter 3. Linear regression models and their extensions to generalized linear, hierarchical, and integrated models
- Abstract
- Chapter outline
- 3.1 Introduction
- 3.2 Statistical distributions for the random variables in our model
- 3.3 Link functions to model parameters on a transformed scale
- 3.4 Linear modeling of covariate effects
- 3.5 Brief overview of linear, generalized linear, (generalized) linear mixed, hierarchical, and integrated models
- 3.6 Summary and outlook
- Chapter 4. Introduction to general-purpose model fitting engines and the “model of the mean”
- Abstract
- Chapter outline
- 4.1 Introduction
- 4.2 Data generation
- 4.3 Analysis using canned functions in R
- 4.4 JAGS
- 4.5 NIMBLE
- 4.6 Stan
- 4.7 Maximum likelihood in R
- 4.8 Maximum likelihood using Template Model Builder (TMB)
- 4.9 Comparison of engines and concluding remarks
- Chapter 5. Normal linear regression
- Abstract
- Chapter outline
- 5.1 Introduction
- 5.2 Data generation
- 5.3 Analysis with canned functions in R
- 5.4 Bayesian analysis with JAGS
- 5.5 Bayesian analysis with NIMBLE
- 5.6 Bayesian analysis with Stan
- 5.7 Do-it-yourself maximum likelihood estimates
- 5.8 Likelihood analysis with TMB
- 5.9 Comparison of the parameter estimates
- 5.10 Summary and outlook
- Chapter 6. Comparing two groups in a normal model
- Abstract
- Chapter outline
- 6.1 Introduction
- 6.2 Comparing two groups with equal variances
- 6.3 Comparing two groups with unequal variances
- 6.4 Summary and a comment on the modeling of variances
- Chapter 7. Models with a single categorical covariate with more than two levels
- Abstract
- Chapter outline
- 7.1 Introduction: fixed and random effects
- 7.2 Fixed-effects models
- 7.3 Random-effects models
- 7.4 Summary and outlook
- Chapter 8. Comparisons along two classifications in a model with two factors
- Abstract
- Chapter outline
- 8.1 Introduction: main and interaction effects
- 8.2 Data generation
- 8.3 Likelihood analysis with canned functions in R
- 8.4 An aside: using simulation to assess bias and precision of an estimator … and to understand what a standard error is
- 8.5 Bayesian analysis with JAGS
- 8.6 Bayesian analysis with NIMBLE
- 8.7 Bayesian analysis with Stan
- 8.8 Do-it-yourself maximum likelihood estimates
- 8.9 Likelihood analysis with TMB
- 8.10 Comparison of the parameter estimates
- 8.11 Summary and outlook
- Chapter 9. General linear model for a normal response with continuous and categorical explanatory variables
- Abstract
- Chapter outline
- 9.1 Introduction
- 9.2 Data generation
- 9.3 Likelihood analysis with canned functions in R
- 9.4 Bayesian analysis with JAGS
- 9.5 Bayesian analysis with NIMBLE
- 9.6 Bayesian analysis with Stan
- 9.7 Do-it-yourself maximum likelihood estimates
- 9.8 Likelihood analysis with TMB
- 9.9 Comparison of the parameter estimates
- 9.10 Summary and outlook
- Chapter 10. Linear mixed-effects model
- Abstract
- Chapter outline
- 10.1 Introduction
- 10.2 Data generation
- 10.3 Analysis under a random-intercepts model
- 10.4 Analysis under a random-coefficients model without correlation between intercept and slope
- 10.5 The random-coefficients model with correlation between intercept and slope
- 10.6 Summary and outlook
- Chapter 11. Introduction to the generalized linear model (GLM): comparing two groups in a Poisson regression
- Abstract
- Chapter outline
- 11.1 Introduction
- 11.2 An important but often forgotten issue with count data
- 11.3 How to deal with missing values in our data
- 11.4 Data generation
- 11.5 Likelihood analysis with canned functions in R
- 11.6 Bayesian analysis with JAGS
- 11.7 Bayesian analysis with NIMBLE
- 11.8 Bayesian analysis with Stan
- 11.9 Do-it-yourself maximum likelihood estimates
- 11.10 Likelihood analysis with TMB
- 11.11 Comparison of the parameter estimates
- 11.12 Summary and outlook
- Chapter 12. Overdispersion, zero inflation, and offsets in a Poisson GLM
- Abstract
- Chapter outline
- 12.1 Introduction
- 12.2 Overdispersion
- 12.3 Zero inflation
- 12.4 Offsets
- 12.5 Summary and outlook
- Chapter 13. Poisson GLM with continuous and categorical explanatory variables
- Abstract
- Chapter outline
- 13.1 Introduction
- 13.2 Data generation
- 13.3 Likelihood analysis with canned functions in R
- 13.4 Bayesian analysis with JAGS
- 13.5 Bayesian analysis with NIMBLE
- 13.6 Bayesian analysis with Stan
- 13.7 Do-it-yourself maximum likelihood estimates
- 13.8 Likelihood analysis with TMB
- 13.9 Comparison of the parameter estimates
- 13.10 Summary
- Chapter 14. Poisson generalized linear mixed model, or Poisson GLMM
- Abstract
- Chapter outline
- 14.1 Introduction
- 14.2 Data generation
- 14.3 Likelihood analysis with canned functions in R
- 14.4 Bayesian analysis with JAGS
- 14.5 Bayesian analysis with NIMBLE
- 14.6 Bayesian analysis with Stan
- 14.7 Do-it-yourself maximum likelihood estimates
- 14.8 Likelihood analysis with TMB
- 14.9 Comparison of the parameter estimates
- 14.10 Summary
- Chapter 15. Comparing two groups in a logistic regression model
- Abstract
- Chapter outline
- 15.1 Introduction
- 15.2 Data generation
- 15.3 Likelihood analysis with canned functions in R
- 15.4 Bayesian analysis with JAGS
- 15.5 Bayesian analysis with NIMBLE
- 15.6 Bayesian analysis with Stan
- 15.7 Do-it-yourself maximum likelihood estimates
- 15.8 Likelihood analysis with TMB
- 15.9 Comparison of the parameter estimates
- 15.10 Summary
- Chapter 16. Binomial GLM with continuous and categorical explanatory variables
- Abstract
- Chapter outline
- 16.1 Introduction
- 16.2 Data generation
- 16.3 Likelihood analysis with canned functions in R
- 16.4 Bayesian analysis with JAGS
- 16.5 Bayesian analysis with NIMBLE
- 16.6 Bayesian analysis with Stan
- 16.7 Do-it-yourself maximum likelihood estimates
- 16.8 Likelihood analysis with TMB
- 16.9 Comparison of the estimates
- 16.10 Summary
- Chapter 17. Binomial generalized linear mixed model
- Abstract
- Chapter outline
- 17.1 Introduction
- 17.2 Data generation
- 17.3 Likelihood analysis with canned functions in R
- 17.4 Bayesian analysis with JAGS
- 17.5 Bayesian analysis with NIMBLE
- 17.6 Bayesian analysis with Stan
- 17.7 Do-it-yourself maximum likelihood estimates
- 17.8 Likelihood analysis with TMB
- 17.9 Comparison of the parameter estimates
- 17.10 Summary
- Chapter 18. Model building, model checking, and model selection
- Abstract
- Chapter outline
- 18.1 Introduction
- 18.2 Why do we build a statistical model?
- 18.3 Ticklish rattlesnakes
- 18.4 How do we build a statistical model?
- 18.5 Model checking and goodness-of-fit testing
- 18.6 Model selection
- 18.7 Model averaging
- 18.8 Regularization: penalization, shrinkage, ridge, and lasso
- 18.9 Summary and outlook
- Chapter 19. Occupancy models
- Abstract
- Chapter outline
- 19.1 Introduction
- 19.2 Data generation
- 19.3 Likelihood analysis with canned functions in the R package unmarked
- 19.4 Bayesian analysis with JAGS
- 19.5 Bayesian analysis with NIMBLE
- 19.6 Bayesian analysis with Stan
- 19.7 Bayesian analysis with canned functions in the R package ubms
- 19.8 Do-it-yourself maximum likelihood estimates
- 19.9 Likelihood analysis with TMB
- 19.10 Comparison of the parameter estimates
- 19.11 Summary and outlook
- Chapter 20. Integrated models
- Abstract
- Chapter outline
- 20.1 Introduction
- 20.2 Data generation: simulating three abundance data sets with different observation/aggregation models
- 20.3 Fitting models to the three individual data sets first
- 20.4 Fitting the integrated model to all three data sets simultaneously
- 20.5 What do we gain by analyzing the joint likelihood in our analysis?
- 20.6 Summary and outlook
- Chapter 21. Conclusion
- Abstract
- Chapter outline
- 21.1 Looking back
- 21.2 Looking ahead
- Bibliography
- Index
- No. of pages: 550
- Imprint: Elsevier