
In All Likelihood : Statistical Modelling and Inference Using Likelihood.
Title:
In All Likelihood : Statistical Modelling and Inference Using Likelihood.
Author:
Pawitan, Yudi.
ISBN:
9780191650574
Personal Author:
Pawitan, Yudi.
Physical Description:
1 online resource (586 pages)
Contents:
Cover -- Title Page -- Copyright Page -- Preface -- Contents -- 1 Introduction -- 1.1 Prototype of statistical problems -- 1.2 Statistical problems and their models -- 1.3 Statistical uncertainty: inevitable controversies -- 1.4 The emergence of statistics -- 1.5 Fisher and the third way -- 1.6 Exercises -- 2 Elements of likelihood inference -- 2.1 Classical definition -- 2.2 Examples -- 2.3 Combining likelihoods -- 2.4 Likelihood ratio -- 2.5 Maximum and curvature of likelihood -- 2.6 Likelihood-based intervals -- 2.7 Standard error and Wald statistic -- 2.8 Invariance principle -- 2.9 Practical implications of invariance principle -- 2.10 Exercises -- 3 More properties of likelihood -- 3.1 Sufficiency -- 3.2 Minimal sufficiency -- 3.3 Multiparameter models -- 3.4 Profile likelihood -- 3.5 Calibration in multiparameter case -- 3.6 Exercises -- 4 Basic models and simple applications -- 4.1 Binomial or Bernoulli models -- 4.2 Binomial model with under- or overdispersion -- 4.3 Comparing two proportions -- 4.4 Poisson model -- 4.5 Poisson with overdispersion -- 4.6 Traffic deaths example -- 4.7 Aspirin data example -- 4.8 Continuous data -- 4.9 Exponential family -- 4.10 Box-Cox transformation family -- 4.11 Location-scale family -- 4.12 Exercises -- 5 Frequentist properties -- 5.1 Bias of point estimates -- 5.2 Estimating and reducing bias -- 5.3 Variability of point estimates -- 5.4 Likelihood and P-value -- 5.5 CI and coverage probability -- 5.6 Confidence density, CI and the bootstrap -- 5.7 Exact inference for Poisson model -- 5.8 Exact inference for binomial model -- 5.9 Nuisance parameters -- 5.10 Criticism of CIs -- 5.11 Exercises -- 6 Modelling relationships: regression models -- 6.1 Normal linear models -- 6.2 Logistic regression models -- 6.3 Poisson regression models -- 6.4 Nonnormal continuous regression.
6.5 Exponential family regression models -- 6.6 Deviance in GLM -- 6.7 Iterative weighted least squares -- 6.8 Box-Cox transformation family -- 6.9 Location-scale regression models -- 6.10 Exercises -- 7 Evidence and the likelihood principle* -- 7.1 Ideal inference machine? -- 7.2 Sufficiency and the likelihood principles -- 7.3 Conditionality principle and ancillarity -- 7.4 Birnbaum's theorem -- 7.5 Sequential experiments and stopping rule -- 7.6 Multiplicity -- 7.7 Questioning the likelihood principle -- 7.8 Exercises -- 8 Score function and Fisher information -- 8.1 Sampling variation of score function -- 8.2 The mean of S(θ) -- 8.3 The variance of S(θ) -- 8.4 Properties of expected Fisher information -- 8.5 Cramér-Rao lower bound -- 8.6 Minimum variance unbiased estimation -- 8.7 Multiparameter CRLB -- 8.8 Exercises -- 9 Large-sample results -- 9.1 Background results -- 9.2 Distribution of the score statistic -- 9.3 Consistency of MLE for scalar θ -- 9.4 Distribution of MLE and the Wald statistic -- 9.5 Distribution of likelihood ratio statistic -- 9.6 Observed versus expected information* -- 9.7 Proper variance of the score statistic* -- 9.8 Higher-order approximation: magic formula* -- 9.9 Multiparameter case: θ ∈ Rᵖ -- 9.10 Examples -- 9.11 Nuisance parameters -- 9.12 χ² goodness-of-fit tests -- 9.13 Exercises -- 10 Dealing with nuisance parameters -- 10.1 Inconsistent likelihood estimates -- 10.2 Ideal case: orthogonal parameters -- 10.3 Marginal and conditional likelihoods -- 10.4 Comparing Poisson means -- 10.5 Comparing proportions -- 10.6 Modified profile likelihood* -- 10.7 Estimated likelihood -- 10.8 Exercises -- 11 Complex data structures -- 11.1 ARMA models -- 11.2 Markov chains -- 11.3 Replicated Markov chains -- 11.4 Spatial data -- 11.5 Censored/survival data -- 11.6 Survival regression models.
11.7 Hazard regression and Cox partial likelihood -- 11.8 Poisson point processes -- 11.9 Replicated Poisson processes -- 11.10 Discrete time model for Poisson processes -- 11.11 Exercises -- 12 EM Algorithm -- 12.1 Motivation -- 12.2 General specification -- 12.3 Exponential family model -- 12.4 General properties -- 12.5 Mixture models -- 12.6 Robust estimation -- 12.7 Estimating infection pattern -- 12.8 Mixed model estimation* -- 12.9 Standard errors -- 12.10 Exercises -- 13 Robustness of likelihood specification -- 13.1 Analysis of Darwin's data -- 13.2 Distance between model and the 'truth' -- 13.3 Maximum likelihood under a wrong model -- 13.4 Large-sample properties -- 13.5 Comparing working models with the AIC -- 13.6 Deriving the AIC -- 13.7 Exercises -- 14 Estimating equations and quasi-likelihood -- 14.1 Examples -- 14.2 Computing in nonlinear cases -- 14.3 Asymptotic distribution -- 14.4 Generalized estimating equation -- 14.5 Robust estimation -- 14.6 Asymptotic properties -- 15 Empirical likelihood -- 15.1 Profile likelihood -- 15.2 Double-bootstrap likelihood -- 15.3 BCa bootstrap likelihood -- 15.4 Exponential family model -- 15.5 General cases: M-estimation -- 15.6 Parametric versus empirical likelihood -- 15.7 Exercises -- 16 Likelihood of random parameters -- 16.1 The need to extend the likelihood -- 16.2 Statistical prediction -- 16.3 Defining extended likelihood -- 16.4 Exercises -- 17 Random and mixed effects models -- 17.1 Simple random effects models -- 17.2 Normal linear mixed models -- 17.3 Estimating genetic value from family data* -- 17.4 Joint estimation of β and b -- 17.5 Computing the variance component via β̂ and b̂ -- 17.6 Examples -- 17.7 Extension to several random effects -- 17.8 Generalized linear mixed models -- 17.9 Exact likelihood in GLMM -- 17.10 Approximate likelihood in GLMM -- 17.11 Exercises.
18 Nonparametric smoothing -- 18.1 Motivation -- 18.2 Linear mixed models approach -- 18.3 Imposing smoothness using random effects model -- 18.4 Penalized likelihood approach -- 18.5 Estimate of f given σ² and σ_b² -- 18.6 Estimating the smoothing parameter -- 18.7 Prediction intervals -- 18.8 Partial linear models -- 18.9 Smoothing nonequispaced data* -- 18.10 Non-Gaussian smoothing -- 18.11 Nonparametric density estimation -- 18.12 Nonnormal smoothness condition* -- 18.13 Exercises -- Bibliography -- Index.
Abstract:
Based on a course in the theory of statistics, this text concentrates on what can be achieved using the likelihood/Fisherian method of taking account of uncertainty when studying a statistical problem. It takes the concept of likelihood as providing the best methods for unifying the demands of statistical modelling and the theory of inference. Every likelihood concept is illustrated by realistic examples, which are not compromised by computational problems. Examples range from a simple comparison of two accident rates to complex studies that require generalised linear or semiparametric modelling. The emphasis is that the likelihood is not simply a device to produce an estimate, but an important tool for modelling. The book generally takes an informal approach, where most important results are established using heuristic arguments and motivated with realistic examples. With the currently available computing power, examples are not contrived to allow a closed analytical solution, and the book can concentrate on the statistical aspects of the data modelling. In addition to classical likelihood theory, the book covers many modern topics such as generalized linear models and mixed models, nonparametric smoothing, robustness, the EM algorithm and empirical likelihood.
Local Note:
Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2017. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.