Chapter 3: Parametric Inference and Likelihood Methods

This chapter marks a fundamental shift in perspective: from generating random samples (Chapter 2) to learning from observed data. Where Monte Carlo methods asked “given a distribution, how do we simulate from it?”, parametric inference asks the inverse question: “given data, what distribution generated it?” This reversal—from known model to unknown parameters—is the central problem of statistical inference.

We adopt the frequentist framework, treating parameters as fixed but unknown quantities with randomness arising solely from the sampling process. An estimator’s quality is judged by its behavior across hypothetical repeated samples from the same population. This perspective leads to concepts like bias, variance, consistency, and efficiency—properties that characterize how estimators perform “on average” over many realizations of the sampling mechanism.
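
To make the repeated-sampling perspective concrete, the short simulation below (a minimal sketch in Python using NumPy; the normal population, sample size, and number of replications are illustrative choices, not part of the text) draws many samples from one fixed population and compares two estimators of the variance, showing that bias and variance are properties of an estimator's distribution across samples rather than of any single dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed but "unknown" population: Normal with mean 5 and variance 4.
mu, sigma2 = 5.0, 4.0
n, n_reps = 20, 10_000

# Many hypothetical samples of size n from the same population.
samples = rng.normal(mu, np.sqrt(sigma2), size=(n_reps, n))

# Two estimators of sigma^2 computed on every sample.
mle = samples.var(axis=1, ddof=0)       # divides by n (the MLE)
unbiased = samples.var(axis=1, ddof=1)  # divides by n - 1

for name, est in [("MLE (1/n)", mle), ("unbiased (1/(n-1))", unbiased)]:
    print(f"{name:>20}: mean = {est.mean():.3f}, "
          f"bias = {est.mean() - sigma2:+.3f}, variance = {est.var():.3f}")
```

Across replications the MLE's average falls roughly sigma^2/n = 0.2 below the true value of 4, while the n - 1 version is centered on 4 at the cost of slightly higher variance, a concrete instance of the bias-variance considerations examined later in the chapter.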

The chapter opens with exponential families, a unifying framework encompassing most distributions encountered in practice. Understanding this structure reveals why certain distributions admit elegant sufficient statistics, why maximum likelihood takes a particularly simple form, and why generalized linear models work. We then develop maximum likelihood estimation—the workhorse of parametric inference—with both analytical solutions and numerical optimization. The theory of sampling variability formalizes how estimators behave across repeated samples, leading to standard errors, confidence intervals, and hypothesis tests. Next, we turn to linear models, developing least squares estimation, the Gauss-Markov theorem, and comprehensive diagnostics. The chapter culminates in generalized linear models, extending regression to non-normal responses: logistic regression for binary outcomes, Poisson regression for counts, and beyond.
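
As a brief preview of the numerical side of maximum likelihood, the sketch below (an illustrative example assuming NumPy and SciPy are available; the Gamma model, simulated data, and Nelder-Mead optimizer are choices made here for demonstration, not prescribed by the chapter) fits a two-parameter model by minimizing the negative log-likelihood.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)

# Simulate data from a known Gamma(shape=2, scale=3) so the fit can be checked.
data = rng.gamma(shape=2.0, scale=3.0, size=500)

def neg_log_lik(log_params):
    # Optimize on the log scale so both parameters stay positive.
    shape, scale = np.exp(log_params)
    return -np.sum(stats.gamma.logpdf(data, a=shape, scale=scale))

result = optimize.minimize(neg_log_lik, x0=np.zeros(2), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(result.x)
print(f"MLE: shape = {shape_hat:.2f}, scale = {scale_hat:.2f}")
```

With 500 observations the estimates land close to the generating values, and the same minimize-the-negative-log-likelihood recipe underlies the estimators developed throughout the chapter.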

Learning Objectives: Upon completing this chapter, you will be able to:

Exponential Families

  • Recognize exponential family distributions and convert to canonical form

  • Extract moments from the log-partition function via differentiation

  • Apply the Neyman-Fisher factorization theorem to identify sufficient statistics

  • Construct conjugate priors for exponential family likelihoods

Maximum Likelihood Estimation

  • Derive maximum likelihood estimators analytically for common distributions

  • Implement numerical MLE via Newton-Raphson, Fisher scoring, and gradient methods

  • State asymptotic properties: consistency, normality, efficiency, and invariance

  • Construct likelihood ratio, Wald, and score tests for parametric hypotheses

Sampling Variability

  • Analyze estimator properties: bias, variance, mean squared error, consistency

  • Apply the delta method to derive standard errors for transformed parameters

  • Distinguish exact sampling distributions from asymptotic approximations

  • Implement robust standard errors when model assumptions may be violated

Linear Models

  • Derive OLS estimators via calculus and projection geometry

  • State and prove the Gauss-Markov theorem establishing OLS as BLUE

  • Implement residual diagnostics and influence measures

  • Apply robust standard errors (HC0–HC3) under heteroskedasticity

Generalized Linear Models

  • Specify GLMs through random component, systematic component, and link function

  • Implement IRLS as Fisher scoring for exponential family responses

  • Diagnose model fit using deviance residuals and overdispersion tests

  • Handle separation in logistic regression and overdispersion in count models

Sections