STAT 350: Introduction to Statistics

Course Content

  • Part I: Foundations of Probability and Computation
    • Chapter 1: Statistical Paradigms and Core Concepts
      • Paradigms of Probability and Statistical Inference
        • The Mathematical Foundation: Kolmogorov’s Axioms
        • Interpretations of Probability
        • Statistical Inference Paradigms
        • Historical and Philosophical Debates
        • Bringing It All Together
        • Looking Ahead: Our Course Focus
        • References and Further Reading
      • Probability Distributions: Theory and Computation
        • From Abstract Foundations to Concrete Tools
        • The Python Ecosystem for Probability
        • Introduction: Why Probability Distributions Matter
        • Discrete Distributions
        • Continuous Distributions
        • Additional Important Distributions
        • Summary and Practical Guidelines
        • Conclusion
        • References and Further Reading
      • Python Random Generation
        • From Mathematical Distributions to Computational Samples
        • The Python Ecosystem at a Glance
        • Understanding Pseudo-Random Number Generation
        • The Standard Library: random Module
        • NumPy: Fast Vectorized Random Sampling
        • SciPy Stats: The Complete Statistical Toolkit
        • Bringing It All Together: Library Selection Guide
        • Looking Ahead: From Random Numbers to Monte Carlo Methods
        • References and Further Reading
      • Chapter 1 Summary: Foundations in Place
        • The Three Pillars of Chapter 1
        • How the Pillars Connect
        • What Lies Ahead: The Road to Simulation
        • Chapter 1 Exercises: Synthesis Problems
        • References and Further Reading
  • Part II: Simulation-Based Methods
    • Chapter 2: Monte Carlo Simulation
      • Monte Carlo Fundamentals
        • The Historical Development of Monte Carlo Methods
        • The Core Principle: Expectation as Integration
        • Theoretical Foundations
        • Variance Estimation and Confidence Intervals
        • Worked Examples
        • Comparison with Deterministic Methods
        • Sample Size Determination
        • Convergence Diagnostics and Monitoring
        • Practical Considerations
        • Section 2.1 Exercises: Monte Carlo Fundamentals Mastery
        • Bringing It All Together
        • Transition to What Follows
        • References
      • Uniform Random Variates
        • Why Uniform? The Universal Currency of Randomness
        • The Paradox of Computational Randomness
        • Chaotic Dynamical Systems: An Instructive Failure
        • Linear Congruential Generators
        • Shift-Register Generators
        • The KISS Generator: Combining Strategies
        • Modern Generators: Mersenne Twister and PCG
        • Statistical Testing of Random Number Generators
        • Practical Considerations
        • Section 2.2 Exercises: Uniform Random Variates Mastery
        • Bringing It All Together
        • Transition to What Follows
        • References
      • Inverse CDF Method
        • Mathematical Foundations
        • Continuous Distributions with Closed-Form Inverses
        • Numerical Inversion
        • Discrete Distributions
        • Mixed Distributions
        • Practical Considerations
        • Section 2.3 Exercises: Inverse CDF Method Mastery
        • Bringing It All Together
        • Transition to What Follows
        • References
      • Transformation Methods
        • Why Transformation Methods?
        • The Box–Muller Transform
        • The Polar (Marsaglia) Method
        • Method Comparison: Box–Muller vs Polar vs Ziggurat
        • The Ziggurat Algorithm
        • The CLT Approximation (Historical)
        • Distributions Derived from the Normal
        • Multivariate Normal Generation
        • Implementation Guidance
        • Section 2.4 Exercises: Transformation Methods Mastery
        • Bringing It All Together
        • References
      • Rejection Sampling
        • The Dartboard Intuition
        • The Accept-Reject Algorithm
        • Efficiency Analysis
        • Choosing the Proposal Distribution
        • Python Implementation
        • The Squeeze Principle
        • Geometric Example: Sampling from the Unit Disk
        • Worked Examples
        • Limitations and the Curse of Dimensionality
        • Connections to Other Methods
        • Practical Considerations
        • Section 2.5 Exercises: Rejection Sampling Mastery
        • Bringing It All Together
        • References
      • Variance Reduction Methods
        • The Variance Reduction Paradigm
        • Importance Sampling
        • Control Variates
        • Antithetic Variates
        • Stratified Sampling
        • Common Random Numbers
        • Conditional Monte Carlo (Rao–Blackwellization)
        • Combining Variance Reduction Techniques
        • Practical Considerations
        • Bringing It All Together
        • Section 2.6 Exercises: Variance Reduction Mastery
        • References
      • Chapter Summary
        • The Complete Monte Carlo Workflow
        • Method Selection Guide
        • Quick Reference Tables
        • Common Pitfalls Checklist
        • Connections to Later Chapters
        • Learning Outcomes Checklist
        • Further Reading: Optimization and Missing Data
        • Final Perspective
        • References
    • Chapter 3: Parametric Inference and Likelihood Methods
      • Exponential Families
        • Historical Origins: From Scattered Results to Unified Theory
        • The Canonical Exponential Family
        • Converting Familiar Distributions
        • The Log-Partition Function: A Moment-Generating Machine
        • Sufficiency: Capturing All Parameter Information
        • Minimal Sufficiency and Completeness
        • Conjugate Priors and Bayesian Inference
        • Exponential Dispersion Models and GLMs
        • Python Implementation
        • Practical Considerations
        • Section 3.1 Exercises: Exponential Families Mastery
        • Bringing It All Together
        • References
      • Maximum Likelihood Estimation
        • The Likelihood Function
        • The Score Function
        • Fisher Information
        • Closed-Form Maximum Likelihood Estimators
        • Numerical Optimization for MLE
        • Asymptotic Properties of MLEs
        • The Cramér–Rao Lower Bound
        • The Invariance Property
        • Likelihood-Based Hypothesis Testing
        • Confidence Intervals from Likelihood
        • Practical Considerations
        • Connection to Bayesian Inference
        • Section 3.2 Exercises: Maximum Likelihood Estimation Mastery
        • Bringing It All Together
        • References
      • Sampling Variability and Variance Estimation
        • Statistical Estimators and Their Properties
        • Sampling Distributions
        • The Delta Method
        • The Plug-in Principle
        • Variance Estimation Methods
        • Applications and Worked Examples
        • Practical Considerations
        • Bringing It All Together
        • Exercises
        • References
      • Linear Models
        • Matrix Calculus Foundations
        • The Linear Model
        • Ordinary Least Squares: The Calculus Approach
        • Ordinary Least Squares: The Geometric Approach
        • Properties of the OLS Estimator
        • The Gauss–Markov Theorem
        • Estimating the Error Variance
        • Distributional Results Under Normality
        • Diagnostics and Model Checking
        • Bringing It All Together
        • Numerical Stability: QR Decomposition
        • Model Selection and Information Criteria
        • Regularization: Ridge and LASSO
        • Section 3.4 Exercises: Linear Models Mastery
        • References
      • Generalized Linear Models
        • Historical Context: Unification of Regression Methods
        • The GLM Framework: Three Components
        • Score Equations and Fisher Information
        • Iteratively Reweighted Least Squares
        • Logistic Regression: Binary Outcomes
        • Poisson Regression: Count Data
        • Gamma Regression: Positive Continuous Data
        • Inference in GLMs: The Testing Triad
        • Model Diagnostics
        • Model Comparison and Selection
        • Quasi-Likelihood and Robust Inference
        • Practical Considerations
        • Bringing It All Together
        • Further Reading
        • Section 3.5 Exercises: Generalized Linear Models Mastery
        • References
      • Chapter Summary
        • The Parametric Inference Pipeline
        • The Five Pillars of Chapter 3
        • How the Pillars Connect
        • Method Selection Guide
        • Quick Reference: Core Formulas
        • Connections to Future Material
        • Practical Guidance
        • Final Perspective
        • References
    • Chapter 4: Resampling Methods
      • The Sampling Distribution Problem
        • The Fundamental Target: Sampling Distributions
        • Historical Development: The Quest for Sampling Distributions
        • Three Routes to the Sampling Distribution
        • When Asymptotics Fail: Motivating the Bootstrap
        • The Plug-In Principle: Theoretical Foundation
        • Computational Perspective: Bootstrap as Monte Carlo
        • Practical Considerations
        • Bringing It All Together
        • Section 4.1 Exercises
        • References
      • The Empirical Distribution and Plug-in Principle
        • The Empirical Cumulative Distribution Function
        • Convergence of the Empirical CDF
        • Parameters as Statistical Functionals
        • The Plug-in Principle
        • When the Plug-in Principle Fails
        • The Bootstrap Idea in One Sentence
        • Computational Implementation
        • Bringing It All Together
        • Section 4.2 Exercises: ECDF and Plug-in Mastery
        • References
      • The Nonparametric Bootstrap
        • The Bootstrap Principle
        • Bootstrap Standard Errors
        • Bootstrap Bias Estimation
        • Bootstrap Confidence Intervals
        • Bootstrap for Regression
        • Bootstrap Diagnostics
        • When Bootstrap Fails
        • Practical Considerations
        • Bringing It All Together
        • Exercises
        • References
      • The Parametric Bootstrap
        • The Parametric Bootstrap Principle
        • Location-Scale Families
        • Parametric Bootstrap for Regression
        • Confidence Intervals
        • Model Checking and Validation
        • When Parametric Bootstrap Fails
        • Parametric vs. Nonparametric: A Decision Framework
        • Practical Considerations
        • Bringing It All Together
        • References
      • Jackknife Methods
        • Historical Context and Motivation
        • The Delete-1 Jackknife
        • Jackknife Bias Estimation
        • The Delete-d Jackknife
        • Jackknife versus Bootstrap
        • The Infinitesimal Jackknife
        • Practical Considerations
        • Bringing It All Together
        • Exercises
        • References
  • Part III: Bayesian Methods
    • Overview
      • Bayesian Philosophy
        • Introduction
        • Key Concepts
        • Mathematical Framework
        • Python Implementation
        • Examples
        • Summary