Mathematical Statistics
The only difference between statistics majors and non-majors is, in fact, mathematical statistics.
There is no corner of modern society where statistics is not used. Whatever field of science one pursues, reaching any significant level, whether through hypothesis testing or analytical techniques, requires learning statistics. The door to statistics is wide open to countless non-majors, and in many cases domain experts use it more fluently in their own fields than statistics majors do.
How, then, are these experts distinguished from those who have formally majored in statistics? Certainly, one might point to the fact that majors learn numerous techniques in depth without being limited by the domain of the data, and that they develop a strong intuition for various types of data through experience. However, the most fundamental difference lies in the level of mathematical understanding.
Majoring in statistics is not merely about reviewing a few proofs of certain techniques and understanding the underlying mathematics. It requires an appreciation of the overarching concepts that permeate all of statistics, a clear grasp of the relationships among widely known distributions, and the ability to quickly understand the principles behind any new method encountered. Mathematical statistics is precisely the study and training geared toward developing that level of understanding, dealing with the mathematical theories that support the entirety of statistics.
Probability Theory
Material at the level that requires measure theory has been set aside in the separate Probability Theory category.
Univariate Random Variables
- Random Variables and Probability Distributions
- Linear Combinations of Random Variables
- Convergence in Probability
- Convergence in Distribution
- Tightness
- Proof of the Weak Law of Large Numbers
- Proof of the Central Limit Theorem
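For orientation, the last two items prove the following standard statements: for i.i.d. variables $X_1, \dots, X_n$ with mean $\mu$ and finite variance $\sigma^2$,

$$
\overline{X}_n \overset{P}{\to} \mu
\qquad \text{and} \qquad
\sqrt{n} \, \frac{\overline{X}_n - \mu}{\sigma} \overset{D}{\to} N(0, 1).
$$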
Multivariate Random Vectors
- Multivariate Probability Distribution
- Transformation of Multivariate Random Variables
- Conditional Probability Distribution
- Independence of Random Variables
- Convergence in Probability for Multivariate Random Variables
- Convergence in Distribution for Multivariate Random Variables
- Principal Component Analysis (PCA)
- 🔒(25/05/06) Mixture Distributions
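A useful bridge between the univariate and multivariate convergence posts is the Cramér–Wold device, stated here for reference: a sequence of random vectors converges in distribution exactly when every fixed linear combination of its components does,

$$
\mathbf{X}_n \overset{D}{\to} \mathbf{X}
\iff
\mathbf{a}^{T} \mathbf{X}_n \overset{D}{\to} \mathbf{a}^{T} \mathbf{X}
\quad \text{for every fixed } \mathbf{a} \in \mathbb{R}^{k}.
$$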
Moments
- Definitions of Expected Value, Mean, Variance, and Moments
- Covariance
- Skewness
- Kurtosis
- What is a Moment Generating Function?
- Weighted Mean
- Pooled Variance
- Expected Value of a Random Vector
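For quick reference, the moment generating function in the list above is $M_X(t) = E\left( e^{tX} \right)$, and whenever it exists on a neighborhood of $0$ it produces every moment by differentiation:

$$
E\left( X^{n} \right) = M_X^{(n)}(0) = \left. \frac{d^{n}}{dt^{n}} M_X(t) \right|_{t = 0}.
$$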
Theory of Probability Distributions
The theory of probability distributions, as taught in mathematical statistics, is extremely important; at Raw Shrimp Sushi House, however, its scope has grown so vast, and now covers topics beyond mathematical statistics, that it has been given its own category.
- Summation Theorem for Random Variables with a Specific Distribution
- Student’s Theorem
- For Two Jointly Normal Random Variables, Independence Is Equivalent to Zero Covariance
- A Mathematical Statistics Proof of Stirling’s Approximation
- Delta Method
- Exponential Family of Distributions
- Location Family
- Scale Family
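Two of the items above admit one-line summaries worth keeping at hand. A family of densities is an exponential family when it can be written as

$$
f(x \mid \theta) = h(x) c(\theta) \exp \left( \sum_{i=1}^{k} w_{i}(\theta) t_{i}(x) \right),
$$

and the delta method says that if $\sqrt{n} (X_n - \theta) \overset{D}{\to} N(0, \sigma^{2})$ and $g'(\theta)$ exists and is nonzero, then

$$
\sqrt{n} \left( g(X_n) - g(\theta) \right) \overset{D}{\to} N \left( 0, \sigma^{2} g'(\theta)^{2} \right).
$$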
Quadratic Forms
- 🔒(25/04/09) Quadratic Form of a Random Vector
- 🔒(25/04/13) Expected Value of a Quadratic Form of a Random Vector
- 🔒(25/04/17) Sum of Squared Deviations Represented as a Quadratic Form of a Random Vector
- 🔒(25/04/21) Moment Generating Function of a Quadratic Form of a Normally Distributed Random Vector
- 🔒(25/04/25) Equivalence Conditions for the Chi-Square Property of a Quadratic Form of a Normally Distributed Random Vector
- 🔒(25/05/11) Proof of Craig’s Theorem
- 🔒(25/05/19) Proof of the Hogg–Craig Theorem
- 🔒(25/05/27) Proof of Cochran’s Theorem
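Most of the quadratic-form posts are still locked, but the key formula they build toward can be stated now: for a random vector $\mathbf{X}$ with mean $\boldsymbol{\mu}$ and covariance matrix $\Sigma$, and any fixed matrix $A$,

$$
E \left( \mathbf{X}^{T} A \mathbf{X} \right) = \operatorname{tr}(A \Sigma) + \boldsymbol{\mu}^{T} A \boldsymbol{\mu}.
$$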
Statistical Inference
Test Statistics
Unbiased Estimation
- Unbiased Estimator
- Efficient Estimator
- Uniformly Minimum Variance Unbiased Estimator (UMVUE)
- Rao–Blackwell Theorem
- Lehmann–Scheffé Theorem
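The benchmark tying these posts together is the Cramér–Rao lower bound. Under the usual regularity conditions, an unbiased estimator $W(\mathbf{X})$ of $\theta$ from an i.i.d. sample of size $n$ satisfies

$$
\operatorname{Var} W(\mathbf{X}) \ge \frac{1}{n I(\theta)},
\qquad
I(\theta) = E \left[ \left( \frac{\partial}{\partial \theta} \log f(X \mid \theta) \right)^{2} \right],
$$

and an estimator attaining the bound is called efficient.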
Sufficient Statistics
- Sufficient Statistics
- Neyman Factorization Theorem
- Minimal Sufficient Statistics
- Ancillary Statistics
- Complete Statistics
- Proof of Basu’s Theorem
Likelihood Estimation
- Likelihood Function
- Maximum Likelihood Estimator (MLE)
- Invariance Property of the Maximum Likelihood Estimator
- The Unique Maximum Likelihood Estimator is a Function of the Sufficient Statistic
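In the notation these posts share, the likelihood of an i.i.d. sample is $L(\theta \mid \mathbf{x}) = \prod_{i=1}^{n} f(x_{i} \mid \theta)$, the MLE is $\hat{\theta} = \operatorname{argmax}_{\theta} L(\theta \mid \mathbf{x})$, and the invariance property states that the MLE of any function of the parameter is that function of the MLE:

$$
\widehat{g(\theta)} = g ( \hat{\theta} ).
$$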
Hypothesis Testing
- Definition of Hypothesis Testing
- Power Function
- Likelihood Ratio Test (LRT)
- Power Function of an Unbiased Test and the Uniformly Most Powerful (UMP) Test
- Monotone Likelihood Ratio (MLR)
- p-value
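For reference, the likelihood ratio test statistic treated above is

$$
\lambda(\mathbf{x}) = \frac{\sup_{\theta \in \Theta_{0}} L(\theta \mid \mathbf{x})}{\sup_{\theta \in \Theta} L(\theta \mid \mathbf{x})},
$$

with the null hypothesis rejected for small values, $\lambda(\mathbf{x}) \le c$.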
Interval Estimation
- Interval Estimator
- One-to-One Correspondence Between Hypothesis Tests and Confidence Intervals
- Uniformly Most Accurate Confidence Set (UMA)
- Probabilistic Inversion and Confidence Intervals
- Shortest Confidence Interval for Unimodal Distributions
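A standard example running through these posts: for an i.i.d. normal sample, $\left( \overline{X} - \mu \right) / \left( S / \sqrt{n} \right) \sim t_{n-1}$ is a pivot, and inverting it yields the familiar interval

$$
\overline{X} \pm t_{n-1, \alpha/2} \frac{S}{\sqrt{n}}.
$$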
Bayesian
- Proof of Bayes’ Theorem and Prior and Posterior Distributions
- Bayesian Paradigm
- Laplace’s Rule of Succession
- Conjugate Prior Distribution
- Laplace Prior Distribution
- Jeffreys Prior
- Credible Interval
- Hypothesis Testing via Bayes Factors
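The thread connecting these posts is the proportional form of Bayes' theorem, $\pi(\theta \mid \mathbf{x}) \propto f(\mathbf{x} \mid \theta) \, \pi(\theta)$. The classic conjugate example: a $\text{Beta}(\alpha, \beta)$ prior combined with a $\text{Binomial}(n, \theta)$ likelihood gives the posterior

$$
\theta \mid x \sim \text{Beta}(\alpha + x, \; \beta + n - x).
$$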
References
- Casella. (2001). Statistical Inference (2nd Edition)
- Hogg et al. (2013). Introduction to Mathematical Statistics (7th Edition)
- 김달호. (2013). Bayesian Statistics Using R and WinBUGS [R과 WinBUGS를 이용한 베이지안 통계학]
All posts
- Proof of Bayes' Theorem and Prior and Posterior Distributions
- Summation Theorem for Random Variables Following a Specific Distribution
- Distinguishing the Sample Standard Deviation from the Standard Error
- Differences between the Monte Carlo Method and Bootstrapping
- Viewing the Monty Hall Dilemma through Bayes' Theorem
- Bayesian Paradigm
- Laplace's Rule of Succession
- Conjugate Prior Distribution
- Laplace Prior Distribution
- Jeffreys Prior Distribution
- Three Measures of Central Tendency: Mode, Median, Mean
- Confidence Intervals
- Differences Between Credible Intervals and Confidence Intervals
- Highest Posterior Density Credible Interval
- Hypothesis Testing via Bayes Factors
- Probability and the Addition Law of Probability in Mathematical Statistics
- Random Variables and Probability Distributions in Mathematical Statistics
- Expectation, Mean, Variance, and Moments in Mathematical Statistics
- Mathematical Proof of the Properties of Measures of Central Tendency
- Properties of Mean and Variance
- Pearson Correlation Coefficient
- Various Properties of Covariance
- Skewness in Mathematical Statistics
- Kurtosis in Mathematical Statistics
- What is the Moment Generating Function?
- If the nth Moment Exists, Then All Lower-Order Moments Also Exist
- Multivariate Probability Distributions in Mathematical Statistics
- Transformation of Multivariate Random Variables
- Conditional Probability Distributions in Mathematical Statistics
- Independence of Random Variables in Mathematical Statistics
- Independence and iid of Random Variables
- Bernstein's Example: Pairwise Independence Does Not Imply Mutual Independence
- The Equivalence Between Two Jointly Normal Random Variables Being Independent and Having a Covariance of Zero
- Linear Combinations of Random Variables
- Random Sampling in Mathematical Statistics
- Statistics and Estimators in Mathematical Statistics
- Easy Definition of Confidence Intervals
- Bias in Mathematical Statistics
- Bias–Variance Trade-off
- Unbiased Estimator
- Reason for Dividing the Sample Variance by n-1
- Order Statistics
- Convergence in Probability in Mathematical Statistics
- Convergence in Distribution in Mathematical Statistics
- Boundedness in Probability (Tightness) in Mathematical Statistics
- Convergence in Probability Implies Convergence in Distribution
- Convergence in Distribution Implies Boundedness in Probability
- Proof of the Weak Law of Large Numbers
- Proof of the Central Limit Theorem
- Covariance Matrix
- Convergence in Probability for Multivariate Random Variables
- Convergence in Distribution for Multivariate Random Variables
- Proof of Student's Theorem
- Consistent Estimator
- Maximum Likelihood Estimator
- Regularity Conditions in Mathematical Statistics
- Fisher Information
- Bartlett's Identity
- Rao-Blackwell-Kolmogorov Theorem
- Efficient Estimator
- Sufficient Statistic
- Neyman Factorization Theorem Proof
- Rao-Blackwell Theorem Proof
- Expectation of a Sum of Functions of Random Variables
- Convolution Formula of Probability Density Functions
- Exponential Family of Probability Distributions
- Statistical Proof of Stirling's Formula
- Delta Method in Mathematical Statistics
- Definition of Likelihood Function
- Minimal Sufficient Statistic
- Ancillary Statistics
- Location Family
- Scale Family
- Complete Statistics
- Proof of Basu's Theorem
- Method of Moments
- Complete Statistics of the Exponential Family of Probability Distributions
- The Variance of an Unbiased Estimator Given a Sufficient Statistic is Minimized
- Ancillary Statistics of Location-Scale Families
- Satterthwaite Approximation
- Proof of the Invariance Property of the Maximum Likelihood Estimator
- Unbiased Estimators and the Cramér-Rao Bound
- Best Unbiased Estimator: the Uniformly Minimum Variance Unbiased Estimator (UMVUE)
- Uniqueness of the Minimum Variance Unbiased Estimator
- Lehmann–Scheffé Theorem Proof
- The Unique Maximum Likelihood Estimator Is a Function of the Sufficient Statistic
- Mean and Variance of the Sample Mean of a Random Sample
- Definition of Hypothesis Testing in Mathematical Statistics
- Sufficient Statistics and Maximum Likelihood Estimators of the Location Family
- Definition of the Likelihood Ratio Test in Mathematical Statistics
- Likelihood Ratio Test Based on a Sufficient Statistic
- Power Function of Hypothesis Testing
- Power Function of an Unbiased Test and the Uniformly Most Powerful Test
- How to Sample Univariate Random Variables
- Proof of the Neyman-Pearson Lemma
- Definition of the Monotone Likelihood Ratio
- Karlin–Rubin Theorem Proof
- Most Powerful Test Based on a Sufficient Statistic
- Mathematical Definition of the p-value
- Interval Estimator
- Definition of a Confidence Set in Mathematical Statistics
- Definition of a Pivot in Mathematical Statistics
- One-to-One Correspondence Between Hypothesis Tests and Confidence Sets
- Most Accurate Confidence Set
- Stochastically Increasing and Decreasing Functions and Confidence Intervals
- Shortest Confidence Interval for a Unimodal Distribution
- Standard Definition of Standard Error
- Definition of the Weighted Mean
- Definition of Pooled Variance
- Conditional Expectation Minimizes the Sum of Squared Deviations
- Expectation of Random Vectors
- Principal Component Analysis (PCA) in Mathematical Statistics