Mathematical Statistics
What truly separates statistics majors from non-majors is Mathematical Statistics.
It is undeniable that statistics plays a crucial role in modern society, with applications in every scientific field. Whether it is hypothesis testing or analytical technique, a thorough understanding of statistics is essential for mastery in any domain. The door to statistics stands wide open even to non-majors, and many experts use statistical methods fluently within their own fields, sometimes more adeptly than majors do. The fundamental difference, however, lies in the depth of mathematical understanding.
Majoring in statistics does not mean just skimming a few proofs and learning the mathematical background of certain techniques. It means internalizing the concepts that run through the whole of statistics, understanding clearly how the well-known distributions relate to one another, and being able to grasp quickly the principles behind new methods. Mathematical Statistics is both the study of and the training for this, encompassing the mathematical theory that underpins the entire field of statistics.
Probability Theory
For discussions involving measure theory at the level of Probability Theory, refer to the Measure Theory category.
Univariate Random Variables
- Random Variables and Probability Distributions $X$
- Linear Combinations of Random Variables
- Convergence in Probability $\overset{P}{\to}$
- Convergence in Distribution $\overset{D}{\to}$
- Bounded in Probability
- Proof of the Weak Law of Large Numbers
- Proof of the Central Limit Theorem
Multivariate Random Vectors
- Multivariate Probability Distributions $\mathbf{X}$
- Transformation of Multivariate Random Variables
- Conditional Probability Distributions
- Independence of Random Variables $X_{1} \perp X_{2}$
- Convergence in Probability for Multivariate Random Vectors
- Convergence in Distribution for Multivariate Random Vectors
- Principal Component Analysis (PCA)
Moments
- Definitions of Expectation, Mean, Variance, and Moments $E X^{t}$
- Covariance
- Skewness
- Kurtosis
- Moment Generating Function $E e^{tX}$
- Weighted Average
- Pooled Variance $s_{p}^{2}$
- Expectation of a Random Vector
Probability Distribution Theory
Probability distribution theory is a crucial but expansive part of mathematical statistics. Because of its vast scope, and because it includes topics beyond mathematical statistics proper, it has been given its own category on this blog.
- Comprehensive Summary of Sums of Random Variables with Specific Distributions
- Student’s Theorem
- Independence of Jointly Normal Random Variables is Equivalent to Zero Covariance
- Proof of Stirling’s Approximation Formula in Mathematical Statistics
- Delta Method
- Exponential Family of Probability Distributions
- Location Family $f (x ; \theta)$
- Scale Family $f (x ; \sigma)$
Statistical Inference
Statistics
- Statistic and Estimator $T$
- Pivot $Q(\mathbf{X}, \theta)$
- Order Statistic $X_{(k)}$
- Consistent Estimator $T_{n} \overset{P}{\to} \theta$
- Method of Moments
Unbiased Estimation
- Unbiased Estimator $ET = \theta$
- Efficient Estimator
- Best Unbiased Estimator (UMVUE)
- Rao-Blackwell Theorem
- Lehmann-Scheffé Theorem
Sufficient Statistics
- Sufficient Statistic
- Neyman Factorization Theorem
- Minimal Sufficient Statistic
- Ancillary Statistic
- Complete Statistic
- Basu’s Theorem Proof
Likelihood Estimation
- Likelihood Function $L(\theta | \mathbf{x})$
- Maximum Likelihood Estimator (MLE)
- Invariance Property of MLE
- The Unique MLE is a Function of a Sufficient Statistic
Hypothesis Testing
- Definition of Hypothesis Testing $H_{0} \text{ vs } H_{1}$
- Power Function $\beta (\theta)$
- Likelihood Ratio Test (LRT)
- Power Function of an Unbiased Test and the Uniformly Most Powerful Test (UMP)
- Monotone Likelihood Ratio (MLR)
- p-Value $p(\mathbf{X})$
Interval Estimation
- Interval Estimator $\left[ L, U \right]$
- One-to-one Correspondence between Hypothesis Testing and Confidence Intervals
- Uniformly Most Accurate Confidence Set (UMA)
- Stochastically Increasing and Decreasing Functions and Confidence Intervals
- Shortest Confidence Interval for Unimodal Distributions
Bayesian
- Proof of Bayes’ Theorem and Prior, Posterior Distributions
- Bayesian Paradigm
- Laplace's Rule of Succession
- Conjugate Prior
- Laplace Prior
- Jeffreys Prior
- Credible Interval
- Hypothesis Testing using Bayes Factor
References
- Casella. (2001). Statistical Inference (2nd Edition)
- Hogg et al. (2013). Introduction to Mathematical Statistics (7th Edition)
- 김달호. (2013). Bayesian Statistics Using R and WinBUGS [R과 WinBUGS를 이용한 베이지안 통계학]
All posts
- Proof of Bayes' Theorem and Prior, Posterior Distributions
- Comprehensive Summary of Sums of Random Variables with Specific Distributions
- Distinguishing the Sample Standard Deviation from the Standard Error
- Differences between the Monte Carlo Method and Bootstrapping
- Viewing the Monty Hall Problem through Bayes' Theorem
- Bayesian Paradigm
- Laplace's Rule of Succession
- Conjugate Prior Distribution
- Laplace Prior Distribution
- Jeffreys Prior Distribution
- Three Measures of Central Tendency: Mode, Median, and Mean
- Confidence Intervals
- Differences between Credible Intervals and Confidence Intervals
- Highest Posterior Density Credible Interval
- Hypothesis Testing Using the Bayes Factor
- Probability and the Addition Law of Probability in Mathematical Statistics
- Random Variables and Probability Distributions in Mathematical Statistics
- Expectation, Mean, Variance, and Moments in Mathematical Statistics
- Mathematical Proof of the Properties of Measures of Central Tendency
- Properties of Mean and Variance
- Pearson Correlation Coefficient
- Various Properties of Covariance
- Skewness in Mathematical Statistics
- Kurtosis in Mathematical Statistics
- What is the Moment Generating Function?
- If the nth Moment Exists, All Lower-Order Moments Also Exist
- Multivariate Probability Distributions in Mathematical Statistics
- Transformation of Multivariate Random Variables
- Conditional Probability Distributions in Mathematical Statistics
- Independence of Random Variables in Mathematical Statistics
- Independence and iid of Random Variables
- Bernstein Distributions: Pairwise Independence Does Not Imply Mutual Independence
- Independence of Jointly Normal Random Variables is Equivalent to Zero Covariance
- Linear Combinations of Random Variables
- Random Sampling in Mathematical Statistics
- Statistics and Estimators in Mathematical Statistics
- Easy Definition of Confidence Intervals
- Bias in Mathematical Statistics
- Bias-Variance Trade-off
- Unbiased Estimator
- Reason for Dividing the Sample Variance by n-1
- Order Statistics
- Convergence in Probability in Mathematical Statistics
- Convergence in Distribution in Mathematical Statistics
- Boundedness in Probability in Mathematical Statistics
- Convergence in Probability Implies Convergence in Distribution
- Convergence in Distribution Implies Boundedness in Probability
- Proof of the Weak Law of Large Numbers
- Central Limit Theorem Proof
- Covariance Matrix
- Convergence in Probability for Multivariate Random Vectors
- Convergence in Distribution for Multivariate Random Vectors
- Proof of Student's Theorem
- Consistent Estimator
- Maximum Likelihood Estimator
- Regularity Conditions in Mathematical Statistics
- Fisher Information
- Bartlett's Identity
- Rao-Blackwell-Kolmogorov Theorem
- Efficient Estimator
- Sufficient Statistic
- Neyman Factorization Theorem Proof
- Rao-Blackwell Theorem Proof
- Expectation of a Sum of Functions of Random Variables
- Convolution Formula of Probability Density Functions
- Exponential Family of Probability Distributions
- Statistical Proof of Stirling's Formula
- Delta Method in Mathematical Statistics
- Definition of Likelihood Function
- Minimal Sufficient Statistic
- Ancillary Statistic
- Location Family
- Scale Families
- Complete Statistic
- Proof of Basu's Theorem
- Method of Moments
- Complete Statistics of the Exponential Family of Probability Distributions
- An Unbiased Estimator Conditioned on a Sufficient Statistic Has Smaller Variance
- Ancillary Statistics of Location-Scale Families
- Satterthwaite Approximation
- Proof of the Invariance Property of the Maximum Likelihood Estimator
- Unbiased Estimators and the Cramér-Rao Bound
- Best Unbiased Estimator: the Uniformly Minimum Variance Unbiased Estimator (UMVUE)
- Uniqueness of the Minimum Variance Unbiased Estimator
- Lehmann-Scheffé Theorem Proof
- A Unique Maximum Likelihood Estimator is a Function of the Sufficient Statistic
- Mean and Variance of the Sample Mean of a Random Sample
- Definition of Hypothesis Testing in Mathematical Statistics
- Sufficient Statistics and Maximum Likelihood Estimators of the Location Family
- Definition of Likelihood Ratio Test in Mathematical Statistics
- Likelihood Ratio Test Based on a Sufficient Statistic
- Power Function of Hypothesis Testing
- Power Function of an Unbiased Test and the Uniformly Most Powerful Test
- How to Sample Univariate Random Variables
- Proof of the Neyman-Pearson Lemma
- Definition of the Monotone Likelihood Ratio
- Karlin-Rubin Theorem Proof
- Most Powerful Test Based on a Sufficient Statistic
- Mathematical Definition of the p-Value
- Interval Estimator
- Definition of a Mathematical-Statistical Confidence Set
- Definition of a Pivot in Mathematical Statistics
- One-to-One Correspondence between Hypothesis Tests and Confidence Sets
- Most Accurate Confidence Set
- Stochastically Increasing and Decreasing Functions and Confidence Intervals
- Shortest Confidence Interval for Unimodal Distributions
- Definition of the Standard Error
- Definition of Weighted Average
- Definition of Pooled Variance
- Conditional Expectation Minimizes the Sum of Squared Deviations
- Expectation of Random Vectors
- Principal Component Analysis (PCA) in Mathematical Statistics