Let’s assume a constant $\theta \in \mathbb{R}$ and a sequence of random variables $\{ Y_n \}_{n \in \mathbb{N}}$ such that $\sqrt{n} (Y_n - \theta)$ converges in distribution to the normal distribution $N(0, \sigma^2)$.
$[r-1]$ is the set $\{ 1, \cdots, r-1 \}$ comprising the natural numbers up to $r-1$.
Explanation
The Delta Method is widely used in mathematical statistics as an auxiliary theorem for establishing many convergences in distribution.
Example
For instance, if the mean $\mu$ and variance $\operatorname{Var} X$ of $X$ are known, a first-order Taylor expansion gives
$$ g(X) \approx g(\mu) + g'(\mu) (X - \mu) $$
and therefore
$$ E g(X) \approx g(\mu) \qquad \operatorname{Var} g(X) \approx \left[ g'(\mu) \right]^2 \operatorname{Var} X $$
If one wonders about the mean and variance of the inverse of $X$, the following result can be obtained for the function $g(x) := 1/x$:
$$ E \frac{1}{X} \approx \frac{1}{\mu} \qquad \operatorname{Var} \frac{1}{X} \approx \left( \frac{1}{\mu} \right)^4 \operatorname{Var} X $$
Although this is not precisely the statement of the Delta Method itself, it should call to mind the Delta Method’s role as a tool for handling functions of random variables.
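As a sanity check on these approximations, here is a minimal Monte Carlo sketch. The choice $X \sim N(\mu, s^2)$ with $\mu$ far from zero (so that $1/X$ is well behaved) and all parameter values are illustrative assumptions, not part of the theorem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: X ~ N(mu, s^2) with mu far from 0 so 1/X behaves well
mu, s = 5.0, 0.5
X = rng.normal(mu, s, size=1_000_000)

# Monte Carlo estimates of the mean and variance of 1/X
mc_mean = np.mean(1 / X)
mc_var = np.var(1 / X)

# Approximations from g(x) = 1/x, with g'(x) = -1/x^2
approx_mean = 1 / mu          # E[1/X]   ~ g(mu)
approx_var = s**2 / mu**4     # Var[1/X] ~ [g'(mu)]^2 Var X

print(f"E[1/X]:   MC {mc_mean:.5f}   vs  approx {approx_mean:.5f}")
print(f"Var[1/X]: MC {mc_var:.6f}  vs  approx {approx_var:.6f}")
```

The two columns should agree to a few decimal places, and they agree better the smaller $s$ is relative to $\mu$.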
Proof
Strategy: the proof essentially comes down to a Taylor expansion followed by Slutsky’s theorem.
Slutsky’s theorem: For constants $a, b$ and random variables $A_n, B_n, X_n, X$, if $A_n \overset{P}{\to} a$, $B_n \overset{P}{\to} b$, and $X_n \overset{D}{\to} X$, then
$$ A_n + B_n X_n \overset{D}{\to} a + b X $$
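A quick numerical illustration of Slutsky’s theorem, sketched under arbitrary illustrative choices: $X_n = \sqrt{n}(\bar{Y} - \mu)/\sigma \overset{D}{\to} N(0,1)$ by the CLT, $A_n = 0$, and $B_n = \sigma / S_n \overset{P}{\to} 1$ for the sample standard deviation $S_n$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choice: Y_i ~ Exponential(1), so mu = sigma = 1
n, reps = 200, 50_000
Y = rng.exponential(1.0, size=(reps, n))

Ybar = Y.mean(axis=1)         # sample means
S = Y.std(axis=1, ddof=1)     # sample standard deviations, S ->P sigma = 1

# X_n = sqrt(n)(Ybar - mu)/sigma ->D N(0, 1) by the CLT,
# and B_n = sigma/S ->P 1, so Slutsky gives B_n X_n ->D N(0, 1) as well
Xn = np.sqrt(n) * (Ybar - 1.0)
BnXn = Xn / S

print("X_n:      mean %.3f, var %.3f" % (Xn.mean(), Xn.var()))
print("B_n X_n:  mean %.3f, var %.3f" % (BnXn.mean(), BnXn.var()))
```

Both lines should print a mean near $0$ and a variance near $1$.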
The proof of the generalized Delta Method is omitted.
Proof of the First-Order Delta Method
Near $Y_n = \theta$, the Taylor expansion of $g(Y_n)$ has a remainder term $R$ with $\lim_{Y_n \to \theta} R = 0$:
$$ g(Y_n) = g(\theta) + g'(\theta) (Y_n - \theta) + R $$
Moving $g(\theta)$ to the left side and multiplying by $\sqrt{n}$,
$$ \sqrt{n} \left[ g(Y_n) - g(\theta) \right] \approx g'(\theta) \sqrt{n} (Y_n - \theta) $$
Since the remainder term vanishes in probability and $\sqrt{n}(Y_n - \theta) \overset{D}{\to} N(0, \sigma^2)$, Slutsky’s theorem gives
$$ \sqrt{n} \left[ g(Y_n) - g(\theta) \right] \overset{D}{\to} N \left( 0, \sigma^2 \left[ g'(\theta) \right]^2 \right) $$
and the proof ends.
■
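A minimal simulation sketch of this first-order result; the choices $g(x) = e^x$, a normal $Y_n$ with $\sqrt{n}(Y_n - \theta) \sim N(0, \sigma^2)$ exactly, and all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Y_n distributed so that sqrt(n)(Y_n - theta) ~ N(0, sigma^2) exactly
theta, sigma, n, reps = 2.0, 1.0, 1_000, 100_000
Yn = rng.normal(theta, sigma / np.sqrt(n), size=reps)

# With g(x) = exp(x), the first-order Delta Method predicts
# sqrt(n)[g(Y_n) - g(theta)] ->D N(0, sigma^2 [g'(theta)]^2)
lhs = np.sqrt(n) * (np.exp(Yn) - np.exp(theta))
predicted_sd = sigma * np.exp(theta)   # g'(theta) = exp(theta)

print(f"empirical sd {lhs.std():.4f}  vs  predicted sd {predicted_sd:.4f}")
```

The empirical standard deviation should land near $\sigma e^{\theta} \approx 7.389$ for these choices.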
Proof of the Second-Order Delta Method
Similarly, near $Y_n = \theta$, the Taylor expansion of $g(Y_n)$ has a remainder term $R$ with $\lim_{Y_n \to \theta} R = 0$; since the second-order case assumes $g'(\theta) = 0$,
$$ \begin{align*} g(Y_n) &= g(\theta) + g'(\theta) (Y_n - \theta) + \frac{g''(\theta)}{2} (Y_n - \theta)^2 + R \\ &= g(\theta) + 0 \cdot (Y_n - \theta) + \frac{\sigma^2}{\sigma^2} \frac{g''(\theta)}{2} (Y_n - \theta)^2 + R \end{align*} $$
Again, moving $g(\theta)$ to the left side and multiplying by $n$,
$$ n \left[ g(Y_n) - g(\theta) \right] \approx \sigma^2 \frac{g''(\theta)}{2} \left( \frac{Y_n - \theta}{\sigma / \sqrt{n}} \right)^2 $$
Since $\dfrac{Y_n - \theta}{\sigma / \sqrt{n}} \overset{D}{\to} N(0, 1)$ and the square of a standard normal variable follows the chi-squared distribution $\chi_1^2$, Slutsky’s theorem gives
$$ n \left[ g(Y_n) - g(\theta) \right] \overset{D}{\to} \sigma^2 \frac{g''(\theta)}{2} \chi_1^2 $$
and the proof concludes.
■
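A matching sketch for the second-order case, taking $g(x) = x^2$ at $\theta = 0$ so that $g'(0) = 0$ and $g''(0) = 2$; all distributional and parameter choices here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# sqrt(n)(Y_n - theta) ~ N(0, sigma^2) exactly, with theta = 0
sigma, n, reps = 1.5, 1_000, 100_000
Yn = rng.normal(0.0, sigma / np.sqrt(n), size=reps)

# g(x) = x^2: the second-order Delta Method predicts
# n[g(Y_n) - g(0)] ->D sigma^2 (g''(0)/2) chi^2_1 = sigma^2 chi^2_1
scaled = n * Yn**2 / sigma**2   # should behave like a chi^2_1 sample

# chi^2_1 has mean 1 and variance 2
print(f"mean {scaled.mean():.3f} (expect 1), var {scaled.var():.3f} (expect 2)")
```

The printed mean and variance should be close to $1$ and $2$, the moments of $\chi_1^2$.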