
Delta Method in Mathematical Statistics

Theorem

Let a constant $\theta \in \mathbb{R}$ and a sequence of random variables $\left\{ Y_{n} \right\}_{n \in \mathbb{N}}$ be given such that $\sqrt{n} \left( Y_{n} - \theta \right)$ converges in distribution to the normal distribution $N \left( 0, \sigma^{2} \right)$.

First-Order Delta Method [1]

If $g'(\theta) \ne 0$ exists, then
$$ \sqrt{n} \left[ g \left( Y_{n} \right) - g(\theta) \right] \overset{D}{\to} N \left( 0, \sigma^{2} \left[ g'(\theta) \right]^{2} \right) $$
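As a quick illustration (added here, not part of the theorem statement), let $Y_{n} = \bar{X}_{n}$ be the sample mean of iid variables with mean $\mu \ne 0$ and variance $\sigma^{2}$, and take $g(x) = x^{2}$. The central limit theorem gives $\sqrt{n} \left( \bar{X}_{n} - \mu \right) \overset{D}{\to} N \left( 0, \sigma^{2} \right)$, and since $g'(\mu) = 2\mu \ne 0$,
$$ \sqrt{n} \left( \bar{X}_{n}^{2} - \mu^{2} \right) \overset{D}{\to} N \left( 0, 4 \mu^{2} \sigma^{2} \right) $$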

Second-Order Delta Method [2]

If $g'(\theta) = 0$ and $g''(\theta) \ne 0$ exist, then
$$ n \left[ g \left( Y_{n} \right) - g(\theta) \right] \overset{D}{\to} \sigma^{2} {{ g''(\theta) } \over { 2 }} \chi_{1}^{2} $$
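Continuing the illustration above, now with $\mu = 0$ (again only for illustration): here $g'(0) = 0$ and $g''(0) = 2 \ne 0$, so the second-order statement gives
$$ n \bar{X}_{n}^{2} \overset{D}{\to} \sigma^{2} {{ 2 } \over { 2 }} \chi_{1}^{2} = \sigma^{2} \chi_{1}^{2} $$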

Generalized Delta Method

Suppose that for some $k_{n}$ depending on $n$, $k_{n} \left( Y_{n} - \theta \right) \overset{D}{\to} Y$. If

  • For $k \in [r-1]$, $g^{(k)}(\theta) = 0$
  • $g^{(r)}(\theta) \ne 0$ exists
  • $g^{(r)}$ is continuous at $\theta$

Then,
$$ k_{n}^{r} \left[ g \left( Y_{n} \right) - g(\theta) \right] \overset{D}{\to} {{ g^{(r)}(\theta) } \over { r! }} Y^{r} $$
(A consistency check against the first two statements is given after the notation list below.)


  • $\overset{D}{\to}$ denotes convergence in distribution.
  • $\chi_{1}^{2}$ denotes the chi-squared distribution with one degree of freedom.
  • $g^{(k)}$ is the $k$-th derivative.
  • $[r-1]$ is the set $\left\{ 1, \cdots, r-1 \right\}$ of natural numbers up to $r-1$.
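As the consistency check promised above (added here for illustration), take $k_{n} = \sqrt{n}$ and $Y \sim N \left( 0, \sigma^{2} \right)$ as in the setting of the first two statements. For $r = 1$,
$$ \sqrt{n} \left[ g \left( Y_{n} \right) - g(\theta) \right] \overset{D}{\to} g'(\theta) Y \sim N \left( 0, \sigma^{2} \left[ g'(\theta) \right]^{2} \right) $$
and for $r = 2$,
$$ n \left[ g \left( Y_{n} \right) - g(\theta) \right] \overset{D}{\to} {{ g''(\theta) } \over { 2 }} Y^{2} = \sigma^{2} {{ g''(\theta) } \over { 2 }} \left( {{ Y } \over { \sigma }} \right)^{2} $$
where $\left( Y / \sigma \right)^{2} \sim \chi_{1}^{2}$, recovering the first- and second-order statements.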

Explanation

The Delta Method is widely used as an auxiliary theorem to establish many convergence-in-distribution results in mathematical statistics.

Example

For instance, if the mean and variance of $X$ are known, the first-order Taylor approximation
$$ g(X) \approx g(\mu) + g'(\mu) \left( X - \mu \right) $$
gives
$$ \begin{align*} E g(X) \approx & g(\mu) \\ \operatorname{Var} g(X) \approx & \left[ g'(\mu) \right]^{2} \operatorname{Var} X \end{align*} $$
If one wonders about the mean and variance of the inverse of $X$, taking $g(x) := 1/x$ yields
$$ \begin{align*} E {{ 1 } \over { X }} \approx & {{ 1 } \over { \mu }} \\ \operatorname{Var} {{ 1 } \over { X }} \approx & \left[ {{ 1 } \over { \mu }} \right]^{4} \operatorname{Var} X \end{align*} $$
Although this is not precisely the Delta Method itself, it should remind us of the Delta Method as a tool for handling functions of random variables.
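The following is a minimal Monte Carlo sketch of the approximation above, not code from the original text; the distribution, sample size, and names such as `mu`, `sigma`, and `reps` are arbitrary illustrative choices. Taking $X$ to be a sample mean keeps it concentrated around $\mu$, so the linearization is accurate.

```python
import numpy as np

# Sketch: compare the empirical mean/variance of 1/X-bar with the delta-method
# approximations E[1/X] ≈ 1/mu and Var[1/X] ≈ Var(X) / mu^4.
# All parameters below are arbitrary illustrative choices.
rng = np.random.default_rng(42)
mu, sigma, n, reps = 5.0, 1.0, 200, 10_000

# Each replication draws a sample of size n and inverts its mean;
# here Var(X-bar) = sigma^2 / n.
xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
inv = 1.0 / xbar

print("empirical mean of 1/X-bar:", inv.mean())
print("delta approximation      :", 1.0 / mu)
print("empirical var of 1/X-bar :", inv.var())
print("delta approximation      :", (sigma**2 / n) / mu**4)
```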

Proof

Strategy: essentially a Taylor expansion followed by Slutsky’s theorem.

Slutsky’s theorem: For constants $a, b$ and random variables $A_{n}, B_{n}, X_{n}, X$, if $A_{n} \overset{P}{\to} a$, $B_{n} \overset{P}{\to} b$, and $X_{n} \overset{D}{\to} X$, then
$$ A_{n} + B_{n} X_{n} \overset{D}{\to} a + b X $$

The proof of the generalized Delta Method is omitted.

Proof of the First-Order Delta Method

Near $Y_{n} = \theta$, the first-order Taylor expansion of $g \left( Y_{n} \right)$ has a remainder term $R$ with $\lim_{Y_{n} \to \theta} R = 0$:
$$ g \left( Y_{n} \right) = g(\theta) + g'(\theta) \left( Y_{n} - \theta \right) + R $$
Moving $g(\theta)$ to the left side and multiplying by $\sqrt{n}$,
$$ \sqrt{n} \left[ g \left( Y_{n} \right) - g(\theta) \right] \approx g'(\theta) \sqrt{n} \left( Y_{n} - \theta \right) $$
Since $\sqrt{n} \left( Y_{n} - \theta \right) \overset{D}{\to} N \left( 0, \sigma^{2} \right)$ and the remainder term vanishes in probability, the proof ends according to Slutsky’s theorem.

Proof of the Second-Order Delta Method

Similarly, near $Y_{n} = \theta$, the second-order Taylor expansion of $g \left( Y_{n} \right)$ has a remainder term $R$ with $\lim_{Y_{n} \to \theta} R = 0$:
$$ \begin{align*} g \left( Y_{n} \right) =& g(\theta) + g'(\theta) \left( Y_{n} - \theta \right) + {{ g''(\theta) } \over { 2 }} \left( Y_{n} - \theta \right)^{2} + R \\ =& g(\theta) + 0 \cdot \left( Y_{n} - \theta \right) + {{ \sigma^{2} } \over { \sigma^{2} }} {{ g''(\theta) } \over { 2 }} \left( Y_{n} - \theta \right)^{2} + R \end{align*} $$
Again, moving $g(\theta)$ to the left side and multiplying by $n$,
$$ n \left[ g \left( Y_{n} \right) - g(\theta) \right] \approx \sigma^{2} {{ g''(\theta) } \over { 2 }} {{ \left( Y_{n} - \theta \right)^{2} } \over { \sigma^{2}/n }} $$
Since $\left( Y_{n} - \theta \right)^{2} / \left( \sigma^{2}/n \right)$ is the square of $\sqrt{n} \left( Y_{n} - \theta \right) / \sigma$, which converges in distribution to a standard normal, it converges in distribution to $\chi_{1}^{2}$, and the proof concludes according to Slutsky’s theorem.
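As a sanity check of this limit (a minimal sketch, not part of the proof; the distribution and parameters are arbitrary choices), one can simulate $n \left[ g \left( \bar{X}_{n} \right) - g(\theta) \right]$ for $g(x) = x^{2}$ and $\theta = 0$, where the predicted limit is $\sigma^{2} \chi_{1}^{2}$:

```python
import numpy as np
from scipy import stats

# Sketch: with g(x) = x^2 and theta = 0, the 2nd-order delta method predicts
# n * (X-bar_n)^2 -> sigma^2 * chi^2_1 in distribution. Parameters are arbitrary.
rng = np.random.default_rng(0)
sigma, n, reps = 2.0, 500, 20_000

xbar = rng.normal(0.0, sigma, size=(reps, n)).mean(axis=1)
stat = n * xbar**2  # n [ g(X-bar_n) - g(theta) ] with g(x) = x^2, theta = 0

# Compare simulated quantiles of the statistic with those of sigma^2 * chi^2_1.
qs = [0.5, 0.9, 0.99]
print("simulated quantiles :", np.quantile(stat, qs))
print("sigma^2 * chi2_1    :", sigma**2 * stats.chi2.ppf(qs, df=1))
```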


  1. Casella. (2001). Statistical Inference (2nd Edition): p242.

  2. Casella. (2001). Statistical Inference (2nd Edition): p244.