Proof of Basu's Theorem
Theorem
If T(X) is a complete statistic as well as a minimal sufficient statistic, then T(X) is independent of all ancillary statistics.
Description
Basu’s theorem is one of the most important results related to sufficient statistics, allowing the very strong conclusion that certain statistics are independent. Intuitively, a sufficient statistic contains all the information about the parameter θ, and since an ancillary statistic does not depend on θ, the two naturally seem independent. This intuition alone is not enough, however: it is completeness, together with minimal sufficiency, that guarantees the independence.
A classic consequence is that the sample mean X̄ and the sample variance S² of a normal sample are independent. This fact can also be shown without Basu’s theorem, for example through Student’s theorem, but Basu’s theorem offers a more general method of proof.
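As a quick sanity check of this classic consequence, the following sketch (illustration only; the distribution parameters and simulation sizes are arbitrary choices) simulates many normal samples and estimates the correlation between X̄ and S², which should be essentially zero since the two are independent:

```python
import numpy as np

# Simulation sketch: for i.i.d. normal samples, Basu's theorem implies
# the sample mean and sample variance are independent statistics.
rng = np.random.default_rng(0)  # fixed seed for reproducibility

n, reps = 5, 100_000  # sample size and number of replications (arbitrary)
samples = rng.normal(loc=3.0, scale=2.0, size=(reps, n))

xbar = samples.mean(axis=1)        # sample mean of each replication
s2 = samples.var(axis=1, ddof=1)   # unbiased sample variance

# Independence implies zero correlation, so the estimate should be tiny
# (on the order of 1/sqrt(reps)).
corr = np.corrcoef(xbar, s2)[0, 1]
print(f"estimated corr(sample mean, sample variance) = {corr:.4f}")
```

Note that zero correlation alone does not prove independence; here it merely fails to contradict what the theorem guarantees exactly.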
Proof
Strategy: we prove the discrete case. Since S(X) is an ancillary statistic for the parameter θ, P(S(X)=s) does not depend on θ, and by the definition of a sufficient statistic, the conditional probability
P(S(X)=s∣T(X)=t)=P(X∈{x:S(x)=s}∣T(X)=t)
is also not dependent on θ. Thus, it suffices to show the following:
P(S(X)=s∣T(X)=t)=P(S(X)=s),∀t
By the law of total probability:
P(S(X)=s) = ∑_t P(S(X)=s ∣ T(X)=t) Pθ(T(X)=t)
Meanwhile, multiplying both sides of 1 = ∑_t Pθ(T(X)=t) by P(S(X)=s) gives:
P(S(X)=s)⋅1 = P(S(X)=s)⋅∑_t Pθ(T(X)=t) = ∑_t P(S(X)=s) Pθ(T(X)=t)
Defining the following statistic:
g(t):=P(S(X)=s∣T(X)=t)−P(S(X)=s)
Then, for all θ:
Eθ g(T) = ∑_t g(t) Pθ(T(X)=t) = ∑_t P(S(X)=s ∣ T(X)=t) Pθ(T(X)=t) − ∑_t P(S(X)=s) Pθ(T(X)=t) = P(S(X)=s) − P(S(X)=s) = 0
Definition of a complete statistic:
∀θ,Eθg(T)=0⟹∀θ,Pθ(g(T)=0)=1
Since T(X) is assumed to be a complete statistic, it follows that for all θ:
Pθ(g(T)=0)=1
In other words, the following holds:
P(S(X)=s∣T(X)=t)=P(S(X)=s),∀t
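To see the theorem at work in a concrete family, consider the location family N(θ, 1) (an example chosen here for illustration, not taken from the proof above): T = X̄ is complete sufficient for θ, and S = X₁ − X₂ is ancillary because its law N(0, 2) is free of θ. Basu’s theorem then predicts T and S are independent, which the following sketch checks numerically:

```python
import numpy as np

# Illustrative check of Basu's theorem in the location family N(theta, 1):
#   T = sample mean       -> complete sufficient statistic for theta
#   S = X_1 - X_2         -> ancillary: distributed N(0, 2) regardless of theta
rng = np.random.default_rng(1)
theta, n, reps = 1.7, 4, 100_000  # arbitrary parameter and simulation sizes

x = rng.normal(theta, 1.0, size=(reps, n))
t = x.mean(axis=1)       # complete sufficient statistic per replication
s = x[:, 0] - x[:, 1]    # ancillary statistic per replication

# Theoretically Cov(T, S) = 1/n - 1/n = 0, and Basu's theorem gives full
# independence; the estimated correlation should be near zero.
corr = np.corrcoef(t, s)[0, 1]
```

Here the covariance can also be verified by hand: Cov(X̄, X₁ − X₂) = Cov(X̄, X₁) − Cov(X̄, X₂) = 1/n − 1/n = 0, consistent with the simulated estimate.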
■