
Definition

Prewhitening is a method that transforms time series data into white noise before calculating the Cross-Correlation Function (CCF), so that the correlation between two series can be identified more accurately.

Practical Exercise¹

If possible, it is worth understanding mathematically why this works, but first let's see the problem it solves. As an example, consider the following data.

[Figure 1: time series plots of the bluebird data — log price and log sales]

bluebird consists of two time series: the average price and the sales volume of potato chips manufactured by the Bluebird company in New Zealand, both log-transformed. The plots suggest a strong negative correlation between sales volume and price, which we can check with the CCF.

[Figure 2: sample CCF of log price and log sales]

As expected, the CCF confirms the negative correlation we saw by eye. The question, however, is how to interpret the correlations not only at lag $k = 0$ but also around it. Whether or not there is any pattern, $X_{t-1}$, $X_{t}$, and $X_{t+1}$ should not differ greatly in value: even without spikes or dips, neighboring observations of a time series are naturally similar.

Consequently, if $X_{t}$ and $Y_{t}$ are correlated, then $X_{t}$ will also show a diluted correlation with $Y_{t-1}$ and $Y_{t+1}$. This is not evidence of a real relationship at those lags but a mathematically inevitable consequence of autocorrelation, which makes it impossible to say decisively whether a correlation exists.
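This dilution effect is easy to reproduce. Below is a minimal sketch (simulated data, not the bluebird set): two AR(1) series generated completely independently still show many lags whose sample CCF exceeds the naive $\pm 2/\sqrt{n}$ bounds.

```r
# Two *independent* AR(1) series: any apparent cross-correlation is purely
# an artifact of each series' own autocorrelation. (Simulated data.)
set.seed(42)
n <- 200
x <- arima.sim(model = list(ar = 0.9), n = n)
y <- arima.sim(model = list(ar = 0.9), n = n)  # generated independently of x

r <- ccf(x, y, plot = FALSE)

# The naive bounds assume variance 1/n, which is wrong here:
naive_bound <- 2 / sqrt(n)
exceed <- sum(abs(r$acf) > naive_bound)
exceed  # typically far more "significant" lags than the ~5% expected by chance
```

Rerunning with different seeds changes the details, but spurious exceedances almost always appear.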

[Figure 3: CCF of the prewhitened series]

After prewhitening, on the other hand, it becomes clear that there is a correlation only at $k=0$, as shown. In R this is done simply by calling prewhiten() instead of ccf(), which displays the CCF of the prewhitened data.

Prewhitening thus eliminates the correlations that arise purely mathematically. If a correlation is still evident in the CCF of the prewhitened series, the variables are judged to be genuinely correlated; conversely, if the CCF becomes insignificant after prewhitening, the apparent correlation is judged not to be real.
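To see this interpretive rule in action without TSA, here is a sketch on simulated data: x and y share a genuine lag-0 relationship, and prewhitening both series with x's fitted AR model keeps the lag-0 spike while suppressing the mathematically induced neighboring lags. ar() and stats::filter() stand in for what TSA's prewhiten() automates.

```r
# Sketch: a real lag-0 relationship survives prewhitening. (Simulated data.)
set.seed(11)
n <- 400
x <- arima.sim(model = list(ar = 0.8), n = n)
y <- 0.7 * x + rnorm(n)            # genuinely related to x at lag 0

fit  <- ar(x)                      # AR model for x, order chosen by AIC
pi_B <- c(1, -fit$ar)              # filter pi(B) = 1 - phi_1 B - ... - phi_p B^p

# Apply the SAME filter to both series, then drop the leading NAs
x_tilde <- na.omit(stats::filter(x, pi_B, method = "convolution", sides = 1))
y_tilde <- na.omit(stats::filter(y, pi_B, method = "convolution", sides = 1))

r <- ccf(x_tilde, y_tilde, plot = FALSE)
r$acf[r$lag == 0]                  # a clear positive correlation remains at lag 0
```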

Mathematical Explanation

Mathematically, the rationale for prewhitening becomes clearer. As a simple example, suppose $X_{t}$ follows an $ARIMA(1,1,0)$ model: $$ \nabla X_{t} = \phi \nabla X_{t-1} + e_{t} $$ Solving for $e_{t}$, $$ e_{t} = \nabla X_{t} - \phi \nabla X_{t-1} $$ Rewriting the difference $\nabla$ in terms of the backshift operator $B$, $$ \begin{align*} e_{t} =& (1 - B ) X_{t} - \phi (1 - B ) B X_{t} \\ =& \left[ 1 - ( 1 + \phi ) B + \phi B^2 \right] X_{t} \\ =& \left(1 - \pi_{1} B - \pi_{2} B^2 \right) X_{t} \end{align*} $$ A linear operator $\pi ( B) := 1 - \pi_{1} B - \pi_{2} B^2$ satisfying this equation is called a filter, and the process of transforming $X_{t}$ into white noise $\tilde{X_{t}}$ through this filter is called prewhitening².
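Under this ARIMA(1,1,0) assumption, the filter can be applied by hand. A minimal sketch on simulated data, reading off $\pi_{1} = 1 + \phi$ and $\pi_{2} = -\phi$ from the derivation above:

```r
# By-hand prewhitening for the ARIMA(1,1,0) example:
# e_t = [1 - (1 + phi)B + phi B^2] X_t, i.e. pi_1 = 1 + phi, pi_2 = -phi.
set.seed(1)
n <- 300
x <- cumsum(arima.sim(model = list(ar = 0.6), n = n))  # an ARIMA(1,1,0) path

fit <- arima(x, order = c(1, 1, 0))
phi <- unname(coef(fit)["ar1"])

pi_B <- c(1, -(1 + phi), phi)      # coefficients of the filter pi(B)
x_tilde <- na.omit(stats::filter(x, pi_B, method = "convolution", sides = 1))
# x_tilde should now behave like white noise
```

The two leading NAs come from the $B$ and $B^2$ terms, which have no predecessors at $t = 1, 2$.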

General ARIMA models are prewhitened in the same way. When this is done by a computer rather than by hand, a model is fitted automatically according to a model selection criterion, and its residuals are used.
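Base R's ar() already performs this kind of automatic fitting (order chosen by AIC), so a rough stand-in for the computer's job looks like this (simulated data; TSA's prewhiten() wraps essentially the same idea):

```r
# Automatic prewhitening sketch: let ar() pick the order by AIC and use
# the residuals as the (approximately) white-noise series.
set.seed(7)
x <- arima.sim(model = list(ar = c(0.5, 0.3)), n = 400)

fit <- ar(x)                   # order selected automatically by AIC
x_tilde <- na.omit(fit$resid)  # residuals = prewhitened series

fit$order                      # the order the criterion chose
acf(x_tilde, plot = FALSE)     # autocorrelations should be near zero for k >= 1
```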

Although $\tilde{X_{t}}$ is white noise, it is white only with respect to the information contained in $X_{t}$ itself. Any relationship with another variable $Y_{t}$ is not used during prewhitening, so that correlation survives in the residuals. Calculating the CCF between the prewhitened $\tilde{X_{t}}$ and $\tilde{Y_{t}}$ therefore reveals the lags at which the two series are genuinely cross-correlated.

The justification for still using the CCF is that the filter $\pi (B)$ is linear, so it preserves the relationship between $X_{t}$ and $Y_{t}$. Assuming $X_{t}$ and $Y_{t}$ are independent, the sample CCF $r_{k}$ approximately follows a normal distribution: $$ r_{k} \sim N \left( 0 , {{1} \over {n}} \left[ 1 + 2 \sum_{j=1}^{\infty} \rho_{j} (X_{t}) \rho_{j} (Y_{t} ) \right] \right) $$ Here $\rho_{j}$ denotes the autocorrelation function. Once prewhitening is done, $\rho_{j} \left( \tilde{X} \right) = 0$ for $j \ne 0$, so the test of the null hypothesis $H_{0}$ of independence uses $$ r_{k} \sim N \left( 0 , {{1} \over {n}} \right) $$ In short, prewhitening does not change the form of the hypothesis test; it just makes the null variance exactly $1/n$, so the usual $\pm 2 / \sqrt{n}$ bounds apply.
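The null distribution translates directly into the familiar cutoffs. A sketch of the test on the prewhitened residuals of two independent simulated series (variable names illustrative):

```r
# Testing the prewhitened CCF against +/- 1.96/sqrt(n): for independent
# series, only ~5% of lags should exceed the bound by chance. (Simulated.)
set.seed(3)
n <- 300
x <- arima.sim(model = list(ar = 0.8), n = n)
y <- arima.sim(model = list(ar = 0.8), n = n)  # independent of x

x_tilde <- as.numeric(na.omit(ar(x)$resid))
y_tilde <- as.numeric(na.omit(ar(y)$resid))
m <- min(length(x_tilde), length(y_tilde))

r <- ccf(tail(x_tilde, m), tail(y_tilde, m), plot = FALSE)
bound <- 1.96 / sqrt(m)
mean(abs(r$acf) > bound)  # rejection rate; roughly the 5% level under H0
```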

Code

library(TSA)

# Bluebird potato chip data: log price and log sales
data("bluebird")
win.graph(6,5); plot(bluebird,yax.flip = T)

# Sample CCF of the raw series
win.graph(6,4)
ccf(bluebird[,1],bluebird[,2],main="CCF of price and sales",ylab="CCF")

# CCF after prewhitening
win.graph(6,4)
prewhiten(bluebird[,1],bluebird[,2],main="Prewhitened CCF of price and sales",ylab="CCF")

# Regression of log sales on price, with studentized residuals
out<-lm(log.sales~price,data=bluebird); summary(out); plot(rstudent(out))

  1. Cryer. (2008). Time Series Analysis: With Applications in R (2nd Edition): p268. ↩︎

  2. Cryer. (2008). Time Series Analysis: With Applications in R (2nd Edition): p265. ↩︎