
Partial Autocorrelation Function

Definition 1

Let $\left\{ Y_{t} \right\}_{t=1}^{n}$ be a stochastic process, and for lag $k$, let the residuals obtained by regressing $Y_{t}$ on $Y_{t-1}, \cdots , Y_{t-(k-1)}$ be $\widehat{e_{t}}$, and the residuals obtained by regressing $Y_{t-k}$ on $Y_{t-1}, \cdots , Y_{t-(k-1)}$ be $\widehat{e_{t-k}}$.

  1. The following defined $\phi_{kk}$ is referred to as the partial autocorrelation function (PACF) at lag $k$. $$ \phi_{kk} := \text{cor} ( \widehat{e_{t}} , \widehat{e_{t-k}} ) $$
  2. The following defined $\widehat{ \phi_{kk} }$ is referred to as the sample partial autocorrelation function (sPACF) at lag $k$. $$ \widehat{ \phi_{kk} } := {{ r_{k} - \sum_{j=1}^{k-1} \phi_{(k-1),j} r_{k-j} } \over { 1 - \sum_{j=1}^{k-1} \phi_{(k-1),j} r_{j} }} \\ \phi_{k,j} := \phi_{(k-1),j} - \phi_{kk} \phi_{(k-1),(k-j)} $$
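The Levinson-Durbin recursion above can be implemented directly. The following is a minimal Python sketch (the post's own code is in R; the function and variable names here are my own): it takes the sample autocorrelations $r_1, \cdots, r_m$ and returns $\widehat{\phi_{kk}}$ for each lag.

```python
import numpy as np

def pacf_durbin_levinson(r):
    """Sample PACF via the Levinson-Durbin recursion.

    r : array of sample autocorrelations r_1, ..., r_m.
    Returns an array whose (k-1)-th entry is the sample PACF at lag k.
    """
    m = len(r)
    pacf = np.zeros(m)
    phi_prev = np.zeros(0)  # coefficients phi_{(k-1),j}, j = 1, ..., k-1
    for k in range(1, m + 1):
        rk = r[k - 1]
        if k == 1:
            phi_kk = rk  # at lag 1 the PACF is just r_1
            phi_new = np.array([phi_kk])
        else:
            # numerator:   r_k - sum_j phi_{(k-1),j} r_{k-j}
            num = rk - np.sum(phi_prev * r[k - 2::-1])
            # denominator: 1 - sum_j phi_{(k-1),j} r_j
            den = 1.0 - np.sum(phi_prev * r[:k - 1])
            phi_kk = num / den
            # update: phi_{k,j} = phi_{(k-1),j} - phi_kk * phi_{(k-1),(k-j)}
            phi_new = np.concatenate([phi_prev - phi_kk * phi_prev[::-1], [phi_kk]])
        pacf[k - 1] = phi_kk
        phi_prev = phi_new
    return pacf
```

Feeding it the exact AR(1) autocorrelations $r_k = \phi^k$ yields $\phi$ at lag 1 and zero at every later lag, as the theory predicts.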

Description

The partial autocorrelation function captures the correlation between $Y_{t}$ and $Y_{t-k}$ alone, after eliminating the influence of the intervening $Y_{t-1}, \cdots , Y_{t-(k-1)}$. Although the definition may look complicated with its sudden mention of regression analysis, the concept is actually simple. Consider $\widehat{e_{t}}$ first. Regressing $Y_{t}$ on $Y_{t-1}, \cdots , Y_{t-(k-1)}$ means finding the values of $\beta_{1} , \cdots , \beta_{k-1}$ that fit the equation $$ Y_{t} = \beta_{1} Y_{t-1} + \cdots + \beta_{k-1} Y_{t-(k-1)} + \widehat{e_{t}} $$ Rewriting, $$ \widehat{e_{t}} = Y_{t} - \left( \beta_{1} Y_{t-1} + \cdots + \beta_{k-1} Y_{t-(k-1)} \right) $$ so the part of $Y_{t}$ that can be explained by $Y_{t-1}, \cdots , Y_{t-(k-1)}$ has been removed. Similarly, $\widehat{e_{t-k}}$ has had any portion explained by $Y_{t-1}, \cdots , Y_{t-(k-1)}$ removed, so computing $\text{cor} ( \widehat{e_{t}} , \widehat{e_{t-k}} )$ amounts to examining the correlation between $Y_{t}$ and $Y_{t-k}$ alone, free of $Y_{t-1}, \cdots , Y_{t-(k-1)}$. This focus on only the variables of interest is why the name 'partial' autocorrelation function fits. [ NOTE: Despite the simplicity of the concept, calculating the sPACF was quite challenging until Levinson and Durbin proposed a method for computing $\widehat{ \phi_{kk} }$ recursively. ]
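The two-regression definition can be checked numerically. Below is a small Python sketch (a hypothetical simulation of my own, not the post's R code): it simulates an AR(1) series, regresses $Y_{t}$ and $Y_{t-2}$ on the intervening $Y_{t-1}$, and correlates the two residual series. For an AR(1) process the resulting lag-2 PACF should be near zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# simulate an AR(1) series Y_t = 0.6 Y_{t-1} + e_t (hypothetical example)
n, phi = 5000, 0.6
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + e[t]

# PACF at lag k = 2 by the definition: regress Y_t and Y_{t-2}
# on Y_{t-1}, then correlate the two residual series.
k = 2
Yt, Ymid, Ytk = y[k:], y[k - 1:-1], y[:-k]
X = np.column_stack([np.ones(len(Ymid)), Ymid])   # intercept + Y_{t-1}
e_t = Yt - X @ np.linalg.lstsq(X, Yt, rcond=None)[0]
e_tk = Ytk - X @ np.linalg.lstsq(X, Ytk, rcond=None)[0]
pacf_2 = np.corrcoef(e_t, e_tk)[0, 1]
# for AR(1), pacf_2 should be close to zero
```

The lag-1 sample autocorrelation of the same series lands near $\phi = 0.6$, while the residual correlation at lag 2 is essentially zero, matching the cut-off behavior of the PACF for an AR(1) model.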

Mathematical Explanation

Mathematically, if $Y_{t}$ comes from an $AR(p)$ model, so that $\displaystyle Y_{t} = \sum_{k=1}^{p} \phi_{k} Y_{t-k} + e_{t}$, then computing the coefficient $\phi_{k}$ of $Y_{t-k}$ while excluding the other variables helps identify the $AR(p)$ model.

The sPACF $\widehat{\phi_{kk}}$ is an estimator of the PACF $\phi_{kk}$, and if $Y_{t}$ comes from an $AR(p)$ model, then for $k>p$ it approximately follows the normal distribution $\displaystyle N \left( 0 , {{ 1 } \over { n }} \right)$: $$ \widehat{\phi_{kk}} \sim N \left( 0 , {{ 1 } \over { n }} \right) $$ This is what gets used for hypothesis testing.
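This asymptotic claim can be checked with a small Monte Carlo sketch, written here in Python rather than the post's R (the setup is my own): for white noise ($p = 0$), the lag-1 sample PACF equals $r_{1}$, and its standard deviation across many simulated series should be close to $1/\sqrt{n}$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo sketch: for white noise (an AR(0) process), the lag-1
# sample PACF equals r_1 and should be roughly N(0, 1/n).
n, reps = 200, 2000
vals = np.empty(reps)
for i in range(reps):
    y = rng.standard_normal(n)
    yc = y - y.mean()
    vals[i] = (yc[1:] @ yc[:-1]) / (yc @ yc)  # r_1 = lag-1 sample PACF

print(vals.std(), 1 / np.sqrt(n))  # the two values should be close
```

With $n = 200$ the theoretical standard error is $1/\sqrt{200} \approx 0.071$, and the simulated standard deviation comes out very close to it.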

Test

Given $\displaystyle Y_{t} = \sum_{k=1}^{p} \phi_{k} Y_{t-k} + e_{t}$, for each lag $k = 1 , \cdots , p$:

  • $H_{0}$: $\phi_{kk} = 0$, meaning, $Y_{t}$ has no partial autocorrelation at lag $k$.
  • $H_{1}$: $\phi_{kk} \ne 0$, meaning, $Y_{t}$ has a partial autocorrelation at lag $k$.

Interpretation

Under the null hypothesis, $p=0$ and $\widehat{\phi_{kk}} \sim N \left( 0 , {{ 1 } \over { n }} \right)$ are assumed, so the standard error is $\displaystyle {{1} \over {\sqrt{n}}}$. Therefore, to test at significance level $\alpha$, check whether $| \widehat{\phi_{kk}} |$ exceeds the critical value $\displaystyle {{ z_{1 - \alpha/2} } \over { \sqrt{n} }}$. If it does, the lag is considered significant; if not, there is deemed to be no partial autocorrelation at that lag.
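This decision rule is simple enough to sketch in a few lines. The following Python helper (my own illustration, not part of the post's R code) flags every lag whose sample PACF exceeds the $z_{1-\alpha/2}/\sqrt{n}$ bound.

```python
import math
from statistics import NormalDist

def significant_lags(pacf, n, alpha=0.05):
    """Return the lags whose sample PACF exceeds z_{1-alpha/2}/sqrt(n).

    pacf  : sequence of sample PACF values, index 0 corresponding to lag 1.
    n     : length of the observed series.
    alpha : significance level (0.05 gives the familiar 1.96 bound).
    """
    bound = NormalDist().inv_cdf(1 - alpha / 2) / math.sqrt(n)
    return [k + 1 for k, v in enumerate(pacf) if abs(v) > bound]
```

For example, with $n = 100$ the bound is about $1.96/10 = 0.196$, so a sample PACF of $(0.6, 0.02, -0.01)$ flags only lag 1, which is the pattern expected from an AR(1) model.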

Practice

(Figure: ARIMA estimation output for the ar1.s data)

The ar1.s data in the TSA package is simulated from an $AR(1)$ model. When fitting an actual ARIMA model, it is also important to check whether the absolute value of an estimate exceeds twice its standard error before considering it significant.

(Figure: PACF correlogram of ar1.s)

Additionally, the pacf() function used with the TSA package draws a correlogram over various $k$, as shown above. Nothing needs to be computed by hand: if a bar crosses the line, the lag is significant; if not, it is considered insignificant. The bounds are typically drawn at the $5 \%$ significance level.

(Figure: the correlogram with the $1.96 / \sqrt{n}$ bound drawn in red)

Drawing the bound directly, as shown above, is a recommended way to verify one's understanding of hypothesis testing with the partial autocorrelation function. It is only one line of R code, and running it even once makes it easy to accept that $\widehat{\phi}_{kk}$ approximately follows a normal distribution with standard error $\displaystyle \text{se} ( \widehat{\phi}_{kk} ) = {{1} \over {\sqrt{n}}}$.

Code

library(TSA)
data(ar1.s); win.graph(6,4); pacf(ar1.s)  # draw the sample PACF correlogram
arima(ar1.s, order=c(1,0,0))  # fit an AR(1) model
abline(h=1.96*1/sqrt(length(ar1.s)), col='red')  # 5% significance bound

See Also


  1. Cryer. (2008). Time Series Analysis: With Applications in R(2nd Edition): p112. ↩︎