
ARCH Effect

Definition [1]

The term ARCH effect is short for 'AutoRegressive Conditional Heteroscedasticity.' The name can be read literally: the conditional variance of the series (the heteroscedasticity) depends autoregressively on its own past, so the term is used as is without further paraphrase.

Description

In simpler terms, if the volatility of the data changes over time and that change can be explained by past observations, the data are said to exhibit an ARCH effect.

The model that describes this ARCH effect statistically is called the ARCH model, and in practice its generalized version, the GARCH model, is usually used.
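For reference, in one common parameterization (notation varies slightly between textbooks such as Cryer (2008)), the return is written as $r_{t} = \sigma_{t}\varepsilon_{t}$, where $\varepsilon_{t}$ is i.i.d. noise with mean zero and variance one, and the conditional variance $\sigma_{t}^{2}$ follows

$$\sigma_{t}^{2} = \omega + \sum_{i=1}^{q} \alpha_{i} r_{t-i}^{2} \qquad \text{(ARCH($q$))}$$

$$\sigma_{t}^{2} = \omega + \sum_{i=1}^{q} \alpha_{i} r_{t-i}^{2} + \sum_{j=1}^{p} \beta_{j} \sigma_{t-j}^{2} \qquad \text{(GARCH($p,q$))}$$

The ARCH effect described above corresponds to the $\alpha_{i}$ terms: past squared returns feed into today's variance.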

Practice

(Figure: log returns of the DAX series from EuStockMarkets)

The graph above shows the returns of the DAX series, taken from the built-in dataset EuStockMarkets. At a glance it already looks as if there is an ARCH effect, but let us try to detect it from the returns more formally.
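Here 'returns' means log returns, which is exactly what the returnize helper in the Code section below computes: if $P_{t}$ is the index level,

$$r_{t} = \log P_{t} - \log P_{t-1} = \nabla \log P_{t}.$$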

(Figure: ACF and PACF of the DAX returns)

The first method that comes to mind is to apply the ACF and PACF to the returns and look for autocorrelation. However, these results by themselves do not tell us whether an ARCH effect is present: the returns can be nearly uncorrelated even while their volatility depends strongly on the past.

At this point we need to think differently. The goal is to see how the 'volatility' changes. An ARMA model cares about the actual ups and downs of the values, so the sign of each movement carries information; here, however, all we care about is the 'magnitude of change.'

(Figure: absolute values of the DAX returns)

Plotting the absolute values of the returns makes things much easier to read, because we only need to look at the 'height' of the series, not its 'thickness.' Let us apply the ACF and PACF here as well.

(Figure: ACF and PACF of the absolute returns)

Autocorrelation that was not visible before now stands out clearly. It therefore seems reasonable to assume that the data exhibit an ARCH effect.

(Figure: ACF and PACF of the squared returns)

Of course, squaring the data to make it positive is another option. At first glance the result looks different from the absolute-value case, but the conclusion is the same: 'there seems to be an ARCH effect,' so the differences need not concern us. For formal analysis or when developing the theory, however, squaring is the usual choice: we are looking for heteroscedasticity in the return $r_{t}$, and since variance is written as $\sigma^{2}$, the squared return $r_{t}^{2}$ is its natural sample counterpart.
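To spell out why squaring is natural: with the zero-mean representation $r_{t} = \sigma_{t}\varepsilon_{t}$ from above,

$$\mathrm{E}\!\left[r_{t}^{2} \mid \mathcal{F}_{t-1}\right] = \sigma_{t}^{2}\,\mathrm{E}\!\left[\varepsilon_{t}^{2}\right] = \sigma_{t}^{2},$$

so autocorrelation in $r_{t}^{2}$ is direct evidence that the conditional variance depends on the past.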

(Figure: McLeod-Li test p-values for the DAX returns)

In addition to these methods, the McLeod-Li test can be considered. If the p-values drop below the red dashed line (the significance level), this suggests the presence of an ARCH effect. Here the test was applied to the DAX returns, and the results strongly indicate an ARCH effect.
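As a rough cross-check, the McLeod-Li statistic is essentially a Ljung-Box portmanteau statistic computed on the squared series, so base R's Box.test gives a similar single-lag check (the lag of 12 below is an arbitrary choice for illustration):

# Ljung-Box test on the squared returns; the McLeod-Li test computes
# essentially this statistic over a range of lags.
r.DAX <- diff(log(EuStockMarkets[, "DAX"]))
Box.test(r.DAX^2, lag = 12, type = "Ljung-Box")  # a small p-value points to an ARCH effect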

Thus, assuming that there is an ARCH effect, what remains is to analyze the series with a GARCH model (a minimal sketch of such a fit follows the code listing below).

Code

library(TSA)                                   # for McLeod.Li.test()
returnize <- function(data) { return(diff(log(data))) }   # log returns

DAX   <- ts(EuStockMarkets[, 1], start = 1)    # DAX column of the built-in dataset
r.DAX <- returnize(DAX)

win.graph(6, 3); par(mfrow = c(1, 2))
acf(r.DAX, main = 'ACF of returns')
pacf(r.DAX, main = 'PACF of returns')

win.graph(6, 4)
plot(abs(r.DAX), type = 'h', main = 'Absolute values of DAX returns')

win.graph(6, 3); par(mfrow = c(1, 2))
acf(abs(r.DAX), main = 'ACF of absolute returns')
pacf(abs(r.DAX), main = 'PACF of absolute returns')

win.graph(6, 3); par(mfrow = c(1, 2))
acf(r.DAX^2, main = 'ACF of squared returns')
pacf(r.DAX^2, main = 'PACF of squared returns')

win.graph(6, 3)
McLeod.Li.test(y = r.DAX)
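As a possible next step (not part of the original listing), a minimal GARCH(1,1) fit could look like the sketch below; it assumes the tseries package, though fGarch or rugarch would serve equally well.

# Minimal GARCH(1,1) fit, reusing r.DAX from the listing above.
library(tseries)
fit <- garch(r.DAX, order = c(1, 1))  # order = c(GARCH order p, ARCH order q)
summary(fit)                          # coefficient estimates and residual diagnostics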

  1. Cryer (2008). Time Series Analysis: With Applications in R (2nd ed.), p. 283. ↩︎