
# Volatility 201

This is the third entry in our ongoing series on **volatility modeling**. In an earlier issue, we introduced the broad concept of volatility in financial time series, defined its general characteristics (e.g. clustering, mean-reversion) and identified important volatility terms in financial time series. We explored holding periods, volatility scaling, the serial correlation assumption, multi-period volatility (i.e. term structure) and discussed a few non-parametric methods to estimate volatility using historical data.

In this issue, we’ll take the prior discussion further and develop an understanding of **autoregressive conditional heteroskedasticity (ARCH)** volatility modeling.

### Why should you care?

Once again, the concepts discussed here are pivotal to a solid understanding of financial time series volatility.

### Background

Let’s consider the dependent asset’s log return time series $r_t$. First, let’s model the return as the sum of two components:

$$r_t = \mu_t + a_t$$

Where

- $\mu_t$ is the conditional mean (non-stochastic) component
- $a_t$ is the innovation, error-term or shock (stochastic) component

To compute the multi-period volatility:

$$\mathrm{Var}\left(\sum_{k=1}^{n} r_{t+k}\right) = \sum_{k=1}^{n}\mathrm{Var}(r_{t+k}) + 2\sum_{j=1}^{n-1}\sum_{k=j+1}^{n}\mathrm{Cov}(r_{t+j}, r_{t+k})$$

And

$$\mathrm{Cov}(r_{t+j}, r_{t+k}) = E\left[(r_{t+j}-\mu)(r_{t+k}-\mu)\right]$$

Where

- $\mu$ is the unconditional (long-run) mean of the time series

But,

$$r_t - \mu = (\mu_t - \mu) + a_t$$

Thus:

$$\mathrm{Cov}(r_{t+j}, r_{t+k}) = (\mu_{t+j}-\mu)(\mu_{t+k}-\mu) + E\left[a_{t+j}\,a_{t+k}\right]$$

In sum, the covariance is a function of the conditional mean and unconditional mean.

The multi-period volatility (term structure) depends not only on the conditional volatility of each period, but on the conditional mean as well.

### Assumption 1:

Let's assume that the conditional mean is zero:

$$\mu_t = \mu = 0 \quad\Rightarrow\quad r_t = a_t$$

And that the returns are serially uncorrelated:

$$\mathrm{Cov}(r_t, r_s) = 0 \quad \text{for all } t \neq s$$

**Note:** The assumption is not contrary to what we see in financial time series, as the majority of time series don’t possess significant mean or any serial correlation; at the same time, they all exhibit time-varying volatility (heteroskedasticity).

Next, let’s assume the innovation, error term or shock term can be represented as follows:

$$a_t = \sigma_t \epsilon_t$$

Where

- $\sigma_t$ is the conditional volatility (scalar) at time $t$
- $\epsilon_t$ is a random variable with zero mean ($E[\epsilon_t] = 0$) and unit variance ($\mathrm{Var}(\epsilon_t) = 1$)
- $a_t$ is serially uncorrelated, but still dependent (through higher-order moments)

### Assumption 2:

Let’s assume the random variable ($\epsilon_t$) is identically distributed over time (not necessarily Gaussian).

#### Definition:

Let’s define $x_t$ as the square of the mean-adjusted asset’s returns:

$$x_t = a_t^2 = (r_t - \mu)^2$$

And

$$x_t = \sigma_t^2\,\epsilon_t^2$$

Let’s now examine the methods used in earlier issues to estimate volatility:

#### 1. Equal-weighted moving standard deviation

$$\hat{\sigma}_t^2 = \frac{1}{m}\sum_{k=1}^{m} x_{t-k}$$

#### 2. Exponentially weighted moving average (EWMA)

$$\hat{\sigma}_t^2 = (1-\lambda)\,x_{t-1} + \lambda\,\hat{\sigma}_{t-1}^2$$
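As a concrete illustration, here is a minimal numpy sketch of both estimators applied to a simulated return series (the window length `m = 20` and smoothing constant `lam = 0.94` are illustrative choices, not values prescribed in this issue):

```python
import numpy as np

rng = np.random.default_rng(42)
r = rng.normal(0.0, 0.01, size=500)   # toy daily log returns
x = (r - r.mean()) ** 2               # squared mean-adjusted returns

# 1. Equal-weighted moving variance over an m-day window
m = 20
eq_var = np.convolve(x, np.ones(m) / m, mode="valid")

# 2. EWMA variance: sigma2_t = (1 - lam) * x_{t-1} + lam * sigma2_{t-1}
lam = 0.94                            # RiskMetrics-style smoothing constant
ewma_var = np.empty_like(x)
ewma_var[0] = x[:m].mean()            # seed with a simple average
for t in range(1, len(x)):
    ewma_var[t] = (1 - lam) * x[t - 1] + lam * ewma_var[t - 1]

eq_vol = np.sqrt(eq_var)
ewma_vol = np.sqrt(ewma_var)
```

Both estimators are backward-looking weighted averages of the squared series; they differ only in how the weights decay with the lag.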

### Probability Distribution of $x_t$

So far, we have not assumed any functional form for the probability distribution of $x_t$, but here are a few observations about the candidate distribution function:

- $x_t \geq 0$ (i.e. $P(x_t < 0) = 0$)
- The distribution is asymmetric
- The distribution is positively (aka right) skewed
- It is the distribution of the squared innovations or shocks

#### Derivation

Let $f(a)$ be the probability density function of the innovations, with zero mean and variance $\sigma^2$.

Assume $f(a)$ is a symmetrical distribution.
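Under these assumptions, the change of variables $x = a^2$ gives the density of the squared series (a standard result, sketched here for reference):

```latex
% Change of variables: x = a^2, so a = \pm\sqrt{x} and |da/dx| = \frac{1}{2\sqrt{x}}
g(x) = \frac{1}{2\sqrt{x}}\left[ f\!\left(\sqrt{x}\right) + f\!\left(-\sqrt{x}\right) \right]
     = \frac{f\!\left(\sqrt{x}\right)}{\sqrt{x}}, \qquad x > 0
```

where the second equality uses the symmetry of $f$. Note that $g$ is supported only on the positive half-line, consistent with the asymmetry and right skew noted above.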

#### Case 1

Let’s use the standardized residuals of a Gaussian distribution: $\epsilon_t \sim N(0, 1)$.

The distribution of the squared values of a Gaussian-distributed random variable is chi-square with one degree of freedom: $\epsilon_t^2 \sim \chi^2(1)$.
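We can check this claim empirically; below is a small scipy sketch (sample size and seed are arbitrary) comparing squared standard Gaussian draws against the $\chi^2(1)$ distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)      # standardized Gaussian residuals
x = z ** 2                            # squared residuals

# Kolmogorov-Smirnov test of x against the chi-square(1) distribution;
# a small statistic means the empirical and theoretical CDFs agree closely
ks_stat, p_value = stats.kstest(x, "chi2", args=(1,))
```

The $\chi^2(1)$ distribution has mean 1 and variance 2, which the sample moments of `x` should match closely.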

#### Case 2

Let’s use the standardized residuals of the student’s t distribution.

The algebra gets more involved here, but the same principle applies: we relate the conditional probability distribution of the squared time series to the conditional probability distribution of the original series.

### Modeling

So far, we have examined the properties of the squared innovations at a single time instance (i.e. $t$), but how do we describe their evolution over time?

Similar to ARMA/ARIMA modeling, we examine the ACF/PACF correlogram for the squared time series in an attempt to identify a dependency between lagged time series, and propose a model.

There are two main categories of statistical volatility models:

- Deterministic form – an exact function governs the evolution of volatility (e.g. ARCH, GARCH, EGARCH, etc.)

- Stochastic form – use of a stochastic equation, i.e. allowing an innovation/shock term in the volatility equation (e.g. stochastic volatility models)

**Note**: The conditional volatility values are computed indirectly, not directly observed, which further complicates the process.

### Autoregressive Conditional Heteroskedasticity (ARCH) Model

The first model that provides a systematic framework for volatility modeling is Engle’s autoregressive conditional heteroskedasticity **(ARCH)** model (1982). This is a good model to start with due to its simplicity and relevance to other models.

The **autoregressive conditional heteroskedasticity (ARCH)** model treats the squared time series as an AR(p) process, but without the error terms or the shocks:

$$\sigma_t^2 = \alpha_0 + \alpha_1 a_{t-1}^2 + \cdots + \alpha_p a_{t-p}^2$$

Alternatively, we can view the **ARCH** model as a weighted moving average (WMA) of the squared time series plus a constant.

The coefficients’ values must meet some regularity conditions to ensure that (1) the conditional variance is always positive, and (2) the unconditional variance is finite and positive: $\alpha_0 > 0$, $\alpha_i \geq 0$, and $\sum_{i=1}^{p}\alpha_i < 1$.

Volatility clustering: the ARCH model captures the volatility clustering observed in asset returns: a large past squared shock $a_{t-1}^2$ implies a large conditional variance $\sigma_t^2$ for the mean-corrected return $a_t$. Consequently, a large shock tends to be followed by another large value (in absolute terms) due to the large variance, and vice versa for smaller shocks.

### ARCH(1) Model

$$\sigma_t^2 = \alpha_0 + \alpha_1 a_{t-1}^2$$

Thus, for a positive conditional variance and a finite unconditional variance, we require $\alpha_0 > 0$ and $0 \leq \alpha_1 < 1$; the unconditional variance is then $\mathrm{Var}(a_t) = \alpha_0/(1-\alpha_1)$.
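To see the clustering behavior the model produces, here is a short simulation sketch of an ARCH(1) process (the parameter values $\alpha_0 = 0.05$, $\alpha_1 = 0.5$ are illustrative, not taken from this issue):

```python
import numpy as np

rng = np.random.default_rng(7)

# ARCH(1): sigma2_t = alpha0 + alpha1 * a_{t-1}^2, with alpha0 > 0, 0 <= alpha1 < 1
alpha0, alpha1 = 0.05, 0.5
n = 5000
a = np.zeros(n)
sigma2 = np.zeros(n)
sigma2[0] = alpha0 / (1 - alpha1)          # start at the long-run variance
a[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, n):
    sigma2[t] = alpha0 + alpha1 * a[t - 1] ** 2
    a[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

long_run_var = alpha0 / (1 - alpha1)       # = 0.1

# Volatility clustering shows up as serial correlation in the squared series
ac1 = np.corrcoef(a[:-1] ** 2, a[1:] ** 2)[0, 1]
```

The sample variance of the simulated series hovers near the long-run value $\alpha_0/(1-\alpha_1)$, and the squared series is noticeably autocorrelated even though the returns themselves are serially uncorrelated.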

### Model's Parameters

In the earlier issue (Volatility 101), we did not assume any distribution for the time series, so we used the root mean square error (RMSE) as our objective function, searching for a set of parameter values that minimize the RMSE.

In the ARCH model, the mean-corrected returns ($a_t$) are inter-dependent (e.g. clustering) and are not identically distributed, so how do we go about estimating an efficient set of values for its parameters?

Assuming the innovations are conditionally Gaussian, the conditional density of each observation is:

$$f(a_t \mid F_{t-1}) = \frac{1}{\sqrt{2\pi\sigma_t^2}}\exp\left(-\frac{a_t^2}{2\sigma_t^2}\right)$$

And the log-likelihood function of the sample is:

$$\ln L(\alpha_0, \alpha_1, \ldots, \alpha_p) = -\frac{1}{2}\sum_{t}\left(\ln(2\pi) + \ln\sigma_t^2 + \frac{a_t^2}{\sigma_t^2}\right)$$

For an initial set of $\{\alpha_i\}$ values, we recursively compute the conditional volatility values and revise the alpha values in an effort to maximize the overall likelihood.
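A minimal sketch of this estimation loop, assuming Gaussian innovations and using scipy's general-purpose optimizer rather than any particular econometrics package (all parameter and seed values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, a):
    """Gaussian negative log-likelihood of an ARCH(1) model."""
    alpha0, alpha1 = params
    sigma2 = np.full_like(a, a.var())  # seed sigma2_1 with the sample variance
    for t in range(1, len(a)):
        sigma2[t] = alpha0 + alpha1 * a[t - 1] ** 2
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(sigma2) + a ** 2 / sigma2)

# Simulate ARCH(1) data with known parameters, then try to recover them
rng = np.random.default_rng(3)
alpha0_true, alpha1_true = 0.1, 0.4
n = 4000
a = np.zeros(n)
s2 = alpha0_true / (1 - alpha1_true)
for t in range(n):
    a[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = alpha0_true + alpha1_true * a[t] ** 2

result = minimize(neg_log_likelihood, x0=[0.05, 0.2], args=(a,),
                  bounds=[(1e-6, None), (0.0, 0.999)])
alpha0_hat, alpha1_hat = result.x
```

The bounds enforce the regularity conditions ($\alpha_0 > 0$, $0 \leq \alpha_1 < 1$) during the search, which keeps every candidate conditional variance positive.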

### Model Checking

The **autoregressive conditional heteroskedasticity (ARCH)** model does not assume the mean-corrected returns $a_t$ are i.i.d., but the standardized residuals $\epsilon_t = a_t/\sigma_t$ are i.i.d.

In short, we need to examine the standardized residuals for independence (e.g. the white-noise test and the ARCH effect test) and for the assumed distribution (e.g. normality).

### Model Extension

In some applications, it is more appropriate to assume that standardized residuals follow a heavy-tailed distribution such as the student’s t distribution or the generalized error distribution (GED).

This extension affects the computation of the log-likelihood function (LLF) (using the alternative probability density function), and the interpretation of conditional volatility.

To illustrate, let’s take the student’s t-distribution for $\epsilon_t$:

$$f(\epsilon_t \mid \nu) = \frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)\sqrt{\pi(\nu-2)}}\left(1 + \frac{\epsilon_t^2}{\nu-2}\right)^{-\frac{\nu+1}{2}}$$

Where

- $f$ is the standardized student’s t distribution (zero mean and unit variance)
- $\nu$ is the degrees of freedom of the student’s distribution ($\nu > 2$)

To yield a standardized t-distribution with zero skew and finite excess kurtosis, we require $\nu > 4$.

The standardized t-distribution exhibits a fat tail (leptokurtosis) with excess kurtosis $= \frac{6}{\nu-4}$.

### Forecasting

For the first $p$ steps out-of-sample, the forecast formula includes a mix of squared residuals $a_t^2$ and estimated variances $\sigma_t^2$.

For a longer forecast horizon, the estimated conditional volatility converges to a long-term value determined by the model parameters; for instance, ARCH(1) has the following long-run variance:

$$\sigma^2 = \frac{\alpha_0}{1-\alpha_1}$$
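The forecast recursion for ARCH(1) and its convergence to the long-run variance can be sketched as follows (parameter values and the last observed residual are illustrative):

```python
import numpy as np

def arch1_forecast(alpha0, alpha1, last_a, horizon):
    """Multi-step conditional variance forecasts for an ARCH(1) model:
    sigma2(1) = alpha0 + alpha1 * a_t^2, then
    sigma2(l) = alpha0 + alpha1 * sigma2(l - 1) for l > 1."""
    forecasts = []
    s2 = alpha0 + alpha1 * last_a ** 2     # one-step-ahead forecast
    for _ in range(horizon):
        forecasts.append(s2)
        s2 = alpha0 + alpha1 * s2          # iterate the recursion
    return np.array(forecasts)

alpha0, alpha1 = 0.1, 0.4
fc = arch1_forecast(alpha0, alpha1, last_a=1.0, horizon=20)
long_run = alpha0 / (1 - alpha1)           # long-run variance = 1/6
```

Starting above the long-run level, each successive forecast shrinks geometrically (at rate $\alpha_1$) toward $\alpha_0/(1-\alpha_1)$.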

### ARCH Effect

An ARCH effect is a characteristic used to describe whether a given time series exhibits serial correlation among its squared values.

The original test conducted by Engle (1982) uses the Lagrange multiplier (LM) principle and ordinary least squares regression.

Alternatively, we can use the Ljung-Box test on the **squared (mean-adjusted) time series**: compute the modified $Q(m)$ statistic and test whether the data exhibit significant serial correlation.
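Below is a hand-rolled sketch of the Ljung-Box $Q(m)$ test applied to squared series, shown on simulated data (a production workflow would typically use a statistics library implementation instead):

```python
import numpy as np
from scipy import stats

def ljung_box(series, m):
    """Ljung-Box Q(m) statistic and p-value for serial correlation."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    n = len(x)
    denom = np.sum(x ** 2)
    q = 0.0
    for k in range(1, m + 1):
        rho_k = np.sum(x[k:] * x[:-k]) / denom   # lag-k autocorrelation
        q += rho_k ** 2 / (n - k)
    q *= n * (n + 2)
    p_value = 1.0 - stats.chi2.cdf(q, df=m)      # Q(m) ~ chi-square(m) under H0
    return q, p_value

rng = np.random.default_rng(1)

# i.i.d. Gaussian noise: no ARCH effect expected in the squared series
r = rng.standard_normal(2000)
q_iid, p_iid = ljung_box((r - r.mean()) ** 2, m=10)

# ARCH(1) series: its squared values should show strong serial correlation
a = np.zeros(2000)
for t in range(1, 2000):
    a[t] = np.sqrt(0.1 + 0.5 * a[t - 1] ** 2) * rng.standard_normal()
q_arch, p_arch = ljung_box((a - a.mean()) ** 2, m=10)
```

A tiny p-value on the squared series signals an ARCH effect; for the i.i.d. series the test should fail to reject the no-correlation null.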

### Conclusion

In this paper, we built upon several bedrock lessons from earlier issues and constructed a general framework for volatility modeling. In the beginning, we looked into correlated returns and derived a relationship between term structure (i.e. multi-period) volatility and conditional means.

Next, assuming a mean-adjusted asset’s returns time series, we proceeded with our analysis of volatility. In practice, the models describing volatility evolution over time fall into two groups: (1) models with a deterministic functional form (e.g. **ARCH**, GARCH, etc.), and (2) stochastic models, which permit the volatility equation to include a shock/innovation term.

Finally, we examined in depth a rather important model – the **autoregressive conditional heteroskedasticity (ARCH)** model (Engle 1982); a building-block model for many others (e.g. GARCH, EGARCH, etc.), which we will cover in future issues. The **ARCH** model can be thought of as a weighted moving average of the squared time series, with the weights constrained to yield a positive conditional variance and a finite long-run variance. Nevertheless, it provides little insight into the source of the volatility dynamics and treats positive and negative shocks identically, contrary to what has been observed and documented in financial time series.

### Attachments

The PDF version of this issue, along with the Excel spreadsheet, can be found below: