MA Model Examples
Site: Saylor Academy
Course: CS250: Python for Data Science
Book: MA Model Examples
Description
This tutorial provides several examples of MA models of various orders. In addition, the partial autocorrelation function (PACF) is introduced. The ACF and PACF are important tools for estimating the order of a model based on empirical data.
Overview
This week we'll look at a variety of topics in preparation for the full-scale look at ARIMA time series models that we'll do in the next few weeks. Topics this week are MA models, partial autocorrelation, and notational conventions.
Source: The Pennsylvania State University, https://online.stat.psu.edu/stat510/lesson/2 This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 License.
Moving Average (MA) Models
Time series models known as ARIMA models may include autoregressive terms and/or moving average terms. In Week 1, we learned that an autoregressive term in a time series model for the variable $x_t$ is a lagged value of $x_t$. For instance, a lag 1 autoregressive term is $x_{t-1}$ (multiplied by a coefficient). This lesson defines moving average terms.
A moving average term in a time series model is a past error (multiplied by a coefficient).
Let $w_t \overset{iid}{\sim} N(0, \sigma_w^2)$, meaning that the $w_t$ are identically, independently distributed, each with a normal distribution having mean 0 and the same variance.
The 1st order moving average model, denoted by MA(1), is
$$x_t = \mu + w_t + \theta_1 w_{t-1}$$
The 2nd order moving average model, denoted by MA(2), is
$$x_t = \mu + w_t + \theta_1 w_{t-1} + \theta_2 w_{t-2}$$
The qth order moving average model, denoted by MA(q), is
$$x_t = \mu + w_t + \theta_1 w_{t-1} + \theta_2 w_{t-2} + \cdots + \theta_q w_{t-q}$$
Note!
Many textbooks and software programs define the model with negative signs before the $\theta$ terms. This doesn't change the general theoretical properties of the model, although it does flip the algebraic signs of estimated coefficient values and (unsquared) $\theta$ terms in formulas for ACFs and variances. You need to check your software to verify whether negative or positive signs have been used in order to correctly write the estimated model. R uses positive signs in its underlying model, as we do here.
Theoretical Properties of a Time Series with an MA(1) Model
- Mean is $E(x_t) = \mu$
- Variance is $\text{Var}(x_t) = \sigma_w^2(1 + \theta_1^2)$
- Autocorrelation function (ACF) is
$$\rho_1 = \frac{\theta_1}{1 + \theta_1^2}, \qquad \rho_h = 0 \text{ for } h \ge 2$$
Note!
The only nonzero value in the theoretical ACF is at lag 1. All other autocorrelations are 0. Thus a sample ACF with a significant autocorrelation only at lag 1 is an indicator of a possible MA(1) model.
For interested students, proofs of these properties are in the appendix.
Example 2-1
Suppose that an MA(1) model is $x_t = 10 + w_t + 0.7 w_{t-1}$, where $w_t \overset{iid}{\sim} N(0, 1)$. Thus the coefficient $\theta_1 = 0.7$. The theoretical ACF is given by:
$$\rho_1 = \frac{0.7}{1 + 0.7^2} = 0.4698, \qquad \rho_h = 0 \text{ for all lags } h \ge 2$$
A plot of this ACF follows:
[Figure: theoretical ACF for an MA(1) with $\theta_1 = 0.7$]
The plot just shown is the theoretical ACF for an MA(1) with $\theta_1 = 0.7$. In practice, a sample won't usually provide such a clear pattern. Using R, we simulated n = 100 sample values using the model $x_t = 10 + w_t + 0.7 w_{t-1}$, where $w_t \overset{iid}{\sim} N(0, 1)$. For this simulation, a time series plot of the sample data follows. We can't tell much from this plot.
[Figure: time series plot of the simulated MA(1) data]
The sample ACF for the simulated data follows. We see a "spike" at lag 1 followed by generally non-significant values for lags past 1. Note that the sample ACF does not match the theoretical pattern of the underlying MA(1), which is that all autocorrelations for lags past 1 will be 0. A different sample would have a slightly different sample ACF than the one shown here, but would likely have the same broad features.
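The same comparison of theoretical and sample ACF can be sketched in Python (this course's language), using only NumPy. This is an illustrative sketch, not the lesson's R code: the helper `sample_acf` and the seed are our own choices.

```python
import numpy as np

# Theoretical lag-1 autocorrelation for an MA(1) with theta_1 = 0.7
theta1 = 0.7
rho1 = theta1 / (1 + theta1**2)  # 0.7 / 1.49, about 0.4698

# Simulate n = 100 values from x_t = 10 + w_t + 0.7 w_{t-1}, with w_t ~ iid N(0, 1)
rng = np.random.default_rng(42)
n = 100
w = rng.standard_normal(n + 1)
x = 10 + w[1:] + theta1 * w[:-1]

def sample_acf(series, max_lag):
    """Sample autocorrelations r_1, ..., r_max_lag (the quantities R's acf() plots)."""
    z = series - series.mean()
    denom = np.sum(z**2)
    return np.array([np.sum(z[h:] * z[:-h]) / denom for h in range(1, max_lag + 1)])

r = sample_acf(x, 10)  # r[0] should sit near rho1; later lags should hover near 0
```

A different seed gives a different sample ACF, but the broad feature (one clear spike at lag 1) persists.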
Theoretical Properties of a Time Series with an MA(2) Model
For the MA(2) model, theoretical properties are the following:
- Mean is $E(x_t) = \mu$
- Variance is $\text{Var}(x_t) = \sigma_w^2(1 + \theta_1^2 + \theta_2^2)$
- Autocorrelation function (ACF) is
$$\rho_1 = \frac{\theta_1 + \theta_1\theta_2}{1 + \theta_1^2 + \theta_2^2}, \qquad \rho_2 = \frac{\theta_2}{1 + \theta_1^2 + \theta_2^2}, \qquad \rho_h = 0 \text{ for } h \ge 3$$
Note!
The only nonzero values in the theoretical ACF are for lags 1 and 2. Autocorrelations for higher lags are 0. So, a sample ACF with significant autocorrelations at lags 1 and 2, but non-significant autocorrelations for higher lags, indicates a possible MA(2) model.
Example 2-2
Consider the MA(2) model $x_t = 10 + w_t + 0.5 w_{t-1} + 0.3 w_{t-2}$, where $w_t \overset{iid}{\sim} N(0, 1)$. The coefficients are $\theta_1 = 0.5$ and $\theta_2 = 0.3$. Because this is an MA(2), the theoretical ACF will have nonzero values only at lags 1 and 2.
Values of the two nonzero autocorrelations are:
$$\rho_1 = \frac{0.5 + 0.5 \times 0.3}{1 + 0.5^2 + 0.3^2} = 0.4851 \qquad \text{and} \qquad \rho_2 = \frac{0.3}{1 + 0.5^2 + 0.3^2} = 0.2239$$
A plot of the theoretical ACF follows:
[Figure: theoretical ACF for the MA(2) model]
As nearly always is the case, sample data won't behave quite so perfectly as theory. We simulated n = 150 sample values for the model $x_t = 10 + w_t + 0.5 w_{t-1} + 0.3 w_{t-2}$, where $w_t \overset{iid}{\sim} N(0, 1)$. The time series plot of the data follows. As with the time series plot for the MA(1) sample data, you can't tell much from it.
[Figure: time series plot of the simulated MA(2) data]
The sample ACF for the simulated data follows. The pattern is typical for situations where an MA(2) model may be useful. There are two statistically significant "spikes" at lags 1 and 2, followed by non-significant values for other lags. Note that due to sampling error, the sample ACF did not match the theoretical pattern exactly.
[Figure: sample ACF for the simulated MA(2) data]
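As a quick arithmetic check on Example 2-2, the two nonzero theoretical autocorrelations can be computed directly in Python (variable names are ours):

```python
# Theoretical ACF of the MA(2):  x_t = 10 + w_t + 0.5 w_{t-1} + 0.3 w_{t-2}
theta1, theta2 = 0.5, 0.3
denom = 1 + theta1**2 + theta2**2          # 1.34

rho1 = (theta1 + theta1 * theta2) / denom  # (0.5 + 0.15) / 1.34, about 0.4851
rho2 = theta2 / denom                      # 0.3 / 1.34, about 0.2239
```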
ACF for General MA(q) Models
A property of MA(q) models, in general, is that there are nonzero autocorrelations for the first q lags and autocorrelations = 0 for all lags > q.
Non-uniqueness of connection between values of $\theta_1$ and $\rho_1$ in MA(1) Model
In the MA(1) model, for any value of $\theta_1$, the reciprocal $1/\theta_1$ gives the same value for
$$\rho_1 = \frac{\theta_1}{1 + \theta_1^2}$$
As an example, use +0.5 for $\theta_1$, and then use 1/(0.5) = 2 for $\theta_1$. You'll get $\rho_1 = 0.4$ in both instances.
To satisfy a theoretical restriction called invertibility, we restrict MA(1) models to have values with absolute values less than 1. In the example just given, $\theta_1 = 0.5$ will be an allowable parameter value, whereas $\theta_1 = 2$ will not.
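The non-uniqueness is easy to verify numerically; a minimal Python sketch (the function name is ours):

```python
def ma1_rho1(theta):
    """Lag-1 autocorrelation of an MA(1) with coefficient theta."""
    return theta / (1 + theta**2)

# theta_1 = 0.5 and its reciprocal 1/0.5 = 2 give the same lag-1 autocorrelation
r_half = ma1_rho1(0.5)  # 0.5 / 1.25 = 0.4
r_two = ma1_rho1(2.0)   # 2 / 5 = 0.4
# Only theta_1 = 0.5 satisfies the invertibility restriction |theta_1| < 1.
```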
Invertibility of MA models
An MA model is said to be invertible if it is algebraically equivalent to a converging infinite-order AR model. By converging, we mean that the AR coefficients decrease to 0 as we move back in time.
Invertibility is a restriction programmed into time series software used to estimate the coefficients of models with MA terms. It’s not something that we check for in the data analysis. Additional information about the invertibility restriction for MA(1) models is given in the appendix.
Advanced Theory Note!
For an MA(q) model with a specified ACF, there is only one invertible model. The necessary condition for invertibility is that the $\theta$ coefficients have values such that the equation $1 + \theta_1 z + \theta_2 z^2 + \cdots + \theta_q z^q = 0$ has solutions for $z$ that fall outside the unit circle.
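That root condition can be checked numerically with NumPy's polynomial root finder. This is a sketch under the positive-sign convention used in this lesson; the helper name is ours:

```python
import numpy as np

def is_invertible_ma(thetas):
    """True if all roots of 1 + theta_1 z + ... + theta_q z^q lie outside the unit circle."""
    coeffs = [1.0] + list(thetas)
    # np.roots expects coefficients ordered from the highest power down to the constant
    roots = np.roots(coeffs[::-1])
    return bool(np.all(np.abs(roots) > 1))

# MA(1) with theta_1 = 0.7: the root z = -1/0.7 is outside the unit circle (invertible)
# MA(1) with theta_1 = 2:   the root z = -0.5 is inside the unit circle (not invertible)
```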
R Code for the Examples
In Example 1, we plotted the theoretical ACF of the model $x_t = 10 + w_t + 0.7 w_{t-1}$, and then simulated n = 150 values from this model and plotted the sample time series and the sample ACF for the simulated data. The R commands used to plot the theoretical ACF were:
acfma1=ARMAacf(ma=c(0.7), lag.max=10) # 10 lags of ACF for MA(1) with theta1 = 0.7
lags=0:10 #creates a variable named lags that ranges from 0 to 10.
plot(lags,acfma1,xlim=c(1,10), ylab="r",type="h", main = "ACF for MA(1) with theta1 = 0.7")
abline(h=0) #adds a horizontal axis to the plot
The first command determines the ACF and stores it in an object named acfma1 (our choice of name).
The plot command (the 3rd command) plots lags versus the ACF values for lags 1 to 10. The ylab parameter labels the y-axis, and the "main" parameter puts a title on the plot.
To see the numerical values of the ACF, simply use the command acfma1.
The simulation and plots were done with the following commands:
xc=arima.sim(n=150, list(ma=c(0.7))) #Simulates n = 150 values from MA(1)
x=xc+10 # adds 10 to make mean = 10. Simulation defaults to mean = 0.
plot(x,type="b", main="Simulated MA(1) data")
acf(x, xlim=c(1,10), main="ACF for simulated sample data")
In Example 2, we plotted the theoretical ACF of the model $x_t = 10 + w_t + 0.5 w_{t-1} + 0.3 w_{t-2}$, and then simulated n = 150 values from this model and plotted the sample time series and the sample ACF for the simulated data. The R commands used were:
acfma2=ARMAacf(ma=c(0.5,0.3), lag.max=10)
acfma2
lags=0:10
plot(lags,acfma2,xlim=c(1,10), ylab="r",type="h", main = "ACF for MA(2) with theta1 = 0.5,theta2=0.3")
abline(h=0)
xc=arima.sim(n=150, list(ma=c(0.5, 0.3)))
x=xc+10
plot(x, type="b", main = "Simulated MA(2) Series")
acf(x, xlim=c(1,10), main="ACF for simulated MA(2) Data")
Appendix: Proof of Properties of MA(1)
For interested students, here are proofs for the theoretical properties of the MA(1) model.
The 1st order moving average model, denoted by MA(1), is
$$x_t = \mu + w_t + \theta_1 w_{t-1}, \qquad w_t \overset{iid}{\sim} N(0, \sigma_w^2)$$
Mean: $E(x_t) = E(\mu + w_t + \theta_1 w_{t-1}) = \mu + 0 + \theta_1 \cdot 0 = \mu$
Variance: $\text{Var}(x_t) = \text{Var}(w_t) + \theta_1^2 \text{Var}(w_{t-1}) = \sigma_w^2 + \theta_1^2 \sigma_w^2 = (1 + \theta_1^2)\sigma_w^2$
ACF: Consider the covariance between $x_t$ and $x_{t-h}$. This is $E[(x_t - \mu)(x_{t-h} - \mu)]$, which equals
$$E[(w_t + \theta_1 w_{t-1})(w_{t-h} + \theta_1 w_{t-h-1})] = E[w_t w_{t-h} + \theta_1 w_{t-1} w_{t-h} + \theta_1 w_t w_{t-h-1} + \theta_1^2 w_{t-1} w_{t-h-1}]$$
When $h = 1$, the previous expression $= \theta_1 \sigma_w^2$. For any $h \ge 2$, the previous expression $= 0$. The reason is that, by definition of independence of the $w_t$, $E(w_k w_j) = 0$ for any $k \ne j$. Further, because the $w_t$ have mean 0, $E(w_j w_j) = E(w_j^2) = \sigma_w^2$.
For a time series,
$$\rho_h = \frac{\text{Covariance for lag } h}{\text{Variance}}$$
Apply this result to get the ACF given above.
Invertibility Restriction:
An invertible MA model is one that can be written as an infinite order AR model that converges so that the AR coefficients converge to 0 as we move infinitely back in time. We’ll demonstrate invertibility for the MA(1) model.
The MA(1) model can be written as $x_t = \mu + w_t + \theta_1 w_{t-1}$.
If we let $z_t = x_t - \mu$, then the MA(1) model is
(1) $z_t = w_t + \theta_1 w_{t-1}$.
At time $t-1$, the model is $z_{t-1} = w_{t-1} + \theta_1 w_{t-2}$, which can be reshuffled to
(2) $w_{t-1} = z_{t-1} - \theta_1 w_{t-2}$.
We then substitute relationship (2) for $w_{t-1}$ in equation (1):
(3) $z_t = w_t + \theta_1(z_{t-1} - \theta_1 w_{t-2}) = w_t + \theta_1 z_{t-1} - \theta_1^2 w_{t-2}$
At time $t-2$, equation (2) becomes
(4) $w_{t-2} = z_{t-2} - \theta_1 w_{t-3}$.
We then substitute relationship (4) for $w_{t-2}$ in equation (3):
$$z_t = w_t + \theta_1 z_{t-1} - \theta_1^2 (z_{t-2} - \theta_1 w_{t-3}) = w_t + \theta_1 z_{t-1} - \theta_1^2 z_{t-2} + \theta_1^3 w_{t-3}$$
If we were to continue (infinitely), we would get the infinite order AR model
$$z_t = w_t + \theta_1 z_{t-1} - \theta_1^2 z_{t-2} + \theta_1^3 z_{t-3} - \theta_1^4 z_{t-4} + \cdots$$
Note!
However, if $|\theta_1| \ge 1$, the coefficients multiplying the lags of $z$ will increase (infinitely) in size as we move back in time. To prevent this, we need $|\theta_1| < 1$. This is the condition for an invertible MA(1) model.
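The inversion above can be checked numerically. Equivalently, the infinite AR form gives $w_t = z_t - \theta_1 z_{t-1} + \theta_1^2 z_{t-2} - \cdots$, and when $|\theta_1| < 1$ a truncated version of that sum recovers the latest error term almost exactly. A Python sketch (names, seed, and truncation point are ours):

```python
import numpy as np

theta1 = 0.7
rng = np.random.default_rng(7)
n = 500
w = rng.standard_normal(n + 1)
z = w[1:] + theta1 * w[:-1]  # mean-adjusted MA(1): z_t = w_t + theta_1 w_{t-1}

# Invert: w_t = z_t - theta_1 z_{t-1} + theta_1^2 z_{t-2} - ..., truncated at J lags
J = 60
t = n - 1
w_hat = sum((-theta1) ** j * z[t - j] for j in range(J + 1))

recovery_error = abs(w_hat - w[-1])  # truncation error shrinks like |theta_1|**(J+1)
```

With $|\theta_1| \ge 1$ the same truncated sum would blow up instead of converging, which is exactly why invertibility is required.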
Infinite Order MA model
In week 3, we'll see that an AR(1) model can be converted to an infinite order MA model:
$$x_t - \mu = w_t + \phi_1 w_{t-1} + \phi_1^2 w_{t-2} + \cdots + \phi_1^k w_{t-k} + \cdots = \sum_{j=0}^{\infty} \phi_1^j w_{t-j}$$
This summation of past white noise terms is known as the causal representation of an AR(1). In other words, $x_t$ is a special type of MA with an infinite number of terms going back in time. This is called an infinite order MA or MA($\infty$). A finite order MA is an infinite order AR, and any finite order AR is an infinite order MA.
Recall in Week 1, we noted that a requirement for a stationary AR(1) is that $|\phi_1| < 1$. Let's calculate $\text{Var}(x_t)$ using the causal representation.
$$\text{Var}(x_t) = \text{Var}\left(\sum_{j=0}^{\infty} \phi_1^j w_{t-j}\right) = \sum_{j=0}^{\infty} \phi_1^{2j} \sigma_w^2 = \frac{\sigma_w^2}{1 - \phi_1^2}$$
This last step uses a basic fact about geometric series that requires $|\phi_1| < 1$; otherwise, the series diverges.
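A quick numerical confirmation of that geometric-series step (the specific values $\phi_1 = 0.6$, $\sigma_w^2 = 1$ are our illustration):

```python
phi1 = 0.6
sigma2_w = 1.0

# Partial sum of sigma_w^2 * sum_{j>=0} phi_1^(2j), out to 200 terms
partial_sum = sum(sigma2_w * phi1 ** (2 * j) for j in range(200))

# Geometric-series limit, valid only when |phi_1| < 1
closed_form = sigma2_w / (1 - phi1**2)  # 1 / 0.64 = 1.5625
```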
Partial Autocorrelation Function (PACF)
In general, a partial correlation is a conditional correlation. It is the correlation between two variables under the assumption that we know and take into account the values of some other set of variables. For instance, consider a regression context in which y is the response variable and $x_1$, $x_2$, and $x_3$ are predictor variables. The partial correlation between y and $x_3$ is the correlation between the variables determined, taking into account how both y and $x_3$ are related to $x_1$ and $x_2$.
In regression, this partial correlation could be found by correlating the residuals from two different regressions:
- Regression in which we predict y from $x_1$ and $x_2$,
- Regression in which we predict $x_3$ from $x_1$ and $x_2$.
Basically, we correlate the "parts" of y and $x_3$ that are not predicted by $x_1$ and $x_2$.
More formally, we can define the partial correlation just described as
$$\frac{\text{Covariance}(y, x_3 \mid x_1, x_2)}{\sqrt{\text{Variance}(y \mid x_1, x_2)\,\text{Variance}(x_3 \mid x_1, x_2)}}$$
Note!
This is also how the parameters of a regression model are interpreted. Think about the difference between interpreting the regression models
$$y = \beta_0 + \beta_1 x^2 \qquad \text{and} \qquad y = \beta_0 + \beta_1 x + \beta_2 x^2$$
In the first model, $\beta_1$ can be interpreted as the linear dependency between $x^2$ and y. In the second model, $\beta_2$ would be interpreted as the linear dependency between $x^2$ and y WITH the dependency between x and y already accounted for.
For a time series, the partial autocorrelation between $x_t$ and $x_{t-h}$ is defined as the conditional correlation between $x_t$ and $x_{t-h}$, conditional on $x_{t-h+1}, \ldots, x_{t-1}$, the set of observations that come between the time points $t$ and $t-h$.
- The 1st order partial autocorrelation will be defined to equal the 1st order autocorrelation.
- The 2nd order (lag) partial autocorrelation is
$$\frac{\text{Covariance}(x_t, x_{t-2} \mid x_{t-1})}{\sqrt{\text{Variance}(x_t \mid x_{t-1})\,\text{Variance}(x_{t-2} \mid x_{t-1})}}$$
This is the correlation between values two time periods apart, conditional on knowledge of the value in between. (By the way, the two variances in the denominator will equal each other in a stationary series.)
- The 3rd order (lag) partial autocorrelation is
$$\frac{\text{Covariance}(x_t, x_{t-3} \mid x_{t-1}, x_{t-2})}{\sqrt{\text{Variance}(x_t \mid x_{t-1}, x_{t-2})\,\text{Variance}(x_{t-3} \mid x_{t-1}, x_{t-2})}}$$
And so on, for any lag.
Typically, matrix manipulations having to do with the covariance matrix of a multivariate distribution are used to determine estimates of the partial autocorrelations.
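The residual-correlation definition can also be implemented directly. The Python sketch below (simulated series, helper names, and seed are ours) estimates the lag-2 partial autocorrelation of an AR(1) by correlating the parts of $x_t$ and $x_{t-2}$ not predicted by the in-between value $x_{t-1}$; for an AR(1), theory says this should be near 0, while the lag-1 PACF equals the lag-1 autocorrelation.

```python
import numpy as np

# Simulate an AR(1): x_t = 0.6 x_{t-1} + w_t
rng = np.random.default_rng(3)
n = 4000
w = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + w[t]

def residuals(target, predictor):
    """Residuals from an ordinary least-squares regression of target on predictor."""
    X = np.column_stack([np.ones_like(predictor), predictor])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return target - X @ beta

# Lag-2 PACF: correlate the parts of x_t and x_{t-2} not predicted by x_{t-1}
y, mid, lag2 = x[2:], x[1:-1], x[:-2]
pacf2 = np.corrcoef(residuals(y, mid), residuals(lag2, mid))[0, 1]

# Lag-1 PACF equals the lag-1 autocorrelation (about 0.6 here)
pacf1 = np.corrcoef(x[1:], x[:-1])[0, 1]
```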
Some Useful Facts About PACF and ACF Patterns
Identification of an AR model is often best done with the PACF.
- For an AR model, the theoretical PACF "shuts off" past the order of the model. The phrase "shuts off" means that, in theory, the partial autocorrelations are equal to 0 beyond that point. Put another way, the number of non-zero partial autocorrelations gives the order of the AR model. By the "order of the model", we mean the most extreme lag of x that is used as a predictor.
Example: In Lesson 1.2, we identified an AR(1) model for a time series of annual numbers of worldwide earthquakes having a seismic magnitude greater than 7.0. Following is the sample PACF for this series. Note that the first lag value is statistically significant, whereas partial autocorrelations for all other lags are not statistically significant. This suggests a possible AR(1) model for these data.
[Figure: sample PACF for the earthquake series]
Identification of an MA model is often best done with the ACF rather than the PACF.
For an MA model, the theoretical PACF does not shut off but instead tapers toward 0 in some manner. A clearer pattern for an MA model is in the ACF. The ACF will have non-zero autocorrelations only at lags involved in the model.
Lesson 2.1 included the following sample ACF for a simulated MA(1) series. Note that the first lag autocorrelation is statistically significant, whereas all subsequent autocorrelations are not. This suggests a possible MA(1) model for the data.
Theory Note!
The model used for the simulation was $x_t = 10 + w_t + 0.7 w_{t-1}$. In theory, the first lag autocorrelation is $\theta_1 / (1 + \theta_1^2) = 0.7 / (1 + 0.7^2) = 0.4698$, and autocorrelations for all other lags = 0.
[Figure: sample ACF for the simulated MA(1) series]
The underlying model used for the MA(1) simulation in Lesson 2.1 was $x_t = 10 + w_t + 0.7 w_{t-1}$. Following is the theoretical PACF (partial autocorrelation) for that model. Note that the pattern gradually tapers to 0.
[Figure: theoretical PACF of an MA(1) with $\theta_1 = 0.7$]
The PACF just shown was created in R with these two commands:
ma1pacf = ARMAacf(ma = c(.7),lag.max = 36, pacf=TRUE)
plot(ma1pacf, type="h", main="Theoretical PACF of MA(1) with theta1 = 0.7")
Notational Conventions
Backshift Operator
Using B before either a value of the series $x_t$ or an error term $w_t$ means to move that element back one time period. For example, $B x_t = x_{t-1}$. A "power" of B means to repeatedly apply the backshift in order to move back a number of time periods that equals the "power". As an example, $B^2 x_t = x_{t-2}$, and in general $B^k x_t = x_{t-k}$.
AR Models and the AR Polynomial
AR models can be written compactly using an "AR polynomial" involving coefficients and backshift operators. Let p = the maximum order (lag) of the AR terms in the model. The general form for an AR polynomial is
$$\Phi(B) = 1 - \phi_1 B - \phi_2 B^2 - \cdots - \phi_p B^p$$
Using the AR polynomial, one way to write an AR model is
$$\Phi(B) x_t = \delta + w_t$$
Example 2-3
Consider the AR(1) model $x_t = \delta + \phi_1 x_{t-1} + w_t$. For this model, the AR polynomial is $\Phi(B) = 1 - \phi_1 B$, and the model can be written
$$(1 - \phi_1 B) x_t = \delta + w_t$$
To check that this works, we can multiply out the left side to get
$$x_t - \phi_1 x_{t-1} = \delta + w_t$$
Then, swing the $-\phi_1 x_{t-1}$ term over to the right side to recover $x_t = \delta + \phi_1 x_{t-1} + w_t$.
An AR(2) model is $x_t = \delta + \phi_1 x_{t-1} + \phi_2 x_{t-2} + w_t$, with AR polynomial $\Phi(B) = 1 - \phi_1 B - \phi_2 B^2$. The AR(2) model could be written as
$$(1 - \phi_1 B - \phi_2 B^2) x_t = \delta + w_t$$
An AR(p) model is
$$(1 - \phi_1 B - \phi_2 B^2 - \cdots - \phi_p B^p) x_t = \delta + w_t$$
MA Models
An MA(1) model can be written as $x_t = \mu + (1 + \theta_1 B) w_t$. An MA(2) model is defined as
$$x_t = \mu + (1 + \theta_1 B + \theta_2 B^2) w_t$$
In general, the MA polynomial is
$$\Theta(B) = 1 + \theta_1 B + \theta_2 B^2 + \cdots + \theta_q B^q,$$
and an MA(q) model is $x_t = \mu + \Theta(B) w_t$.
Models with Both AR and MA Terms
A model that involves both AR and MA terms might be written
$$\Phi(B)(x_t - \mu) = \Theta(B) w_t$$
Differencing
Often differencing is used to account for nonstationarity that occurs in the form of trend and/or seasonality. The difference $x_t - x_{t-1}$ can be expressed as $(1 - B) x_t$.
An alternative notation for a difference is $\nabla = 1 - B$.
Thus $\nabla x_t = (1 - B) x_t = x_t - x_{t-1}$.
A subscript defines a difference of a lag equal to the subscript. For instance,
$$\nabla_{12} x_t = x_t - x_{t-12}$$
This type of difference is often used with monthly data that exhibits seasonality. The idea is that differences from the previous year may be, on average, about the same for each month of a year.
A superscript says to repeat the differencing the specified number of times. As an example,
$$\nabla^2 x_t = (1 - B)^2 x_t = (1 - 2B + B^2) x_t = x_t - 2x_{t-1} + x_{t-2}$$
In words, this is a first difference of the first differences.
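These differencing identities are easy to confirm numerically; a short Python sketch (the example series is ours):

```python
import numpy as np

x = np.array([5.0, 7.0, 6.0, 9.0, 13.0, 12.0])

first_diff = np.diff(x)            # (1 - B) x_t = x_t - x_{t-1}
second_diff = np.diff(first_diff)  # (1 - B)^2 x_t: a first difference of the first differences

# (1 - B)^2 x_t expands to x_t - 2 x_{t-1} + x_{t-2}
expanded = x[2:] - 2 * x[1:-1] + x[:-2]
```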