Time Series Basics
Site: Saylor Academy
Course: CS250: Python for Data Science
Book: Time Series Basics
Description
A time series is a set of points that are ordered in time. Each time point is usually assigned an integer index to indicate its position within the series. For example, you can construct a time series by measuring and computing an average daily temperature. When the outcome of the next point in a time series is unknown, the time series is said to be random or "stochastic" in nature. A simple example would be creating a time series from a sequential set of coin flips with outcomes of either heads or tails. A more practical example is the time series of prices of a given stock.
When the unconditional joint probability distribution of the series does not change with time (it is time-invariant), the stochastic process generating the time series is said to be stationary. Under these circumstances, parameters such as the mean and standard deviation do not change over time. Assuming the same coin for each flip, the coin flip is an example of a stationary process. On the other hand, stock price data is not a stationary process.
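To make the stationary/nonstationary contrast concrete, here is a small Python sketch (illustrative only, and not part of the original lesson). The coin flips follow the text's example; the random walk is a standard nonstationary counterpart, wandering much like stock prices:

```python
import random

random.seed(0)
n = 10_000

# Coin flips (1 = heads, 0 = tails): a stationary process -- the
# distribution generating each point never changes over time.
flips = [random.randint(0, 1) for _ in range(n)]

# A random walk: a simple nonstationary process -- its variance
# grows with time, much like wandering stock prices.
walk, total = [], 0
for _ in range(n):
    total += random.choice([-1, 1])
    walk.append(total)

def mean(xs):
    return sum(xs) / len(xs)

half = n // 2
# The stationary series has nearly the same mean in both halves;
# the random walk usually does not.
print(abs(mean(flips[:half]) - mean(flips[half:])))  # small
print(abs(mean(walk[:half]) - mean(walk[half:])))    # typically much larger
```

Comparing summary statistics across different stretches of time like this is a rough first check of the time-invariance idea defined above.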
This unit aims to use your knowledge of statistics to model time series data for random processes. Even though the outcome of the next time point is unknown, given the time series statistics, it should be possible to make inferences if you can create a model. The concept of a stationary random process is central to statistical model building. Since nonstationary processes require a bit more sophistication than stationary processes, it is important to understand what type of time series is being modeled. Our first step in this direction is to introduce the autoregressive (AR) model. This linear model can be used to estimate current time series values based on known past time series values. Read through this article, which introduces the idea behind AR models and additionally explains the autocorrelation function (ACF).
Overview
This first lesson will introduce you to time series data and important characteristics of time series data. We will also begin some basic modeling. Topics covered include first-order autoregressive models and the autocorrelation function.
Source: The Pennsylvania State University, https://online.stat.psu.edu/stat510/lesson/1
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 License.
Overview of Time Series Characteristics
In this lesson, we'll describe some important features that we must consider when describing and modeling a time series. This is meant to be an introductory overview, illustrated by example, and not a complete look at how we model a univariate time series. Here, we'll only consider univariate time series. We'll examine relationships between two or more time series later on.
Univariate Time Series
A univariate time series is a sequence of measurements of the same variable collected over time. Most often, the measurements are made at regular time intervals.
One difference from standard linear regression is that the data are not necessarily independent and not necessarily identically distributed. One defining characteristic of a time series is that it is a list of observations where the ordering matters. Ordering is very important because there is dependency, and changing the order could change the meaning of the data.
Basic Objectives of the Analysis
The basic objective usually is to determine a model that describes the pattern of the time series. Uses for such a model are:
- To describe the important features of the time series pattern.
- To explain how the past affects the future or how two time series can "interact".
- To forecast future values of the series.
- To possibly serve as a control standard for a variable that measures the quality of the product in some manufacturing situations.
Types of Models
There are two basic types of "time domain" models.
- Models that relate the present value of a series to past values and past prediction errors - these are called ARIMA models (for Autoregressive Integrated Moving Average). We'll spend substantial time on these.
- Ordinary regression models that use time indices as x-variables. These can be helpful for an initial description of the data and form the basis of several simple forecasting methods.
Important Characteristics to Consider First
Some important questions to consider when first looking at a time series are:
- Is there a trend, meaning that, on average, the measurements tend to increase (or decrease) over time?
- Is there seasonality, meaning that there is a regularly repeating pattern of highs and lows related to calendar time, such as seasons, quarters, months, days of the week, and so on?
- Are there outliers? In regression, outliers are far away from your line. With time series data, your outliers are far away from your other data.
- Is there a long-run cycle or period unrelated to seasonality factors?
- Is there constant variance over time, or is the variance non-constant?
- Are there any abrupt changes to either the level of the series or the variance?
Example 1-1
The following plot is a time series plot of the annual number of earthquakes in the world with a seismic magnitude over 7.0 for 99 consecutive years. By a time series plot, we simply mean that the variable is plotted against time.
Some features of the plot:
- There is no consistent trend (upward or downward) over the entire time span. The series appears to slowly wander up and down. The horizontal line drawn at quakes = 20.2 indicates the mean of the series. Notice that the series tends to stay on the same side of the mean (above or below) for a while and then wanders to the other side.
- Almost by definition, there is no seasonality as the data are annual data.
- There are no obvious outliers.
- It's difficult to judge whether the variance is constant or not.
One of the simplest ARIMA-type models is a model in which we use a linear model to predict the value at the present time using the value at the previous time. This is called an AR(1) model, standing for autoregressive model of order 1. The order of the model indicates how many previous times we use to predict the present time.
A start in evaluating whether an AR(1) might work is to plot values of the series against lag 1 values of the series. Let x_t denote the value of the series at any particular time t, so x_{t-1} denotes the value of the series one time before time t. That is, x_{t-1} is the lag 1 value of x_t. As a short example, here are the first five values in the earthquake series, along with their lag 1 values.
t | x_t | x_{t-1} (lag 1 value) |
---|---|---|
1 | 13 | * |
2 | 14 | 13 |
3 | 8 | 14 |
4 | 10 | 8 |
5 | 16 | 10 |
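The table's lag 1 construction can be reproduced in a few lines of Python (the course's language; this sketch is not part of the original lesson), using the five earthquake values shown above:

```python
# First five values of the earthquake series, as in the table above.
x = [13, 14, 8, 10, 16]

# Pair each value with its lag 1 value. The first observation has no
# predecessor, matching the '*' in the table.
pairs = [(t + 1, x[t], x[t - 1] if t > 0 else None) for t in range(len(x))]
for t, xt, xlag in pairs:
    print(t, xt, xlag)
```

Plotting the second column against the third is exactly the lag 1 scatterplot described next.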
For the complete earthquake dataset, here's a plot of x_t versus x_{t-1}:

Although it's only a moderately strong relationship, there is a positive linear association, so an AR(1) model might be a useful model.
The AR(1) model
Theoretically, the AR(1) model is written

x_t = δ + φ_1 x_{t-1} + w_t
Assumptions:
- w_t ~ iid N(0, σ_w^2), meaning that the errors are independently distributed with a normal distribution that has mean 0 and constant variance.
- Properties of the errors w_t are independent of x.
This is essentially the ordinary simple linear regression equation, but there is one difference. Although it's not usually true in practice, in ordinary least squares regression we assume that the x-variable is not random but instead is something we can control. That's not the case here, but in our first encounter with time series, we'll overlook that and use ordinary regression methods. We'll do things the "right" way later in the course.
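As a sketch of this ordinary-regression approach in Python (illustrative only; the lesson itself uses Minitab, and the data here are simulated from an AR(1) with slope 0.5 and intercept 10 — assumed values chosen for the demonstration, not the earthquake series):

```python
import random

random.seed(1)
phi, delta = 0.5, 10.0          # assumed true AR(1) parameters

# Simulate an AR(1) series, starting from its stationary mean
# delta / (1 - phi) = 20.
x = [20.0]
for _ in range(499):
    x.append(delta + phi * x[-1] + random.gauss(0, 6))

# Ordinary simple linear regression of x_t on its lag 1 value.
resp = x[1:]                    # x_t
pred = x[:-1]                   # x_{t-1}
n = len(resp)
mp = sum(pred) / n
mr = sum(resp) / n
slope = sum((p - mp) * (r - mr) for p, r in zip(pred, resp)) / \
        sum((p - mp) ** 2 for p in pred)
intercept = mr - slope * mp
print(round(intercept, 2), round(slope, 3))  # estimates of delta and phi
```

With a long enough simulated series, the fitted slope and intercept land close to the values used to generate the data, which is the sense in which ordinary regression "works" here despite the random x-variable.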
Following is the Minitab output for the AR(1) regression in this example:
quakes = 9.19 + 0.543 lag1
98 cases used; 1 case contains missing values
Predictor | Coef | SE Coef | T | P |
---|---|---|---|---|
Constant | 9.191 | 1.819 | 5.05 | 0.000 |
lag1 | 0.54339 | 0.08528 | 6.37 | 0.000 |
S = 6.12239 R-Sq = 29.7% R-Sq(adj) = 29.0%
We see that the slope coefficient is significantly different from 0, so the lag 1 variable is a helpful predictor. The R-Sq value of 29.7%, though, indicates that the lag 1 variable explains only a modest portion of the variation in the series.
Residual Analysis
In traditional regression, a plot of residuals versus fits is a useful diagnostic tool. The ideal for this plot is a horizontal band of points. Following is a plot of residuals versus predicted values for our estimated model. It doesn't show any serious problems. There might be one possible outlier at a fitted value of about 28.

Example 1-2
The following plot shows a time series of quarterly production of beer in Australia for 18 years.

Some important features are:
- There is an upward trend, possibly a curved one.
- There is seasonality – a regularly repeating pattern of highs and lows related to quarters of the year.
- There are no obvious outliers.
- There might be increasing variation as we move across time, although that's uncertain.
Classical regression methods for trend and seasonal effects
To use traditional regression methods, we might model the pattern in the beer production data as a combination of the trend over time and quarterly effect variables. Suppose that the observed series is x_t, for t = 1, 2, …, n.

- For a linear trend, use t (the time index) as a predictor variable in a regression.
- For a quadratic trend, we might consider using both t and t^2.
- For quarterly data, with possible seasonal (quarterly) effects, we can define indicator variables such as S_j = 1 if the observation is in quarter j of a year and 0 otherwise. There are 4 such indicators.

Let w_t denote the usual random error. A model with additive components for trend (including a quadratic term) and seasonal (quarterly) effects might be written

x_t = β_1 t + β_2 t^2 + α_1 S_1 + α_2 S_2 + α_3 S_3 + α_4 S_4 + w_t
Note!
We've deleted the "intercept" from the model. This isn't necessary, but if we include it we'll have to drop one of the seasonal effect variables from the model to avoid collinearity issues.
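The predictors described above — the time index t, its square, and four quarterly indicators — can be constructed with a short Python sketch (illustrative only; the lesson fits this regression in Minitab, and the 8-quarter length here is an arbitrary choice for display):

```python
# Build the design-matrix rows for the trend-plus-quarterly model:
# [t, t^2, S_1, S_2, S_3, S_4], with quarters cycling 1,2,3,4,1,2,...
n = 8
rows = []
for t in range(1, n + 1):
    quarter = (t - 1) % 4 + 1
    indicators = [1 if quarter == j else 0 for j in (1, 2, 3, 4)]
    rows.append([t, t ** 2] + indicators)

for row in rows:
    print(row)
```

Each row has exactly one indicator equal to 1, which is why keeping all four indicators forces us to drop the intercept, as the note above explains. The Minitab output below shows the fitted coefficients for these six predictors.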
Predictor | Coef | SE Coef | T | P |
---|---|---|---|---|
Noconstant | ||||
Time | 0.5881 | 0.2193 | 2.68 | 0.009 |
tsqrd | 0.031214 | 0.002911 | 10.72 | 0.000 |
quarter_1 | 261.930 | 3.937 | 66.52 | 0.000 |
quarter_2 | 212.165 | 3.968 | 53.48 | 0.000 |
quarter_3 | 228.415 | 3.994 | 57.18 | 0.000 |
quarter_4 | 310.880 | 4.018 | 77.37 | 0.000 |
Residual Analysis
Sample Autocorrelation Function (ACF)
The ACF can be used to identify the possible structure of time series data. That can be tricky, as there often isn't a single clear-cut interpretation of a sample autocorrelation function. We'll get started on that in Lesson 1.2 this week. The ACF of the residuals for a model is also useful. The ideal for a sample ACF of residuals is that there aren't any significant correlations for any lag.
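As a concrete sketch of what a sample ACF computation involves (illustrative Python, not the R acf() used later in the lesson; the formula is the standard estimator, lag-h autocovariance divided by the lag-0 autocovariance):

```python
# A minimal sample ACF: r_h = c_h / c_0, where c_h is the lag-h
# sample autocovariance of the series.
def sample_acf(x, max_lag):
    n = len(x)
    xbar = sum(x) / n
    c0 = sum((v - xbar) ** 2 for v in x) / n
    acf = []
    for h in range(1, max_lag + 1):
        ch = sum((x[t] - xbar) * (x[t - h] - xbar) for t in range(h, n)) / n
        acf.append(ch / c0)
    return acf

print(sample_acf([1, 2, 3, 4, 5], 2))  # → [0.4, -0.1]
```

Autocorrelations outside roughly ±2/√n are conventionally flagged as significant, which is what the bounds on the residual ACF plots indicate.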
Following is the ACF of the residuals for Example 1, the earthquake example, where we used an AR(1) model. The "lag" (time span between observations) is shown along the horizontal, and the autocorrelation is on the vertical. The red lines indicate bounds for statistical significance. This is a good ACF for residuals. Nothing is significant; that's what we want for residuals.
The ACF of the residuals for the quadratic trend plus seasonality model we used for Example 2 looks good too. Again, there appears to be no significant autocorrelation in the residuals. The ACF of the residuals follows:
Lesson 1.2 will give more details about the ACF. Lesson 1.3 will give some R code for examples in Lesson 1.1 and Lesson 1.2.
Sample ACF and Properties of AR(1) Model
Stationary Series
As a preliminary, we define an important concept, that of a stationary series. For an ACF to make sense, the series must be a weakly stationary series. This means that the autocorrelation for any particular lag is the same regardless of where we are in time.

(Weakly) Stationary Series

A series x_t is said to be (weakly) stationary if it satisfies the following properties:

- The mean E(x_t) is the same for all t.
- The variance of x_t is the same for all t.
- The covariance (and also correlation) between x_t and x_{t-h} is the same for all t at each lag h = 1, 2, 3, etc.
Autocorrelation Function (ACF)

For a stationary series, the autocorrelation function (ACF) between x_t and x_{t-h}, for any lag h, is

ρ_h = Cov(x_t, x_{t-h}) / [sd(x_t) · sd(x_{t-h})] = Cov(x_t, x_{t-h}) / Var(x_t)

The denominator in the second formula occurs because the standard deviation of a stationary series is the same at all times.

The last property of a weakly stationary series says that the theoretical value of the autocorrelation at a particular lag is the same across the whole series. An interesting property of a stationary series is that, theoretically, it has the same structure forwards as it does backward.
The First-order Autoregression Model

The algebraic expression of the AR(1) model is

x_t = δ + φ_1 x_{t-1} + w_t

Assumptions

- w_t ~ iid N(0, σ_w^2), meaning that the errors are independently distributed with a normal distribution that has mean 0 and constant variance.
- Properties of the errors w_t are independent of x.
- The series x_1, x_2, … is (weakly) stationary. A requirement for a stationary AR(1) is that |φ_1| < 1. We'll see why below.
Properties of the AR(1)
Formulas for the mean, variance, and ACF for a time series process with an AR(1) model follow.

- The (theoretical) mean of x_t is E(x_t) = μ = δ / (1 − φ_1)
- The variance of x_t is Var(x_t) = σ_w^2 / (1 − φ_1^2)
- The correlation between observations h time periods apart is ρ_h = φ_1^h
Note!
φ_1 is the slope in the AR(1) model, and we now see that it is also the lag 1 autocorrelation.

Pattern of ACF for AR(1) Model

The ACF property defines a distinct pattern for the autocorrelations. For a positive value of φ_1, the ACF exponentially decreases to 0 as the lag h increases. For negative φ_1, the ACF also exponentially decays to 0 as the lag increases, but the algebraic signs for the autocorrelations alternate between positive and negative.

The tapering pattern:

The alternating and tapering pattern.
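The two patterns can be generated directly from the property ρ_h = φ_1^h; a quick Python sketch (φ_1 = ±0.6 is an arbitrary illustrative choice, not a value from the lesson):

```python
# Theoretical AR(1) autocorrelations: rho_h = phi_1 ** h.
def ar1_acf(phi, max_lag):
    return [phi ** h for h in range(1, max_lag + 1)]

print(ar1_acf(0.6, 5))    # all positive, tapering toward 0
print(ar1_acf(-0.6, 5))   # alternating sign, tapering in magnitude
```

Recognizing these two shapes in a sample ACF is the practical skill this section is building toward.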
Example 1-3
In Example 1 of Lesson 1.1, we used an AR(1) model for annual earthquakes in the world with a seismic magnitude greater than 7. Here's the sample ACF of the series:

Lag | ACF |
---|---|
1 | 0.541733 |
2 | 0.418884 |
3 | 0.397955 |
4 | 0.324047 |
5 | 0.237164 |
6 | 0.171794 |
7 | 0.190228 |
8 | 0.061202 |
9 | -0.048505 |
10 | -0.106730 |
11 | -0.043271 |
12 | -0.072305 |
The sample autocorrelations taper, although not as fast as they should for an AR(1). For instance, theoretically, the lag 2 autocorrelation for an AR(1) equals the square of the lag 1 autocorrelation. Here, the observed lag 2 autocorrelation is 0.418884. That's somewhat greater than the squared value of the first lag autocorrelation (0.541733^2 ≈ 0.293). But we managed to do okay (in Lesson 1.1) with an AR(1) model for the data. For instance, the residuals looked okay. This brings up an important point – the sample ACF will rarely fit a perfect theoretical pattern. A lot of the time, you just have to try a few models to see what fits.
We'll study the ACF patterns of other ARIMA models during the next three weeks. Each model has a different pattern for its ACF, but in practice, the interpretation of a sample ACF is not always so clear-cut.
Example 1-4
Here's a time series of the weekly cardiovascular mortality rate in Los Angeles County, 1970-1979.

There is a slight downward trend, so the series may not be stationary. To create a (possibly) stationary series, we'll examine the first differences x_t − x_{t-1}. This is a common time series method for creating a de-trended series and, thus, potentially a stationary series. Think about a straight line – there are constant differences in the average value of the series for each change of 1 unit in t.
The time series plot of the first differences is the following:
The following plot is the sample estimate of the autocorrelation function of 1st differences:
Lag | ACF |
---|---|
1 | -0.506029 |
2 | 0.205100 |
3 | -0.126110 |
4 | 0.062476 |
5 | -0.015190 |
The lag 2 correlation is roughly equal to the squared value of the lag 1 correlation. The lag 3 correlation is nearly exactly equal to the cubed value of the lag 1 correlation, and the lag 4 correlation nearly equals the fourth power of the lag 1 correlation. Thus an AR(1) model may be a suitable model for the first differences x_t − x_{t-1}.
If we let y_t denote the first differences, so that y_t = x_t − x_{t-1} and y_{t-1} = x_{t-1} − x_{t-2}, then we can write this AR(1) model as

y_t = δ + φ_1 y_{t-1} + w_t
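The first-differencing step can be sketched in a couple of lines of Python (the short series with a roughly linear trend is made up for illustration; it is not the mortality data):

```python
# First differences: y_t = x_t - x_{t-1}, the de-trending step
# described above.
x = [10.0, 12.0, 13.5, 16.0, 17.0]
y = [x[t] - x[t - 1] for t in range(1, len(x))]
print(y)  # the differenced series has one fewer observation
```

For a series with a straight-line trend, these differences hover around a constant, which is why differencing can turn a trending series into a (possibly) stationary one.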
Appendix: Derivations of Properties of AR(1)

Generally, you won't be responsible for reproducing theoretical derivations, but interested students may want to see the derivations for the theoretical properties of an AR(1). The algebraic expression of the model is as follows:

x_t = δ + φ_1 x_{t-1} + w_t
Assumptions
- w_t ~ iid N(0, σ_w^2), meaning that the errors are independently distributed with a normal distribution that has mean 0 and constant variance.
- Properties of the errors w_t are independent of x.
- The series x_1, x_2, … is (weakly) stationary. A requirement for a stationary AR(1) is that |φ_1| < 1. We'll see why below.
Mean

E(x_t) = E(δ + φ_1 x_{t-1} + w_t) = δ + φ_1 E(x_{t-1}) + 0

With the stationarity assumption, E(x_t) = E(x_{t-1}) = μ, so μ = δ + φ_1 μ, which gives

μ = δ / (1 − φ_1)
Variance

By the independence of the errors from the past of the series, Var(x_t) = Var(δ) + φ_1^2 Var(x_{t-1}) + Var(w_t). With the stationarity assumption, Var(x_t) = Var(x_{t-1}), so Var(x_t) = φ_1^2 Var(x_t) + σ_w^2, which gives

Var(x_t) = σ_w^2 / (1 − φ_1^2)

This shows why a stationary AR(1) requires |φ_1| < 1: otherwise, the variance would be 0 or negative.
Autocorrelation Function (ACF)
To start, assume the data have mean 0, which happens when δ = 0, so that x_t = φ_1 x_{t-1} + w_t. In practice, this isn't necessary, but it simplifies matters. Values of variances, covariances, and correlations are not affected by the specific value of the mean.

Let γ_h = Cov(x_t, x_{t-h}), the covariance between observations h time periods apart (when the mean = 0). Let ρ_h = γ_h / γ_0, the correlation between observations that are h time periods apart.
Covariance and correlation between observations one time period apart

γ_1 = Cov(x_t, x_{t-1}) = E(x_t x_{t-1}) = E((φ_1 x_{t-1} + w_t) x_{t-1}) = φ_1 E(x_{t-1}^2) = φ_1 γ_0

so that

ρ_1 = γ_1 / γ_0 = φ_1 γ_0 / γ_0 = φ_1
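A quick simulation check of this derivation (illustrative Python; the mean-zero AR(1) with φ_1 = 0.7 is an arbitrary choice, not a series from the lesson): the lag 1 sample autocorrelation should land near φ_1.

```python
import random

random.seed(2)
phi = 0.7

# Simulate a long mean-zero AR(1): x_t = phi * x_{t-1} + w_t.
x = [0.0]
for _ in range(19_999):
    x.append(phi * x[-1] + random.gauss(0, 1))

# Lag 1 sample autocorrelation: c_1 / c_0.
n = len(x)
m = sum(x) / n
c0 = sum((v - m) ** 2 for v in x) / n
c1 = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, n)) / n
print(round(c1 / c0, 3))  # close to phi = 0.7
```

The same experiment at lag h would give values near φ_1^h, matching the ACF property stated earlier.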
R Code for Two Examples in Lessons 1.1 and 1.2
One example in Lesson 1.1 and Lesson 1.2 concerned the annual number of earthquakes worldwide with a magnitude greater than 7.0 on the seismic scale. We identified an AR(1) model (autoregressive model of order 1), estimated the model, and assessed the residuals. Below is the R code that will accomplish these tasks. The first line reads the data from a file named quakes.dat. The data are listed in time order from left to right in the lines of the file. If you were to download the file, you should download it into a folder that you create for storing course data. Then in R, change the working directory to this folder.
The commands below include explanatory comments, following the #. Those comments do not have to be entered for the command to work.
x=scan("quakes.dat")
x=ts(x) #this makes sure R knows that x is a time series
plot(x, type="b") #time series plot of x with points marked as "o"
install.packages("astsa")
library(astsa) # See note 1 below
lag1.plot(x,1) # Plots x versus lag 1 of x.
acf(x, xlim=c(1,19)) # Plots the ACF of x for lags 1 to 19
xlag1=lag(x,-1) # Creates a lag 1 of x variable. See note 2
y=cbind(x,xlag1) # See note 3 below
ar1fit=lm(y[,1]~y[,2]) # Does regression, stores results in object named ar1fit
summary(ar1fit) # This lists the regression results
plot(ar1fit$fit,ar1fit$residuals) #plot of residuals versus fits
acf(ar1fit$residuals, xlim=c(1,18)) # ACF of the residuals for lags 1 to 18
Note!
- The astsa library accesses R script(s) written by one of the authors of our textbook (Stoffer). In our program, the lag1.plot command is part of that script. You may read more about the library on the website for our text: ASTSA R package. You must install the astsa package in R before loading the commands in the library statement. Not all available packages are included when you install R on your machine (Cran R Project Web Packages). You only need to run install.packages("astsa") once. In subsequent sessions, the library command alone will bring the commands into your current session.
- Note the negative value for the lag in xlag1=lag(x,-1). To lag back in time in R, use a negative lag.
- This is a bit tricky. For whatever reason, R has to bind together a variable with its lags for the lags to be in the proper connection with the original variable. The cbind and the ts.intersect commands both accomplish this task. In the code above, the lagged variable and the original variable become the first and second columns of a matrix named y. The regression command (lm) uses these two columns of y as the response and predictor variables in the regression.
General Note!
If a command that includes quotation marks doesn't work when you copy and paste from course notes to R, try typing the command in R instead.
The results, as given by R, follow.
The time series plot for the quakes series.
Plot of x versus lag 1 of x.
The ACF of the quakes series.
The regression results:
Estimate | Std. Error | t value | Pr(>|t|) | |
---|---|---|---|---|
(Intercept) | 9.19070 | 1.81924 | 5.052 | 2.08e-06 *** |
y[, 2] | 0.54339 | 0.08528 | 6.372 | 6.47e-09 *** |
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 6.122 on 96 degrees of freedom
(2 observations deleted due to missingness)
Multiple R-squared: 0.2972, Adjusted R-squared: 0.2899
Plot of residuals versus fits
ACF of residuals
An example in Lesson 1.2 for this week concerned the weekly cardiovascular mortality rate in Los Angeles County. We used a first difference to account for a linear trend and determined that the first differences may follow an AR(1) model.
Following are R commands for the analysis. Again, the commands are commented using #comment.
mort=scan("cmort.dat")
plot(mort, type="o") # plot of mortality rate
mort=ts(mort)
mortdiff=diff(mort,1) # creates a variable = x(t) – x(t-1)
plot(mortdiff,type="o") # plot of first differences
acf(mortdiff,xlim=c(1,24)) # ACF of first differences, for 24 lags
mortdifflag1=lag(mortdiff,-1)
y=cbind(mortdiff,mortdifflag1) # bind first differences and lagged first differences
mortdiffar1=lm(y[,1]~y[,2]) # AR(1) regression for first differences
summary(mortdiffar1) # regression results
acf(mortdiffar1$residuals, xlim = c(1,24)) # ACF of residuals for 24 lags.
We'll leave it to you to try the code and see the output if you wish.