The residual errors from forecasts on a time series provide another source of information that we can model.
Residual errors themselves form a time series that can have temporal structure. A simple autoregression model of this structure can be used to predict the forecast error, which in turn can be used to correct forecasts. This type of model is called a moving average model; it shares its name with moving average smoothing but is a very different technique.
In this tutorial, you will discover how to model a residual error time series and use it to correct predictions with Python.
After completing this tutorial, you will know:
 How to model a residual error time series using an autoregressive model.
 How to develop and evaluate a model of residual error time series.
 How to use a model of residual error to correct predictions and improve forecast skill.
Let’s get started.
Model of Residual Errors
The difference between what was expected and what was predicted is called the residual error.
It is calculated as:

residual error = expected - predicted
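As a quick, concrete illustration, the calculation in Python is a simple subtraction (the numbers below are invented for the example):

# hypothetical expected and predicted values, for illustration only
expected = [35, 32, 30]
predicted = [33, 34, 30]
residuals = [e - p for e, p in zip(expected, predicted)]
print(residuals)  # prints [2, -2, 0]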
Just like the input observations themselves, the residual errors from a time series can have temporal structure like trends, bias, and seasonality.
Any temporal structure in the time series of residual forecast errors is useful as a diagnostic as it suggests information that could be incorporated into the predictive model. An ideal model would leave no structure in the residual error, just random fluctuations that cannot be modeled.
Structure in the residual error can also be modeled directly. There may be complex signals in the residual error that are difficult to directly incorporate into the model. Instead, you can create a model of the residual error time series and predict the expected error for your model.
The predicted error can then be added to the model prediction and in turn provide an additional lift in performance.
A simple and effective model of residual error is an autoregression. This is where some number of lagged error values are used to predict the error at the next time step. These lag errors are combined in a linear regression model, much like an autoregression model of the direct time series observations.
An autoregression of the residual error time series is called a Moving Average (MA) model. This is confusing because it has nothing to do with the moving average smoothing process. Think of it as the sibling to the autoregressive (AR) process, except on lagged residual error rather than lagged raw observations.
In this tutorial, we will develop an autoregression model of the residual error time series.
Before we dive in, let’s look at a univariate dataset for which we will develop a model.
Daily Female Births Dataset
This dataset describes the number of daily female births in California in 1959.
The units are a count and there are 365 observations. The source of the dataset is credited to Newton (1988).
Download and learn more about the dataset here.
Download the dataset and place it in your current working directory with the filename “daily-total-female-births.csv“.
Below is an example of loading the Daily Female Births dataset from CSV.

from pandas import Series
from matplotlib import pyplot
series = Series.from_csv('daily-total-female-births.csv', header=0)
print(series.head())
series.plot()
pyplot.show()
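Note that Series.from_csv was removed in later versions of pandas. If your installed version no longer provides it, a roughly equivalent load using read_csv might look like the sketch below (the ‘Births’ column name matches the output shown next).

from pandas import read_csv
from matplotlib import pyplot
# load the CSV and pull out the single column of daily counts as a Series
dataframe = read_csv('daily-total-female-births.csv', header=0, index_col=0, parse_dates=True)
series = dataframe['Births']
print(series.head())
series.plot()
pyplot.show()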
Running the example prints the first 5 rows of the loaded file.

Date
1959-01-01    35
1959-01-02    32
1959-01-03    30
1959-01-04    31
1959-01-05    44
Name: Births, dtype: int64
The dataset is also shown in a line plot of observations over time.
We can see that there is no obvious trend or seasonality. The dataset looks stationary, which is an expectation when using an autoregression model.
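If you want to check this assumption more formally, one option (not part of the original example) is an Augmented Dickey-Fuller test from statsmodels; a small p-value suggests the series is stationary.

from statsmodels.tsa.stattools import adfuller
# series is the loaded Daily Female Births data from above
result = adfuller(series.values)
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])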
Persistence Forecast Model
The simplest forecast that we can make is to forecast that what happened in the previous time step will be the same as what will happen in the next time step.
This is called the “naive forecast” or the persistence forecast model. This model will provide the predictions from which we can calculate the residual error time series. Alternately, we could develop an autoregression model of the time series and use that as our model. We will not develop an autoregression model in this case for brevity and to focus on the model of residual error.
We can implement the persistence model in Python.
After the dataset is loaded, it is framed as a supervised learning problem. A lagged version of the dataset is created where the prior time step (t-1) is used as the input variable and the next time step (t+1) is taken as the output variable.

# create lagged dataset
values = DataFrame(series.values)
dataframe = concat([values.shift(1), values], axis=1)
dataframe.columns = ['t-1', 't+1']
Next, the dataset is split into training and test sets. A total of 66% of the data is kept for training and the remaining 34% is held for the test set. No training is required for the persistence model; this is just a standard test harness approach.
Once split, the train and test sets are separated into their input and output components.

# split into train and test sets
X = dataframe.values
train_size = int(len(X) * 0.66)
train, test = X[1:train_size], X[train_size:]
train_X, train_y = train[:,0], train[:,1]
test_X, test_y = test[:,0], test[:,1]
The persistence model is applied by predicting the output value (y) as a copy of the input value (x).

# persistence model
predictions = [x for x in test_X]
The residual errors are then calculated as the difference between the expected outcome (test_y) and the prediction (predictions).

# calculate residuals
residuals = [test_y[i]-predictions[i] for i in range(len(predictions))]
The example below puts this all together and gives us a set of residual forecast errors that we can explore in this tutorial.

from pandas import Series
from pandas import DataFrame
from pandas import concat
from matplotlib import pyplot
from sklearn.metrics import mean_squared_error
series = Series.from_csv('daily-total-female-births.csv', header=0)
# create lagged dataset
values = DataFrame(series.values)
dataframe = concat([values.shift(1), values], axis=1)
dataframe.columns = ['t-1', 't+1']
# split into train and test sets
X = dataframe.values
train_size = int(len(X) * 0.66)
train, test = X[1:train_size], X[train_size:]
train_X, train_y = train[:,0], train[:,1]
test_X, test_y = test[:,0], test[:,1]
# persistence model
predictions = [x for x in test_X]
# skill of the persistence model
mse = mean_squared_error(test_y, predictions)
print('Test MSE: %.3f' % mse)
# calculate residuals
residuals = [test_y[i]-predictions[i] for i in range(len(predictions))]
residuals = DataFrame(residuals)
print(residuals.head())
# plot the residual error time series
residuals.plot()
pyplot.show()
Running the example prints the mean squared error of the persistence predictions, which is 83.744.
The example then prints the first 5 rows of the forecast residual errors.

Test MSE: 83.744
      0
0   9.0
1  10.0
2   3.0
3   6.0
4  30.0
Finally, the residual time series is plotted.
We now have a residual error time series that we can model.
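Before modeling, it is worth confirming that the residual series actually contains temporal structure worth capturing. A quick sketch (an addition to the steps above) is to draw its autocorrelation plot:

from pandas.plotting import autocorrelation_plot
from matplotlib import pyplot
# residuals is the DataFrame of persistence forecast errors created above
autocorrelation_plot(residuals[0])
pyplot.show()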
Autoregression of Residual Error
We can model the residual error time series using an autoregression model.
This is a linear regression model that creates a weighted linear sum of lagged residual error terms. For example:

error(t+1) = b0 + b1*error(t-1) + b2*error(t-2) + ... + bn*error(t-n)
We can use the autoregression model (AR) provided by the statsmodels library.
Building on the persistence model in the previous section, we can first train the model on the residual errors calculated on the training dataset. This requires that we make persistence predictions for each observation in the training dataset, then create the AR model, as follows.

# persistence model on training set
train_pred = [x for x in train_X]
# calculate residuals
train_resid = [train_y[i]-train_pred[i] for i in range(len(train_pred))]
# model the training set residuals
model = AR(train_resid)
model_fit = model.fit()
window = model_fit.k_ar
coef = model_fit.params
print('Lag=%d, Coef=%s' % (window, coef))
Running this piece prints the chosen lag of 15 and the 16 coefficients (intercept and one for each lag) of the trained linear regression model.

Lag=15, Coef=[ 0.10120699 -0.84940615 -0.77783609 -0.73345006 -0.68902061
 -0.59270551 -0.5376728  -0.42553356 -0.24861246 -0.19972102 -0.15954013
 -0.11045476 -0.14045572 -0.13299964 -0.12515801 -0.03615774]
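If you are running a recent statsmodels release, note that the AR class has since been removed in favour of AutoReg (with ar_select_order available for choosing the lag). A roughly equivalent fit might look like the following sketch; AutoReg does not select the lag automatically, so the lag of 15 is carried over from the result above.

from statsmodels.tsa.ar_model import AutoReg
# fit an order-15 autoregression on the training residuals
model = AutoReg(train_resid, lags=15)
model_fit = model.fit()
window = 15
coef = model_fit.params
print('Lag=%d, Coef=%s' % (window, coef))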
Next, we can step through the test dataset and for each time step we must:
 Calculate the persistence prediction (t+1 = t-1).
 Predict the residual error using the autoregression model.
The autoregression model requires the residual error of the 15 previous time steps. Therefore, we must keep these values handy.
As we step through the test dataset time step by time step, making a prediction and estimating its error, we also calculate the actual residual error and update the lag values (history) of the residual error time series so that the error at the next time step can be predicted.
This is a walk-forward, or rolling, forecast model.
We end up with a time series of the residual forecast error from the train dataset and a predicted residual error on the test dataset.
We can plot these and get a quick idea of how skillful the model is at predicting residual error. The complete example is listed below.

from pandas import Series
from pandas import DataFrame
from pandas import concat
from statsmodels.tsa.ar_model import AR
from matplotlib import pyplot
series = Series.from_csv('daily-total-female-births.csv', header=0)
# create lagged dataset
values = DataFrame(series.values)
dataframe = concat([values.shift(1), values], axis=1)
dataframe.columns = ['t-1', 't+1']
# split into train and test sets
X = dataframe.values
train_size = int(len(X) * 0.66)
train, test = X[1:train_size], X[train_size:]
train_X, train_y = train[:,0], train[:,1]
test_X, test_y = test[:,0], test[:,1]
# persistence model on training set
train_pred = [x for x in train_X]
# calculate residuals
train_resid = [train_y[i]-train_pred[i] for i in range(len(train_pred))]
# model the training set residuals
model = AR(train_resid)
model_fit = model.fit()
window = model_fit.k_ar
coef = model_fit.params
# walk forward over time steps in test
history = train_resid[len(train_resid)-window:]
history = [history[i] for i in range(len(history))]
predictions = list()
expected_error = list()
for t in range(len(test_y)):
    # persistence
    yhat = test_X[t]
    error = test_y[t] - yhat
    expected_error.append(error)
    # predict error
    length = len(history)
    lag = [history[i] for i in range(length-window,length)]
    pred_error = coef[0]
    for d in range(window):
        pred_error += coef[d+1] * lag[window-d-1]
    predictions.append(pred_error)
    history.append(error)
    print('predicted error=%f, expected error=%f' % (pred_error, error))
# plot predicted error
pyplot.plot(expected_error)
pyplot.plot(predictions, color='red')
pyplot.show()
Running the example first prints the predicted and expected residual error for each time step in the test dataset.

...
predicted error=-1.951332, expected error=-10.000000
predicted error=6.675538, expected error=3.000000
predicted error=3.419129, expected error=15.000000
predicted error=-7.160046, expected error=-4.000000
predicted error=-4.179003, expected error=7.000000
predicted error=-10.425124, expected error=-5.000000
Next, the actual residual error for the time series is plotted (blue) compared to the predicted residual error (red).
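If you would like a single score for how well the residual model tracks the actual errors (the original example only plots them), you could compare the two series directly, for instance with mean squared error:

from sklearn.metrics import mean_squared_error
# expected_error and predictions were collected in the walk-forward loop above
error_mse = mean_squared_error(expected_error, predictions)
print('Residual model MSE: %.3f' % error_mse)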
Now that we know how to model residual error, next we will look at how we can go about correcting forecasts and improving model skill.
Correct Predictions with a Model of Residual Errors
A model of forecast residual error is interesting in its own right, but it can also be used to make better predictions.
With a good estimate of forecast error at a time step, we can make better predictions.
For example, we can add the expected forecast error to a prediction to correct it and in turn improve the skill of the model.

improved forecast = forecast + estimated error 
Let’s make this concrete with an example.
Let’s say that the expected value for a time step is 10. The model predicts 8 and estimates the error to be 3. The improved forecast would be:

improved forecast = forecast + estimated error
improved forecast = 8 + 3
improved forecast = 11
This takes the actual forecast error from 2 units to 1 unit.
We can update the example from the previous section to add the estimated forecast error to the persistence forecast as follows:

# correct the prediction
yhat = yhat + pred_error
The complete example is listed below.

from pandas import Series
from pandas import DataFrame
from pandas import concat
from statsmodels.tsa.ar_model import AR
from matplotlib import pyplot
from sklearn.metrics import mean_squared_error
series = Series.from_csv('daily-total-female-births.csv', header=0)
# create lagged dataset
values = DataFrame(series.values)
dataframe = concat([values.shift(1), values], axis=1)
dataframe.columns = ['t-1', 't+1']
# split into train and test sets
X = dataframe.values
train_size = int(len(X) * 0.66)
train, test = X[1:train_size], X[train_size:]
train_X, train_y = train[:,0], train[:,1]
test_X, test_y = test[:,0], test[:,1]
# persistence model on training set
train_pred = [x for x in train_X]
# calculate residuals
train_resid = [train_y[i]-train_pred[i] for i in range(len(train_pred))]
# model the training set residuals
model = AR(train_resid)
model_fit = model.fit()
window = model_fit.k_ar
coef = model_fit.params
# walk forward over time steps in test
history = train_resid[len(train_resid)-window:]
history = [history[i] for i in range(len(history))]
predictions = list()
for t in range(len(test_y)):
    # persistence
    yhat = test_X[t]
    error = test_y[t] - yhat
    # predict error
    length = len(history)
    lag = [history[i] for i in range(length-window,length)]
    pred_error = coef[0]
    for d in range(window):
        pred_error += coef[d+1] * lag[window-d-1]
    # correct the prediction
    yhat = yhat + pred_error
    predictions.append(yhat)
    history.append(error)
    print('predicted=%f, expected=%f' % (yhat, test_y[t]))
# error
mse = mean_squared_error(test_y, predictions)
print('Test MSE: %.3f' % mse)
# plot corrected forecasts
pyplot.plot(test_y)
pyplot.plot(predictions, color='red')
pyplot.show()
Running the example prints the predictions and the expected outcome for each time step in the test dataset.
The mean squared error of the corrected forecasts is calculated to be 56.234, which is much better than the score of 83.744 for the persistence model alone.

...
predicted=40.675538, expected=37.000000
predicted=40.419129, expected=52.000000
predicted=44.839954, expected=48.000000
predicted=43.820997, expected=55.000000
predicted=44.574876, expected=50.000000
Test MSE: 56.234
Finally, the expected values for the test dataset are plotted (blue) compared to the corrected forecast (red).
We can see that the persistence model has been aggressively corrected back to a time series that looks something like a moving average.
Summary
In this tutorial, you discovered how to model residual error time series and use it to correct predictions with Python.
Specifically, you learned:
 About the Moving Average (MA) approach of developing an autoregressive model of the residual error.
 How to develop and evaluate a model of residual error to predict forecast error.
 How to use the predictions of forecast error to correct predictions and improve model skill.
Do you have any questions about Moving Average models, or about this tutorial?
Ask your questions in the comments below and I will do my best to answer.