Basic Notation
The prediction errors are defined with a reference $i$ to the information set available at the time the forecast was made:

$$e_{t|t-i} = y_t - \hat{y}_{t|t-i}, \qquad \hat{y}_{t|t-i} = E\left[y_t \mid \Omega_{t-i}\right],$$

where $\Omega_{t-i}$ need not only include lags of $y_t$. In practice, the information actually used may be a small subset of $\Omega_{t-i}$.
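The definition above can be sketched in plain Java. This is a minimal illustration, not the JDemetra+ API; the class and method names are hypothetical.

```java
import java.util.Arrays;

// Hypothetical sketch (not the JDemetra+ API): prediction errors
// e_t = y_t - yhat_t computed from the actuals and their forecasts.
public final class ForecastErrors {

    // Returns the prediction errors e[t] = y[t] - yhat[t].
    public static double[] errors(double[] y, double[] yhat) {
        double[] e = new double[y.length];
        for (int t = 0; t < y.length; ++t) {
            e[t] = y[t] - yhat[t];
        }
        return e;
    }

    public static void main(String[] args) {
        double[] y    = {1.0, 1.2, 0.9, 1.1};
        double[] yhat = {0.9, 1.1, 1.0, 1.0};
        System.out.println(Arrays.toString(errors(y, yhat)));
    }
}
```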
The properties of these forecast errors can be assessed in isolation or relative to a benchmark, whose errors we will denote by $e^{b}_{t|t-i}$. The benchmark may be a naive forecast, e.g. a random walk, in which case $\hat{y}^{b}_{t|t-i}$ would be equal to $y_{t-i}$. However, the benchmark could also be a prediction regularly published by a forecasting institute or by market analysts (e.g. Bloomberg), which is not necessarily model-based. In that case, $\hat{y}^{b}_{t|t-i}$ would be produced by methods, and based on a subset of $\Omega_{t-i}$, that are unknown to us.
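The random-walk benchmark mentioned above is simple enough to sketch directly: its forecast of $y_t$ made at $t-1$ is the last observation $y_{t-1}$, so its errors are the first differences of the series. The class name below is hypothetical and this is not JDemetra+ code.

```java
import java.util.Arrays;

// Hypothetical sketch: errors of the naive random-walk benchmark, whose
// forecast of y_t made at time t-1 is simply the last observation y_{t-1}.
public final class RandomWalkBenchmark {

    // Returns eb[t-1] = y[t] - y[t-1] for t = 1..n-1.
    public static double[] errors(double[] y) {
        double[] e = new double[y.length - 1];
        for (int t = 1; t < y.length; ++t) {
            e[t - 1] = y[t] - y[t - 1];
        }
        return e;
    }

    public static void main(String[] args) {
        double[] y = {1.0, 1.5, 1.3, 1.8};
        System.out.println(Arrays.toString(errors(y)));
    }
}
```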
For model-based forecasts, we use the following notation:

$$\hat{y}_{t|t-i}(\theta) = E_{\theta}\left[y_t \mid \Omega_{t-i}\right],$$

to highlight the fact that they are based on model-consistent expectations given by the parameter vector $\theta$.
In forecast comparisons involving competing forecasts based on the same information set, the subindex $i$ is dropped because it plays no role. One can test the following hypotheses involving the forecast errors $e_t$ and the benchmark errors $e^{b}_t$:
Test | Null Hypothesis | JDemetra+ class extending `AccuracyTests`
---|---|---
Unbiasedness | $E[e_t] = 0$ | `BiasTest`
Autocorrelation | $E[e_t e_{t-1}] = 0$ | `EfficiencyTest`
Equality of squared errors | $E[e_t^2 - (e^{b}_t)^2] = 0$ | `DieboldMarianoTest`
Forecast encompassing | $\lambda = 0$ in $y_t = (1-\lambda)\hat{y}_t + \lambda \hat{y}^{b}_t + \varepsilon_t$ | `EncompassingTest`
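To make the equality-of-squared-errors test concrete, the following is a simplified sketch of a Diebold-Mariano statistic for one-step-ahead forecasts under squared-error loss, without the autocorrelation correction used for multi-step horizons. It is an illustration under those assumptions, not the JDemetra+ `DieboldMarianoTest` implementation, and the class name is hypothetical.

```java
// Hypothetical sketch, not the JDemetra+ DieboldMarianoTest: a simplified
// Diebold-Mariano statistic for one-step-ahead forecasts with squared-error
// loss and no long-run variance correction (h = 1).
public final class SimpleDieboldMariano {

    // e: errors of the forecast under study; eb: errors of the benchmark.
    public static double statistic(double[] e, double[] eb) {
        int n = e.length;
        double[] d = new double[n];
        double mean = 0.0;
        for (int t = 0; t < n; ++t) {
            // Loss differential under squared-error loss.
            d[t] = e[t] * e[t] - eb[t] * eb[t];
            mean += d[t];
        }
        mean /= n;
        double var = 0.0;
        for (int t = 0; t < n; ++t) {
            var += (d[t] - mean) * (d[t] - mean);
        }
        var /= n;
        // Under the null of equal expected squared errors, the statistic
        // is asymptotically standard normal.
        return mean / Math.sqrt(var / n);
    }

    public static void main(String[] args) {
        double[] e  = {0.5, -0.4, 0.6, -0.3};
        double[] eb = {0.7, -0.6, 0.8, -0.5};
        System.out.println(statistic(e, eb));
    }
}
```

A statistic far from zero in absolute value indicates that one set of forecasts has significantly larger squared errors than the other.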
The subsequent pages describe the implementation details of the various tests within JDemetra+ and provide examples of how to construct them.