Probabilistic Long-Range Forecasts

The probabilistic forecasts provide users with additional information that is not contained in the deterministic forecasts. This product gives estimates of the probability that the seasonal mean will be above, near or below normal. For each category, the probability is obtained by counting the number of ensemble members that predict a seasonal mean in that category and dividing by the total number of ensemble members (see "How are the probabilistic forecasts produced?" below). Note that the probabilistic forecasts are only available for the 0-3 month (zero lead-time) period, since an ensemble of runs is required to estimate the probabilities. The forecasts for the 3-6, 6-9 and 9-12 month periods are produced with a statistical method and do not use the ensemble forecast technique.

What do the probabilistic forecast maps represent?

The forecast maps are composed of three panels, one for each category: above normal, near normal and below normal. On the temperature maps, the colors range from yellow to red for above normal, grey to purple for near normal, and light to dark blue for below normal. On the precipitation maps, the colors range from green to blue for above normal, grey to purple for near normal, and yellow to dark brown for below normal. The months for which the forecast is valid are indicated at the bottom of each panel, and the date of issue is shown in the top right corner. The scale on the right side of the maps indicates the percentage of the 12 members that predict the category in question.

How are the deterministic and probabilistic forecasts related?

The deterministic and probabilistic forecasts are two different ways of presenting the forecast information. The deterministic forecasts show the predicted forecast category (above, near or below normal) resulting from the AVERAGE of the 12 model runs. The historical percent correct maps attached to the deterministic maps give an indication of the skill of the prediction system, based on verification of the forecasts over a number of years (typically 30). This is useful, but unfortunately the skill maps do not provide information on the confidence that can be attributed to the specific current forecast.

This is where the probabilistic forecasts can add important information to the deterministic forecasts, by giving an indication of how specific the forecast is. For example, a deterministic forecast of above normal conditions accompanied by probabilities of 45%, 30% and 25% for the above normal, near normal and below normal categories is less clear than one accompanied by probabilities of 60%, 25% and 15% respectively. In the latter case, the forecast is that there are better than even odds of above normal conditions, together with relatively small odds of below normal conditions. Similarly, a forecast of near normal conditions might be accompanied by probabilities of 30%, 40% and 30%, in which case the forecast is not very specific, or by probabilities of 15%, 70% and 15%, in which case the forecast of near normal conditions is much more specific.
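As an illustration only, and not part of the forecast products, the short sketch below compares the two "above normal" forecasts discussed above. It assumes a climatological probability of one third for each category and uses the margin of the leading category over that value as a rough, hypothetical measure of specificity; the names used are illustrative.

```python
# Illustrative sketch only: comparing the specificity of two probabilistic
# forecasts, assuming a climatological probability of 1/3 per category.

CLIMATOLOGY = 1.0 / 3.0

def specificity(probs):
    """Return the leading category and its margin over climatology."""
    category = max(probs, key=probs.get)
    return category, probs[category] - CLIMATOLOGY

# The two example forecasts discussed in the text above.
forecast_a = {"above": 0.45, "near": 0.30, "below": 0.25}
forecast_b = {"above": 0.60, "near": 0.25, "below": 0.15}

for name, probs in (("A", forecast_a), ("B", forecast_b)):
    category, margin = specificity(probs)
    print(f"Forecast {name}: {category} normal leads by {margin:+.2f} over climatology")

# Forecast B leads by a larger margin, i.e. it is the more specific forecast.
```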

It must be noted that while the deterministic forecast skill maps are based on the verification of the 3 categories, many studies of seasonal forecast systems developed around the world show that the near normal category is always less well predicted than the above and below normal categories (Van den Dool and Toth, 1991; Gagnon et al., 2000; Gagnon and Verret, 2000, 2001; Kharin and Zwiers, 2003). The main reason for this is that the above and below normal categories are open ended: they are constrained on only one side (i.e. by the near normal category). Thus, a forecast of above normal will be correct whether the observed conditions are slightly or much above normal, and the same applies to the below normal category. The near normal category, on the other hand, is constrained on both sides, so only a comparatively small range of observed values allows it to be correct. Therefore, less confidence should be placed in near normal forecasts than in above or below normal forecasts, irrespective of what the probabilistic forecasts show.

It should also be mentioned that the probabilistic forecasts are not calibrated. Please see the calibration section.

How are the probabilistic forecasts produced?

The current season 1 (0-3 month) ensemble consists of 12 model runs: 6 runs from the GEM model and 6 runs from the CCCma GCM2 model. The forecast probabilities are calculated by counting the number of individual members in each of the three categories at every location and dividing by the ensemble size. The resulting probabilities are then assigned to bins with a resolution of 10% (10 bins of 10%). For example, if at one location 8 members predict ABOVE normal, 3 members near NORMAL and 1 member BELOW normal, the forecast probabilities will be 66.7% ABOVE, 25% near NORMAL and 8.3% BELOW, respectively.
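As a minimal sketch of this counting step (the member category assignments below are hypothetical, not actual model output):

```python
# Minimal sketch of the counting step described above, using hypothetical
# category assignments for the 12 members at one location
# ("A" = above normal, "N" = near normal, "B" = below normal).

from collections import Counter

members = ["A", "A", "A", "A", "A", "A", "A", "A", "N", "N", "N", "B"]

counts = Counter(members)
ensemble_size = len(members)

for category in ("A", "N", "B"):
    probability = 100.0 * counts[category] / ensemble_size
    print(f"{category}: {counts[category]} of {ensemble_size} members -> {probability:.1f}%")

# Output:
# A: 8 of 12 members -> 66.7%
# N: 3 of 12 members -> 25.0%
# B: 1 of 12 members -> 8.3%
```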

Number of members   Probability   Bin
        0               0.0%      0-9%
        1               8.3%      0-9%
        2              16.7%      10-19%
        3              25.0%      20-29%
        4              33.3%      30-39%
        5              41.7%      40-49%
        6              50.0%      50-59%
        7              58.3%      50-59%
        8              66.7%      60-69%
        9              75.0%      70-79%
       10              83.3%      80-89%
       11              91.7%      90-100%
       12             100.0%      90-100%

The configuration of the ensemble used to calculate the probabilities is described in the table above. Note that since there are 13 possible member counts (0 to 12) but only 10 bins, some bins correspond to two member counts (0 and 1, 6 and 7, 11 and 12). Thus, the 0-9%, 50-59% and 90-100% bins are somewhat artificially more likely to occur.
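A minimal sketch of the mapping from member count to probability bin is given below; the bin boundaries follow the table above, while the function name and the exact rounding rule are illustrative assumptions.

```python
# Minimal sketch of the mapping from member count to probability bin for a
# 12-member ensemble, following the table above. The function name and the
# rounding rule are illustrative assumptions.

def probability_bin(n_members, ensemble_size=12):
    """Return (probability in %, bin label) for a given member count."""
    probability = 100.0 * n_members / ensemble_size
    # Bins are 0-9%, 10-19%, ..., 80-89%, 90-100%.
    lower = min(int(probability // 10) * 10, 90)
    upper = 100 if lower == 90 else lower + 9
    return probability, f"{lower}-{upper}%"

for n in range(13):
    probability, label = probability_bin(n)
    print(f"{n:2d} members: {probability:5.1f}% -> bin {label}")
```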

Definition of the categories

The probabilistic forecasts are categorized as below normal, near normal and above normal. The definition of these 3 categories is the same as for the deterministic forecasts.

How to use the maps?

  1. Look at the deterministic forecast and skill map for the temperature and precipitation anomalies to determine the forecast category (above, near or below normal). This is what you would use if you had to summarize the forecast in one word.
  2. Look at the probabilistic forecast maps for each of the 3 categories in the area of interest.
  3. Compare the color on the probabilistic maps with the scale on the right side. The value you obtain is an estimate of the probability of occurrence for each category. As a rough guide, a higher probability corresponds to a higher confidence in the forecast (see the examples below). Reading the calibration section is recommended for additional information on how to calibrate the probabilities.

Note that the surface air temperature forecast is a prediction of the anomaly of the mean daily temperature at 2 meters (i.e. at standard Stevenson screen observation height). It is not a forecast of the maximum or minimum daily temperature. For more information on what is predicted by Environment Canada seasonal forecasts, please read the frequently asked questions page.

Examples

  1. Assume that the DETERMINISTIC forecast is above normal and the PROBABILISTIC maps show 80 to 89% for above normal, 10 to 19% for near normal and 0 to 9% for below normal. Based on this, one would conclude that the probability that temperatures will be above normal is high (80 to 89%) and that the forecast is very specific. One might also say that the probability of below normal temperatures is low.
  2. Now, suppose that the DETERMINISTIC forecast is above normal, and that the PROBABILISTIC maps show 50 to 59% for above normal, 20 to 29% for near normal and 20 to 29% for below normal. This means that only about half of the model runs (6 or 7 out of 12) were actually above normal. Based on this, one would conclude that even if the deterministic forecast is for above normal temperatures, there is also a good chance that conditions could be near normal or below normal. The odds of occurrence of above normal conditions are certainly lower than in the previous example.
  3. Suppose that the DETERMINISTIC forecast is near normal and that the PROBABILISTIC maps show 20 to 29% for above normal, 40 to 49% for near normal and 40 to 49% for below normal. This is the least specific of the three examples. One would conclude that there is a good chance that either near normal or below normal conditions will occur, and that above normal temperatures are comparatively unlikely.

References

  • Gagnon, N. and R. Verret, 2001: Probabilistic Approach to Seasonal Forecasting. Proceedings of the Long-Range Weather and Crop Forecasting Work Group Meeting IV, Regina, Saskatchewan, March 5-6, 2001, 13-18.
  • Gagnon, N. and R. Verret, 2000: Probabilistic Approach to Seasonal Forecasting at the Canadian Meteorological Centre. Proceedings of the Twenty-Fifth Annual Climate Diagnostics and Prediction Workshop, Palisades, New York, October 23-27, 2000, 169-172.
  • Gagnon, N., R. Verret, A. Plante, L. Lefaivre and G. Richard, 2000: Long-Range Forecasts Verification. Preprints, 15th Conference on Probability and Statistics in the Atmospheric Sciences, AMS, Asheville, North Carolina, May 2000, 65-68.
  • Kharin, V. V. and F. W. Zwiers, 2003: Improved seasonal probability forecasts. Journal of Climate, 16, 1684-1701.
  • Van den Dool, H. M. and Z. Toth, 1991: Why do forecasts for "near normal" often fail? Weather and Forecasting, 6, 76-85.



