Note: This post is rated "G", suitable for all audiences; does not contain the weather "s" word.
In the nearly 60 years since the modern era of numerical weather prediction began, improvements of many orders of magnitude have been made both in computer technology and in the formulation and solution of the equations describing the atmosphere. As a result, the accuracy of daily weather forecasts has improved enormously. As good as the models have become, however, it's important to understand what they can and cannot do.
First of all, not all variables are created equal: some of the parameters of interest can be predicted more accurately than others. Without putting specific values on them, here's a rough list of the main items in decreasing order of accuracy:
- Air pressure (highs and lows)
- Precipitation areal coverage
- Precipitation type (frozen or liquid)
Second, accuracy in general decays with time. The main reason is chaos theory: the atmosphere is one of those strange physical systems in which two nearly identical initial states can evolve into very different outcomes. Theoretical calculations in the 1960s indicated that the limit of predictability for the atmosphere is somewhere around two weeks, and that estimate has not changed significantly in the meantime. (Note that we are talking about forecasting for a specific time at a specific location, not averages. Averages are a much different story, which is why a climate model is not at all the same thing as running a weather model for an extremely long time. Climate is by definition a long-term average, so even though both kinds of model start from the same base, they address two distinct problems, and in some ways climate prediction is the easier task.)
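That sensitivity to initial conditions can be demonstrated with a few lines of code. This sketch uses the classic Lorenz (1963) three-variable system, the toy model that first suggested the atmosphere's predictability limit; the parameter values are the standard chaotic choices, and the tiny perturbation stands in for an imperfectly measured initial state. None of these numbers come from an operational weather model.

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations one forward-Euler step."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def separation(a, b):
    """Euclidean distance between two model states."""
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-6)   # the "same" measurement, with a tiny error
initial = separation(a, b)
for _ in range(4000):        # integrate for 20 model time units
    a, b = lorenz_step(a), lorenz_step(b)
final = separation(a, b)
print(initial, final)
```

Run it and the microscopic initial difference grows by several orders of magnitude, eventually reaching the size of the system's whole range of behavior; past that point the "forecast" carries no information about the true state.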
Besides the ultimate theoretical limit, there are practical limits, too. Even though the cell phone in your pocket is more powerful than the supercomputers of a generation or two ago (more on that in a future post), there are still limits on computer speed and data capacity. Since the basic equations of the atmosphere can't be solved analytically, they must be solved numerically, using approximations that require rapidly more computing time the more finely the atmosphere is represented. With whatever technology is available at a given time, it's always possible to make a more accurate forecast, but it does little good to make a forecast that takes six hours of computer time: by then a whole new set of observations is available, and it's time to start the cycle all over again.
So, given the limitations, what's reasonable to expect? The chart shows a measure of the accuracy (skill) of model forecasts made by the European Centre for Medium-Range Weather Forecasts (ECMWF). The x-axis is the year, from 1980 through 2004. The y-axis is the number of days for which the forecast has some usable skill. The dashed line is the average for each month, and the solid line is a 12-month moving average. Note that the limit of skill goes from around 5.5 days at the beginning to around 8 days at the end. Note also that the forecast is for the 500 mb pressure level of the atmosphere (around 5.5 km altitude, or about halfway up in terms of amount of air), which is, in general, at the top of the list of variables in terms of accuracy.
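The solid line on such a chart is simply a 12-month moving average, which smooths out the seasonal wiggles in the monthly values. Here's a minimal sketch of that smoother; the monthly skill numbers below are invented for illustration, not read off the ECMWF chart.

```python
def moving_average(values, window=12):
    """Trailing moving average; defined once `window` points exist."""
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

# Fourteen made-up months of "days of usable skill":
monthly_skill = [5.5, 5.8, 5.2, 6.0, 5.7, 5.9,
                 5.4, 6.1, 5.6, 5.8, 5.5, 6.2,
                 5.9, 6.0]
print(moving_average(monthly_skill))
```

Each smoothed point averages a full year of months, so winter-versus-summer differences cancel out and only the long-term trend in skill remains visible.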