Posted at 11:35 AM ET, 08/19/2010

Forecasting hurricanes: Part 1

By Steve Tracton

How good (or bad) are today's forecasts?


This is the first of a two-part series on hurricane forecasting. Part 1 details how reliable forecasts are expected to be when (and if?) the 2010 tropical season finally picks up steam. Part 2, to appear next week, will look at ongoing research into the development of tropical systems and the prospects for improved forecasts.


Hurricane Fran, Sept. 5, 1996. Satellite image by NASA.

As reported in an earlier post by CWG's Greg Postel, the National Oceanic and Atmospheric Administration's latest update of its 2010 seasonal hurricane forecast calls for a significant chance that the remainder of the season will be very active, perhaps one of the most active on record. Whatever the number of tropical storms and hurricanes -- collectively referred to here as tropical cyclones (TC) -- most seasonal hurricane forecasts justifiably caution that, before a storm even exists, it's impossible to reliably forecast when and where an individual storm might develop, how strong it will become, or whether it will make landfall.

The specifics of genesis, strength, size and track of TCs fall in the domain of daily weather. As such, their predictability is limited, in both theory and practice, to somewhere between a few hours and a week or two at most (no one really knows for sure). These specifics, of course, are precisely the ones emergency managers require when deciding, for example, if and when to order evacuations, and they ultimately determine the impact of a TC on lives and property.

So, what are the current capabilities and limitations in the accuracy and utility of TC forecasts?

The foundation of the TC prediction and warning process is the track forecast. If the track forecast is off, so too will be predictions of other parameters, such as wind speed and direction, rainfall, and storm surge relative to landfall. This would be true even if it were possible to accurately forecast the distribution of winds, precipitation, etc., relative to the storm center. In reality, however, this is not possible (discussed below), which compounds the significance of track errors and further undermines the value of all aspects of the TC forecast.

The good news is that track forecasts -- on average -- have steadily improved. As seen in the chart below, the 72-hour average track error for the Atlantic Basin decreased between the 1970s and 2000s from about 385 nautical miles to 155 nautical miles. Following the lower dotted horizontal line from right to left shows that three-day forecasts are as skillful now as 48-hour forecasts were during the 1990s, and more accurate than 36-hour forecasts of the 1980s. Forecasts to five days out did not begin until 2001. Following the upper dotted horizontal line from right to left shows that the current decade's five-day track error (about 265 nautical miles) is somewhat less than that of the 2.5-day forecasts of the 1980s.
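
For the technically inclined, a track error of this sort is essentially the great-circle distance between the forecast and verified positions of the storm center, averaged over many forecasts. A rough sketch of that calculation, using entirely hypothetical positions:

```python
import math

def great_circle_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3440.065 * math.asin(math.sqrt(a))  # mean Earth radius ~3440 nm

# Entirely hypothetical 72-hour forecast vs. verified positions (lat, lon)
# for three separate forecasts of the same storm.
forecast = [(28.0, -80.0), (30.5, -78.0), (33.0, -75.5)]
verified = [(27.2, -81.5), (31.8, -76.5), (34.5, -73.0)]

errors = [great_circle_nm(f[0], f[1], v[0], v[1])
          for f, v in zip(forecast, verified)]
print("72-hour track errors (nm):", [round(e) for e in errors])
print("Average track error (nm):", round(sum(errors) / len(errors)))
```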


This overall improvement in prediction of TC tracks is an impressive accomplishment attributable to advances in observational systems and weather prediction models and, just as important, forecasters' improving ability to balance the capabilities and limitations of observations and models in the process of formulating official forecasts.

While these improvements in the average skill of track forecasts indisputably reflect major progress, they can hardly be considered good enough to support a high level of confidence in what, if any, action should be taken as a TC moves toward possible landfall. Moreover, the average errors obscure the fact that there is considerable variability in track forecast errors within a season. To account for this, National Hurricane Center (NHC) forecasts are displayed in the form of a "cone of uncertainty."

The width of the cone is statistically based and set so that it encompasses two-thirds of the track errors over the most recent five-year period. To put it another way, there is a 33.3% chance the actual track will fall outside the cone. For the 2010 tropical season, the width of the cone at days one, three and five is 124, 322 and 570 nautical miles, respectively. Needless to say, especially beyond a couple of days ahead, the inability to be more precise in the forecast track generally requires the warned stretch of coastline to be considerably larger than the area ultimately affected. This often results in costly "over warning" (e.g., unnecessary evacuations) given the priority of safeguarding lives and property. (Informally and unofficially, this "better-safe-than-sorry" approach in the official track forecasts is frequently referred to as the "path of least regret.")
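
Returning to how the cone is built: its radius at each lead time is just a percentile of recent official track errors. A toy illustration (the error sample below is invented; the real calculation uses five years of verified official forecasts):

```python
import numpy as np

# Invented sample of track errors (nautical miles) at one lead time,
# standing in for five seasons of verified official forecasts.
errors_nm = np.array([45, 80, 120, 60, 150, 95, 200, 70, 110,
                      130, 55, 165, 90, 140, 75])

# The cone radius is set so that about two-thirds of past errors fall
# inside it, i.e. roughly the 67th percentile of the error distribution.
cone_radius_nm = np.percentile(errors_nm, 200 / 3)
print(f"Cone radius at this lead time: {cone_radius_nm:.0f} nm")
# By construction, roughly one forecast in three still verifies outside it.
```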

Note: While some storms are intrinsically more or less predictable than others, the cone of uncertainty's fixed width from storm to storm precludes it from communicating variability in the degree of uncertainty from one storm to the next. However, advances in ensemble prediction systems -- which produce an array of possible storm tracks by running the same model multiple times, each time with slightly different initial conditions -- will likely be factored into operational forecasts in the relatively near future (more on this in part 2 of this series).
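
As a crude illustration of the ensemble idea (a toy model, not any operational system), one can rerun the same simple steering calculation many times with slightly perturbed starting conditions and look at the scatter of the resulting positions:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_track(start, steering, hours, dt=6):
    """Advect a storm center (lat, lon in degrees) with a fixed steering
    velocity (degrees per hour) -- a stand-in for a real forecast model."""
    pos = np.array(start, dtype=float)
    for _ in range(0, hours, dt):
        pos = pos + np.array(steering) * dt
    return pos

# Run the same toy model 20 times, each with slightly perturbed initial
# position and steering flow, mimicking uncertainty in the analysis.
finals = []
for _ in range(20):
    start = np.array([25.0, -75.0]) + rng.normal(0, 0.3, size=2)
    steering = np.array([0.05, -0.08]) + rng.normal(0, 0.01, size=2)
    finals.append(toy_track(start, steering, hours=120))

finals = np.array(finals)
print("Ensemble-mean 120-hour position:", finals.mean(axis=0).round(2))
print("Spread (std dev, degrees lat/lon):", finals.std(axis=0).round(2))
```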

While there have been notable improvements in predicting TC tracks, little progress has been made over the years in reducing the mean errors of intensity (wind speed) forecasts. For example, the average error of 72-hour intensity predictions during the 2000s was not significantly different from that of the 1990s (21 mph). Most importantly, the average errors do not reflect the much larger errors that occur when storms undergo rapid changes in intensity -- winds increasing or decreasing by more than 35 mph in 24 hours (approximately a two-category change on the Saffir-Simpson scale).
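
For concreteness, that 35-mph-in-24-hours yardstick is easy to state in code. A minimal sketch, assuming 6-hourly advisories and an invented wind history:

```python
def rapid_intensity_changes(winds_mph, hours_per_step=6, threshold_mph=35):
    """Flag points where winds rose or fell by more than threshold_mph
    over the preceding 24 hours."""
    steps_per_day = 24 // hours_per_step
    flags = []
    for i in range(steps_per_day, len(winds_mph)):
        change = winds_mph[i] - winds_mph[i - steps_per_day]
        if abs(change) > threshold_mph:
            flags.append((i, change))
    return flags

# Invented 6-hourly maximum sustained winds (mph) for a storm that
# rapidly intensifies and then weakens.
winds = [70, 75, 85, 100, 115, 130, 145, 150, 140, 125, 115]
print(rapid_intensity_changes(winds))
# -> [(4, 45), (5, 55), (6, 60), (7, 50)]  (advisory index, 24-hour change)
```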

With virtually no ability to anticipate rapid changes in strength more than perhaps a few hours in advance, a run-of-the-mill TC approaching the coast could become a monster with virtually no warning and thereby put coastal communities at grave risk. Or the imminent threat of a monster storm, prompting massive evacuations, could turn out to be nothing more than a relatively minor annoyance (except for the outcry from citizens unnecessarily stuck in massive traffic jams heading away from the coast -- as they contemplate throwing brickbats and worse at forecasters).

By way of example, in October 1995 Hurricane Opal underwent both unexpected rapid intensification and unexpected weakening over the Gulf of Mexico as it approached the Florida Panhandle. Between Oct. 3 and 4, Opal intensified from a marginal hurricane (75 mph winds) to a 150 mph Category 4 major threat. But within just a few hours of landfall, maximum winds fortunately decreased to a more moderate, though still dangerous, 115 mph. (Aside: After landfall, Opal continued northeastward and spawned three tornadoes locally in Charles, Prince George's and Anne Arundel counties. I was an eyewitness to, and nearly a victim of, the Prince George's tornado as it lifted my car about a foot off the ground before jolting it back down.)

It's important to recognize that the category of a storm describes only the maximum wind speed (about 30 feet above the ocean surface) anywhere within the TC. It conveys no information on the size and structure of a storm, such as the radial extent and distribution of hurricane-force winds, nor on precipitation and storm surge relative to the center of the TC. Suffice it to say, accurate and reliable predictions of these aspects of a TC are currently beyond the capabilities of operational models and forecasters. However, to convey at least some measure of the uncertainty in forecasts of intensity and other critical parameters (such as those just mentioned), NHC now provides probabilities derived from the historical record of forecast errors.
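
To make the earlier point about categories concrete: a storm's category is a function of one number, the maximum sustained wind. A minimal sketch, using the commonly cited mph thresholds (exact cutoffs vary slightly by source):

```python
def saffir_simpson_category(max_wind_mph):
    """Category from maximum sustained wind only; it says nothing about
    storm size, rainfall or surge. Thresholds in mph are approximate."""
    for category, lower_bound in ((5, 157), (4, 130), (3, 111), (2, 96), (1, 74)):
        if max_wind_mph >= lower_bound:
            return category
    return 0  # below hurricane strength

print(saffir_simpson_category(150))  # Opal near peak intensity -> 4
print(saffir_simpson_category(115))  # Opal at landfall -> 3
```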

Obviously, much improvement is necessary in TC predictions to ensure timely issuance of TC watches and warnings and, thereby, reduce to the extent possible the loss of life and injury, as well as property losses and economic disruption. Intensive research efforts are currently in progress, including three major programs this summer, to address this need. The essential aspects of the research and where it might lead -- to the "Holy Grail" of hurricane research? -- will be the subject of part 2, coming next week.

By Steve Tracton  | August 19, 2010; 11:35 AM ET
Categories:  Latest, Tracton, Tropical Weather  

Comments

Hurricane and tropical storm forecasting has improved dramatically since I started following the weather back in 1961 as a student in junior high school. Back then, they wouldn't list strike probabilities until 24 to 48 hours before landfall.

Recently, the biggest surprise has involved sudden near term intensifications such as Charley. Katrina was not much of a forecasting surprise, but the relative lack of vigorous post-storm response in New Orleans to a Category 2 impact was rather surprising.

Posted by: Bombo47jea | August 19, 2010 12:43 PM | Report abuse

I concur with Bombo. In 1960, Hurricane Donna was supposed to move into north Georgia after it hit SW Florida.

The next day, this powerful hurricane was moving up the east coast and at the exact time the NHC said the eye was centered east of Charleston, SC, the eye was passing right over Wilmington, NC where I lived. Donna had a huge eye; it took an hour for it to move over Wilmington. But the eye wasn't THAT big!

TIROS and subsequent technologies have made a huge difference in hurricane tracking and forecasting.

Posted by: JerryFloyd1 | August 19, 2010 2:32 PM | Report abuse

A bit more about Donna (and was it really 50 years ago... jeez I really was a young weather geek!). Donna made landfall twice in Florida, in NC, and then again on Long Island and New England.

According to the Hurricane Donna Wiki entry, it's the only storm ever to produce hurricane force winds in every state along the eastern seaboard of the U.S.

The storm nearly destroyed the then fishing village of Naples, Florida, and at Camp LeJeune, NC, about 40 miles NE of where I was living, winds of 125 knots (144 mph) were recorded. We didn't go to a shelter (shelter?? what were they in those days?) and it was probably the scariest night I've ever experienced as the wooden structure we were living in rocked back and forth.

And talk about weather extremes? That year, Jacksonville, NC (Camp LeJeune, where I was then living before the move to Wilmington, NC), had 13.4 inches of snow in three separate March snowfalls. Historic snows across North Carolina, followed by a very hot summer, and then Donna, all in 1960.

Posted by: JerryFloyd1 | August 19, 2010 3:11 PM | Report abuse

I find "the cone" to be particularly interesting in light of the different types of models. I can see how a single model could be run multiple times with slight variations in initial conditions (to match the reality of sparse measurements) and obviously different random fluctuations within the model run. From that we would derive a probability "cone".

But the fact is that different kinds of models (e.g. fluid dynamics versus finite element) produce different results, sometimes hugely different. I read about those in the hurricane forecast discussions all the time, how one model is showing slowing and another is speeding it up, one is intensifying, another is not, etc. A lot of that has to do with how well the model handles the rest of the weather including teleconnections from places far from the tropics.

The bottom line to me is there should sometimes be different cones based on the different models because the two different scenarios are both plausible. Often instead they choose one scenario and hand nudge it slightly towards the other one.

Posted by: eric654 | August 20, 2010 7:29 AM | Report abuse

eric654

An astute observation concerning the cone varying by model.

The description of an ensemble as simply running a given model with varying initial conditions is not complete. As you indicate, there are uncertainties related to model differences as well as uncertainties in initial conditions.

In fact, the optimum ensemble strategies are now focused on a multi-model approach. That is, ensemble members generated independently from different models are combined to produce an array of possibilities that accounts for model differences as well as uncertainties in initial conditions.
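
A toy illustration of what pooling members from two models does to the spread (invented numbers, not any operational system):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented 72-hour position forecasts (lat, lon) from two different models,
# each already run as its own perturbed-initial-condition ensemble.
model_a = np.array([31.0, -84.0]) + rng.normal(0, 0.8, size=(10, 2))
model_b = np.array([32.5, -82.0]) + rng.normal(0, 0.8, size=(10, 2))

# A multi-model ensemble simply pools the members, so its spread reflects
# model-to-model differences on top of initial-condition uncertainty.
pooled = np.vstack([model_a, model_b])

for name, members in (("Model A", model_a), ("Model B", model_b),
                      ("Multi-model", pooled)):
    print(name, "spread (deg):", members.std(axis=0).round(2))
```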

I'll have more to say on this in Part 2.

Posted by: SteveT-CapitalWeatherGang | August 20, 2010 9:51 AM | Report abuse

The US National Hurricane Center is predicting Katrina-sized storms this year. Here's a bit more on the 2010 forecast for Atlantic storms, with a nice graphic showing storm-related damage since 1900:
http://8020vision.com/2010/08/10/atlantic-hotter-than-before-katrina-boosting-storm-forecasts/

Posted by: jaykimball | August 24, 2010 2:38 AM | Report abuse
