Posted at 11:00 AM ET, 08/26/2010

Forecasting hurricanes: Part 2

By Steve Tracton

Can/will the accuracy and utility of forecasts improve?


This is the second of a two-part series on hurricane forecasting. Part 1 detailed how reliable forecasts are expected to be as the 2010 tropical season finally starts to pick up steam. Part 2 looks at ongoing research into the development of tropical systems and the prospects for improved forecasts.


Satellite image of Hurricane Danielle on Aug. 26, 2010. Credit: NOAA.

Five years after Katrina, there's good news, bad news and some ugly news in the world of hurricane forecasting.

The good news, as covered in Part 1 of this series, is the considerable improvement made over the past few decades in predicting the track of tropical cyclones (TCs).

The bad news is that the current accuracy of track forecasts is not good enough beyond a day or two (at most) to provide the reliable information necessary to, for example, confidently narrow watches and warnings to only those coastal regions likely to be affected.

The ugly news?

That virtually no progress has been made in forecasting TC intensity (maximum wind speed). Really ugly is that there is no skill in predicting rapid changes in intensity, or in predicting a storm's size and structure, which, along with its track, govern the distribution and radial extent of damaging winds and heaviest rainfall and help determine where along the coast the storm surge will be worst.

Four fundamental requirements are necessary (but not necessarily sufficient) for improving on current TC prediction capabilities:

1. Research directed toward increased knowledge and understanding of the environmental conditions and physical processes associated with TC genesis, evolution and motion.
2. Translating research findings into the best mix of observation platforms (e.g., aircraft, satellite and radar) for describing a storm and its environment (what's happening with the storm "now").
3. Incorporating the results of research on relevant physical processes into experimental forecast models (what will the storm do going forward).
4. Most important as far as society at large is concerned, transitioning experimental gains into the 24/7 operational world of modeling and forecast centers -- most notably the National Centers for Environmental Prediction, which includes the National Hurricane Center. Note: I can personally attest that the path from research to operations can be extremely problematic and is colloquially referred to as "crossing the valley of death."

Three major, collaborative and complementary research programs are currently underway to address the needs for improving TC prediction: NOAA's Intensity Forecast Experiment (IFEX), NASA's Genesis and Rapid Intensification Processes (GRIP), and the National Science Foundation's PRE-Depression Investigation of Cloud systems in the Tropics (PREDICT).

Collectively, the experiments involve flying specially equipped aircraft, including an unmanned Global Hawk, through, above and around tropical cyclones at different stages in their life cycle -- from formation and early organization, to peak intensity and subsequent landfall or decay over open water.

The intent is to collect unprecedented amounts of data during the nominal peak of the Atlantic and Gulf of Mexico hurricane season. More specifically, researchers will gather data on temperature, humidity, wind speed and direction, and characteristics of small particles, possibly including African dust, which provide the nuclei for condensation of atmospheric moisture. The latter is especially important since the process of condensation releases the heat energy required for genesis and survival of all TCs.
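
To get a sense of the scale of that heat source, here's a rough back-of-envelope calculation. The numbers are generic textbook approximations (not measurements from these field programs), chosen only to illustrate why condensation is the storm's fuel:

    # Back-of-envelope estimate of the latent heat released by condensation
    # over a tropical cyclone's rain area. All values are rough textbook
    # approximations used purely for illustration.

    L_V = 2.5e6          # latent heat of condensation of water, J/kg (approx.)
    RHO_WATER = 1000.0   # density of liquid water, kg/m^3

    rain_rate_m_per_s = 0.02 / 3600.0   # roughly 2 cm of rain per hour
    radius_m = 200e3                    # assume rain over a disk ~200 km in radius
    area_m2 = 3.14159 * radius_m ** 2

    # Mass of water condensed (and rained out) per second over that area
    mass_flux_kg_per_s = rain_rate_m_per_s * area_m2 * RHO_WATER

    # Power released as latent heat
    power_watts = mass_flux_kg_per_s * L_V
    print(f"Latent heat release: ~{power_watts:.1e} W")   # on the order of 10^15 W

With these ballpark numbers the heat released works out to roughly 10^15 watts -- hundreds of times the world's electrical generating capacity -- which is why a steady supply of moist air to condense is what keeps a TC alive.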

Among the objectives is to learn more about how and why TCs form and evolve. One of the crucial questions is under what situations and circumstances do some clusters of tropical thunderstorms grow and turn deadly (see figure below), while others simply fade away. Ed Zipser, an atmospheric scientist at the University of Utah, justifiably refers to improved understanding of hurricane formation and changes in intensity over a storm's life cycle as the 'holy grail' of hurricane science.


Infrared satellite images (credit: NRL) from 8 p.m. EDT, Aug. 30, 2007 (top-left), and 2 a.m. EDT, Aug. 31, 2007 (top-right), show the thunderstorm clusters that led to Hurricane Felix. Just two days later, Felix became a major hurricane (bottom-left; credit: NOAA) as it tracked west across the Caribbean Sea (bottom-middle; credit: Weather Underground) and soon caused extensive damage and loss of life as it made landfall along the Central America coast. Bottom-right photo (AFP: Oscar Navarrete) shows Nicaraguan village in ruins after Felix.

It's important to recognize that uncovering the secrets of the "holy grail" will contribute significantly to the knowledge base relevant to understanding the structure of TCs and the factors contributing to storm track.

These research programs are expected to yield a wealth of data and information to digest and incorporate into experimental forecast models. Importantly, this includes development of advanced ensemble prediction systems and strategies to improve estimates of the level of forecast uncertainty, or confidence, which can vary from one set of model runs to the next. That means accounting for uncertainties arising from errors in the initial conditions fed into models (the current state of the atmosphere), as well as for differences between the models themselves.
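
To illustrate the ensemble idea in the simplest possible terms (using a toy chaotic system as a stand-in for a real hurricane model -- a simplification of my own, not anything run operationally), the sketch below starts many copies of the same model from slightly different initial conditions and uses the spread among them as a measure of forecast confidence:

    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """Advance the Lorenz-63 toy model one step with forward Euler."""
        x, y, z = state
        deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
        return state + dt * deriv

    rng = np.random.default_rng(0)
    n_members = 20
    analysis = np.array([1.0, 1.0, 1.0])   # best estimate of the "current state"

    # Each ensemble member starts from the analysis plus a small random error,
    # mimicking imperfect knowledge of the current state of the atmosphere.
    members = analysis + 0.01 * rng.standard_normal((n_members, 3))

    for step in range(1, 1501):
        members = np.array([lorenz_step(m) for m in members])
        if step % 500 == 0:
            spread = members.std(axis=0).mean()
            print(f"step {step:4d}: ensemble spread = {spread:.3f}")

    # Small spread suggests high confidence in the forecast; large spread,
    # which grows with lead time in a chaotic system, suggests low confidence.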

Along with the results of previous and future research, it's hoped (expected?) that the current research will eventually lead to notable gains in operational forecast capabilities. The National Oceanic and Atmospheric Administration (NOAA), of course, is the single agency responsible for official TC predictions. All efforts by NOAA, in collaboration with its research partners (e.g., NASA and the National Science Foundation), toward gains in the accuracy and value of those forecasts fall under the umbrella of NOAA's Hurricane Forecast Improvement Project (HFIP).

HFIP is a 10-year plan begun in 2008 to improve the accuracy, reliability and degree of confidence of TC forecasts and warnings, with an emphasis on rapid intensity change. Specific HFIP goals are listed below (a rough sketch of how the underlying forecast errors are scored follows the list):

  • Reduce average track error by 50% for Days 1 through 5.
  • Reduce average intensity error by 50% for Days 1 through 5.
  • Detect rapid changes in intensity 90% of the time at Day 1, decreasing to 60% at Day 5.
  • Extend the lead time for hurricane forecasts out to Day 7.
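
Here's the sketch promised above of how the errors behind these goals are typically scored. The forecast and "observed" numbers are invented for illustration, and I'm using the commonly cited threshold for rapid intensification of an increase of at least 30 knots in maximum sustained winds over 24 hours:

    import math

    def track_error_km(lat_f, lon_f, lat_o, lon_o):
        """Great-circle (haversine) distance between forecast and observed storm centers, in km."""
        r_earth = 6371.0
        p1, p2 = math.radians(lat_f), math.radians(lat_o)
        dphi = math.radians(lat_o - lat_f)
        dlam = math.radians(lon_o - lon_f)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
        return 2 * r_earth * math.asin(math.sqrt(a))

    # Hypothetical 48-hour forecast vs. what was actually observed
    forecast = {"lat": 25.0, "lon": -75.0, "wind_kt": 85}
    observed = {"lat": 25.8, "lon": -73.9, "wind_kt": 110}

    err_km = track_error_km(forecast["lat"], forecast["lon"],
                            observed["lat"], observed["lon"])
    err_kt = abs(forecast["wind_kt"] - observed["wind_kt"])
    print(f"track error: {err_km:.0f} km, intensity error: {err_kt} kt")

    # Rapid intensification: an increase of 30 kt or more in maximum
    # sustained wind over the preceding 24 hours.
    wind_24h_ago_kt = 70
    print("rapid intensification:", observed["wind_kt"] - wind_24h_ago_kt >= 30)

Averaging such track and intensity errors over many forecasts, and tallying how often rapid intensification is correctly flagged, gives the statistics the HFIP targets refer to.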

Meeting these goals would significantly improve forecast guidance and thereby enhance the ability of emergency management officials and individuals to make mitigation and preparedness decisions much further in advance of an approaching TC than is now possible. Whether these goals are achieved depends largely on maintaining momentum from current research efforts by ensuring adequate funding for computer, technology and human resources over the next several years.

The concern is not the science per se, but the impact of possible (likely?) budget cuts among the government agencies participating in the research. (As I know well from personal experience and insight, requiring agencies to do more with less has reached the limit of what's possible.)

By Steve Tracton  | August 26, 2010; 11:00 AM ET
Categories:  Tracton, Tropical Weather  

Comments

Thanks for all the info Steve. How much data would we need to collect in the area (i.e. initial conditions) to predict something like the amazingly rapid development of Felix? Versus how much money would have to be spent on modeling research and modeling number crunching? It seems to me that collecting the data would cost more.

Posted by: eric654 | August 26, 2010 1:10 PM | Report abuse

I'd say you'd have to add to the "ugly" list the poor forecasting of the number/intensity of these storms. We've been told often (not by you guys) that we will get more and more storms, of greater strength, due to "global warming." Yet this has not been happening. Best to leave ideology out of forecasting?

Posted by: silencedogoodreturns | August 26, 2010 1:12 PM | Report abuse

One HUGE problem here...we never get any detailed surface observations, just satellite data, on Cape Verde storms in their initial phases.

I believe we have agreements with Cape Verde re NASA spacecraft tracking. Why can't we enter an agreement with them to base one or more hurricane hunter aircraft there between Aug. 1 and Oct. 15 each year? This should fill in the data on long-track tropical cyclones in the Atlantic Ocean.

I suspect the issue here is of a fiscal nature--the right-wing Tea Party activists don't want us adding more basic research to the Federal deficit.

Posted by: Bombo47jea | August 26, 2010 1:51 PM | Report abuse

People are justifiably focused on the fifth anniversary of Katrina. But as noted in a post last week, the 50th anniversary of Hurricane Donna is in September.

That storm made four landfalls (or five, depending on whether you count Long Island and Connecticut as separate landfalls) and is the only hurricane of record to produce hurricane-force winds in Florida, the Mid-Atlantic states, and New England. (See the NHC page at http://www.nhc.noaa.gov/HAW2/english/history.shtml#donna)

Though I lived in Tampa/St. Pete, Florida for 16 years as a kid, Donna is the only hurricane I've directly experienced, albeit in Wilmington, NC. And I hope it's the last one, because the night of September 12, 1960 was very scary in SE North Carolina.

Posted by: JerryFloyd1 | August 26, 2010 8:24 PM | Report abuse

Danielle, Earl and wave. http://weather.unisys.com/hurricane/sat_wv_a.gif Danielle is a who-cares storm, Earl could threaten Bermuda, but the wave is where we could use more data that could tell us if it will strengthen and the initial track (the Cape Verde data we don't have).

After that it is probably up to the mid-Atlantic high, whether it stays strong enough to keep the storm on a southerly track. I'm not sure how much data we have for that and the high is also affected by the other storms.

Posted by: eric654 | August 27, 2010 5:58 AM | Report abuse

eric654

A prime goal of the research is, in fact, to address your question about observational requirements. There will never be the option of collecting data everywhere at all times, so the real issue is the "optimum mix" of observational platforms (sat, aircraft, etc.) that is generally "good enough" and operationally practicable and financially feasible. The $$ issue is probably the most critical from a cost-benefit point of view, but that's likely to be more a political debating point than a decision based on what makes most sense. The same is true in regard to the computer and human resources needed to maximize operational capability (the most-bang-for-the-buck issue).


Posted by: SteveT-CapitalWeatherGang | August 27, 2010 9:43 AM | Report abuse


 
 