News Section
Stories from Climate Central's Science Journalists and Content Partners

Storm Intensity Forecasts Lag; Communities More at Risk

The 2011 North Atlantic hurricane season cost the U.S. billions in damage, largely from inland flooding. Hurricane Irene alone killed 45 people and cost upwards of $7.3 billion, according to the National Oceanic and Atmospheric Administration (NOAA). But while Irene’s track was forecast with near pinpoint accuracy days in advance, in keeping with the general state of the science, the intensity forecasts were not nearly as accurate.

With the 2012 Atlantic hurricane season now underway, forecasters are determined to make more accurate forecasts. As part of a NOAA project known as the Hurricane Forecast Improvement Program, or “HFIP,” they are armed with upgraded tools to help them more accurately predict the path and intensity of these massive storms. These tools range from a turbocharged, higher-resolution computer model to the type of unmanned drone aircraft that more typically prowl the skies above war-torn Afghanistan.

Credit: NASA/NOAA GOES Project

For forecasters, and the communities they must warn, the stakes are high. If forecasters were to achieve their goals, it would mean that they could issue hurricane watches and warnings with greater lead time, provide emergency management officials with greater specificity regarding where a storm is going to go and how strong it will be when it gets there, and help avoid false alarms and needless evacuations, which cost millions in lost revenue for coastal communities.

For example, the massive evacuations ahead of 1999’s Hurricane Floyd, which menaced the East Coast, and another huge evacuation in advance of 2005’s Hurricane Rita each cost hundreds of millions of dollars. Last year, the prospect of Hurricane Irene making landfall in Manhattan caused the city’s mayor to mandate the first complete shutdown of the city’s mass transit system, bringing the Big Apple to a near standstill. In the end, more money may have been lost to business closures ahead of the storm than to the storm itself, which arrived in New York City as a strong tropical storm.

But even with new technologies, it’s unclear how much more accurate the forecasts will be, particularly when it comes to storm intensity, which has been a stubborn riddle for meteorologists to solve.

According to NOAA, during the past 15 years, hurricane track forecasts have improved by 50 percent, while the accuracy of intensity forecasts has not budged. During the period from 2003 to 2008, the average storm track forecast error was down to less than 200 miles at 72 hours, and less than 100 miles at 48 hours. Meanwhile, 24- to 48-hour intensity forecasts were likely to be off by at least one category on the Saffir-Simpson Hurricane Scale.

“The state of affairs in [storm] track is steady improvement, while the state of affairs in [storm] intensity has been stagnant,” said Frank Marks, director of NOAA’s Hurricane Research Division in Miami. Marks said that 48-hour track forecast errors today are the same as 24-hour track forecast errors 10 years ago, whereas there has only been slight improvement in 48-hour intensity forecasts during the past two decades.

The HFIP aims to cut the average errors of hurricane track and intensity forecasts by 20 percent within five years and by 50 percent by 2019, within a seven-day forecast period. In addition, researchers are working to increase their ability to anticipate that a storm is about to rapidly intensify, jumping one or more categories of strength in a short timespan. Currently, forecasters don’t have a thorough understanding of what allows a storm to make that leap.
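To make those targets concrete, here is a minimal sketch of the arithmetic, using the round 2003–2008 baseline figures cited above (average track errors of roughly 200 miles at 72 hours and 100 miles at 48 hours); the baselines are approximations from this article, not official HFIP verification statistics:

```python
# Illustrative only: applies the HFIP's stated 20 percent (five-year) and
# 50 percent (by-2019) error-reduction goals to the approximate baseline
# track errors mentioned in the article.

baseline_errors_miles = {"72-hour track": 200, "48-hour track": 100}

def apply_reduction(error_miles, percent_cut):
    """Return the forecast error after cutting it by percent_cut percent."""
    return error_miles * (1 - percent_cut / 100)

for horizon, baseline in baseline_errors_miles.items():
    five_year = apply_reduction(baseline, 20)  # five-year goal: 20 percent cut
    by_2019 = apply_reduction(baseline, 50)    # goal by 2019: 50 percent cut
    print(f"{horizon}: {baseline} mi -> {five_year:.0f} mi (5 yr), "
          f"{by_2019:.0f} mi (2019)")
```

On those rough baselines, hitting the 2019 goal would mean average 72-hour track errors closer to 100 miles than 200.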

A scenario in which a rapidly intensifying storm quickly approaches land is one of the many nightmare possibilities that keep hurricane forecasters awake at night, since a stronger storm puts more people in harm's way, and depending on a storm’s forward speed, residents may run out of time to evacuate vulnerable areas.

Why Have Intensity Forecasts Lagged Behind?

To forecast the track that a tropical storm or hurricane will take, forecasters need to know how the large-scale weather pattern will evolve over a particular period of time. For example, they may need to answer questions such as: Will a high pressure system park itself over New England, and prevent a hurricane from moving up along the eastern seaboard? Or will a cold front come along and sweep a nascent storm out to sea?

Because answering such questions involves dealing with large-scale atmospheric features that today’s generation of computer models is adept at simulating, track forecasts have continued to be refined. For example, the storm track forecasts for Hurricane Irene in 2011 were spot on, several days in advance.

“We keep getting better and better at the track forecasts,” said James L. Franklin, chief of the hurricane specialist unit at NOAA’s National Hurricane Center in Miami.

But Franklin said that in contrast to track forecasts, predicting storm intensity requires knowing lots of small-scale details that computer models have trouble capturing, from the dynamics of a storm’s structure to the characteristics of air masses being pulled into a storm’s circulation. If dry air gets sucked into the core of a hurricane, it can significantly weaken the storm, which is one possible explanation for why Hurricane Irene was much weaker than initially expected when it finally made its move toward New York City.

“We now have to telegraph in to much, much smaller scales in the atmosphere” where predictability is less, Franklin said, “where we’re working blind, essentially.”

The HFIP’s Ambitious Goals

The HFIP has different components, including a data gathering effort involving turbulent hurricane research flights. While NOAA relies on the Air Force Reserve’s “Hurricane Hunters” unit for the vast majority of hurricane reconnaissance work, NOAA’s own aging fleet of hurricane research aircraft also flies into and around the fierce storms with specialized instruments, such as airborne Doppler radar.

NOAA research flights have already provided researchers with dozens of radar case studies of tropical storms and hurricanes, including Hurricane Irene. However, due to budgetary restrictions, NOAA cannot fly its two P-3 aircraft (nicknamed “Kermit” and “Miss Piggy”) into every storm, and the Air Force C-130 planes that do the majority of hurricane reconnaissance lack the same type of advanced radar technology.

Credit: NASA Goddard Space Flight Center

This year, NOAA is working with NASA to test a small fleet of Global Hawk unmanned aerial vehicles (UAVs) that can linger for long periods of time in and around a hurricane. The NOAA aircraft only have a few more years of life in them, Marks said, and the future of hurricane hunting may lie with the UAVs.

Another HFIP research stream involves creating better performing computer models that incorporate new data and research insights. Rapidly translating research findings into forecasting tools is a unique aspect of HFIP, since it forces researchers to quickly turn their findings into products that can be used by day-to-day forecasters, rather than just publishing papers in scientific journals.

“It’s researchers working in an operational way, which is not the normal way researchers work,” Marks said.

For example, as part of HFIP, a group of researchers set up shop in Boulder, Colo., far from tropical weather systems, where they took advantage of high speed computer resources to duplicate a computer model that the Hurricane Center relies on to make its forecasts. The research version of this computer model, known by the acronym HWRF (pronounced “H WARF”), picks up on storm features that other models typically miss altogether, such as the evolution of the rain bands that pinwheel around a tropical storm or hurricane.

After testing the research model, the Hurricane Center is now moving forward with an operational version of the upgraded HWRF model that will be used during the 2012 hurricane season. “That’s the first time we’ve seen that kind of rapid development being implemented,” Marks said.

According to Marks, the upgraded HWRF model has shown promise in forecasting storm intensity, particularly when Doppler radar scans of a storm are included in the model analysis. Specifically, research cases showed about a 10 percent improvement in storm intensity forecasts.

“If we can use that type of data in this model we can really gain more promise,” Marks said.

It remains to be seen how much of an improvement the new HWRF model will bring to this year’s hurricane forecasts, but the bar has been set so low that any progress would be heralded as a breakthrough. 

Comments

By Scott Sandgathe (Springfield/OR/97478)
on June 19th, 2012

I am surprised that the COAMPS TC model isn’t mentioned. It is part of the HFIP experiment and has far surpassed HWRF the past two years in intensity prediction. It seems odd that this wouldn’t be mentioned in a discussion of our capabilities for intensity forecasting as it is currently our most promising model and part of a strong federal partnership to improve hurricane prediction.

By Chris Squire (Twickenham)
on June 22nd, 2012

‘harm’s way’ not ‘harms’ way - ‘harm’ is singular.
