News | December 22, 2010

How Will We Know if 2010 Was the Warmest Year on Record?


Different Groups' Methods Yield the Same Finding: Warming Surface Temperatures

By Tom Yulsman

Earlier this month, NASA’s Goddard Institute for Space Studies (GISS) announced that November was the warmest such month in its record books — and that 2010 overall may well turn out to be the warmest year ever.

Now, the National Climatic Data Center (NCDC), part of the National Oceanic and Atmospheric Administration (NOAA), has published the results of its own calculations, showing that November was the second warmest, not the first.

Such conflicts in global temperature rankings aren’t terribly unusual. In fact, NASA-GISS and NOAA-NCDC rank 2005 as the warmest year on record. But a third group, a collaboration of the U.K. Met Office’s Hadley Centre and the Climatic Research Unit known as “HadCRUT,” gives the title to 1998. (When December hits the record books, it’s possible that 2010 will be crowned warmest year by all three.)

Each of the three groups calculates temperatures at the surface of the land and sea. But two other groups, one at the University of Alabama and the other at Remote Sensing Systems (a private company), use microwave sensors on satellites to estimate the temperature of the lowest part of the atmosphere.

And guess what? Their findings differ a bit from each other, and from those of the other groups as well.

What’s going on here? And do these discrepancies cast doubt on the conclusion that the world is warming?

Scientific Groups Use Different Techniques

What’s going on is quite simple, scientists say: normal science. The groups come up with somewhat different results because each one approaches the complex task of determining global temperature trends in a different way.

Perhaps it’s not surprising that the two satellite records tend to differ from the others, since they use a completely different technology and analytical method. Their approach tends to exaggerate the impact of ocean-atmosphere phenomena like El Niño (which causes warming) and volcanic eruptions (which cause cooling).

But it may be less obvious why the three groups that use much the same basic surface temperature data still diverge in their findings.

“Each group tries to do the best job possible,” says Richard Reynolds, a scientist with NOAA, now semi-retired, who helped refine that agency’s approach. “Different decisions on the data processing cause the final numbers to differ. However, the differences are very useful to help define the uncertainty in the results.”

Despite those uncertainties, a consistent picture has emerged: since 1970, each decade has been warmer than the one before, and the decade from 2000 to 2009 was the warmest on record.

Of course, the subject of global temperature trends has become intensely politicized. This has been especially true in the aftermath of the controversy surrounding the unauthorized release of hundreds of email messages between some climate scientists, including Phil Jones, director of the Climatic Research Unit.

To many climate change skeptics, the emails suggested that Jones and his colleagues at the CRU deliberately manipulated data to concoct a global warming trend, and also stonewalled critics, preventing them from accessing CRU data.

Since then, an independent review, headed by Sir Muir Russell, found that while CRU scientists failed to show the appropriate degree of openness, the accusations of fabrication, dishonesty and lack of rigor were groundless. Other reviews also found accusations of data-rigging to be groundless. And there is now a move afoot to make surface temperature data much more easily accessible.

Even so, some public doubt remains about assessments of global temperature trends. A Yale University survey found, for example, that 38 percent of Americans still believe there is significant disagreement among scientists over whether global warming is occurring.

Gavin Schmidt, a scientist with the NASA-GISS team, argues that even though they differ somewhat, the independent assessments of Earth’s temperature trends “are exactly what is needed to reassure people. The differences reflect real uncertainties,” he says, “but the similarity in the bottom line, despite variations in approach, should increase credibility in the overall warming trend.”

How to Calculate a Global Average Surface Temperature

Map of global average temperature anomalies from 2000-2009, showing the most rapid warming in the Arctic and a small portion of Antarctica. Credit: NASA Earth Observatory.

To understand why different answers to the same question can be perfectly normal from a scientific perspective — and how they all actually add up to the same overall trend — it helps to know how the different groups go about their work.

Each month, the groups use overlapping sets of data to determine global temperature anomalies, meaning the degree to which temperatures around the globe have departed from a long-term average. The data consist of temperature measurements from thousands of measuring stations on land, as well as measurements from ships and buoys at sea. Satellite measurements of the sea surface temperature are also added to the mix in some of the analyses.

The groups produce graphs showing how the Earth’s temperature has changed over the course of years and decades. They also prepare maps depicting the geographic pattern of temperature anomalies across the globe for a given month, season or year.

To get a clearer idea of just what a temperature anomaly is, imagine taking your own temperature with an oral thermometer and determining that you are running a fever of 101.6°F. Since “normal” is considered 98.6°F, your “temperature anomaly” is plus 3°F.

Similarly, when researchers calculate a temperature anomaly for the Earth, they need a base number for comparison, a number akin to “normal.” To do that, they estimate the mean temperature for a base period. For NASA-GISS, the base period is 1951-1980. The other groups use somewhat different base periods, but the general approach is the same.

So for NASA-GISS, the global temperature anomaly for a given month is the extent to which the actual average temperature of the Earth differs from the mean temperature during 1951-1980. But how can such a number be calculated for the entire planet?
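For readers who like to see the arithmetic, here is a minimal sketch in Python of computing a monthly anomaly against a 1951-1980 climatology. The station record and its numbers are invented for illustration; the real analyses work with thousands of stations and must handle missing and inconsistent data.

```python
# Minimal sketch: monthly temperature anomaly relative to a 1951-1980 base period.
# The station record below is invented for illustration only.

def monthly_climatology(records, base_start=1951, base_end=1980):
    """Average temperature for each calendar month over the base period."""
    sums, counts = [0.0] * 12, [0] * 12
    for (year, month), temp_c in records.items():
        if base_start <= year <= base_end:
            sums[month - 1] += temp_c
            counts[month - 1] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

def anomaly(records, year, month, climatology):
    """Departure of one month's reading from its base-period average."""
    return records[(year, month)] - climatology[month - 1]

# Hypothetical station record: {(year, month): mean temperature in deg C},
# drifting warmer by 0.01 deg C per year.
station = {(y, 11): 5.0 + 0.01 * (y - 1951) for y in range(1951, 2011)}
clim = monthly_climatology(station)
print(round(anomaly(station, 2010, 11, clim), 2))  # November 2010 vs. 1951-1980 mean
```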

NASA-GISS goes about it by carving up the land surface into 8,000 equal-area grid boxes. Using the available station data, it calculates an average anomaly for each box relative to that box’s own base-period average. With those numbers, NASA-GISS then calculates anomalies for a series of larger and larger portions of the globe, including the Northern and Southern hemispheres and the entire land surface.

In a separate step, NASA-GISS uses sea surface temperature data from ships spanning the years 1880 to 1981, and satellite data from 1981 to the present, to determine temperature anomalies for the oceans. When that’s done, the scientists combine the anomalies for the land and the sea into a single global temperature anomaly figure, called the “Land-Ocean Temperature Index.”
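In spirit, that final combination is an area-weighted blend of the land and ocean anomalies. The sketch below uses rough global land and ocean fractions purely for illustration; it is not the exact weighting NASA-GISS applies.

```python
# Illustrative only: blend land and ocean anomalies by approximate area fraction.
# These fractions are rough global shares, not NASA-GISS's actual weights.
LAND_FRACTION, OCEAN_FRACTION = 0.29, 0.71

def land_ocean_index(land_anomaly_c, ocean_anomaly_c):
    """Combine land and ocean anomalies into a single global figure."""
    return LAND_FRACTION * land_anomaly_c + OCEAN_FRACTION * ocean_anomaly_c

print(round(land_ocean_index(0.96, 0.52), 2))  # hypothetical anomalies in deg C
```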

NOAA-NCDC and HadCRUT divide the land surface and sea surface into grid boxes five degrees on a side. As in the NASA approach, the groups use the available temperature data to calculate average temperature anomaly numbers for each grid box. These are then combined to create a global temperature anomaly (as well as anomaly numbers for smaller geographic regions).
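A stripped-down version of that gridding-and-combining step might look like the following sketch, which bins invented station anomalies into 5-degree boxes, averages within each box, and then weights each box by the cosine of its latitude so that small polar boxes count for less area than large tropical ones. The real analyses add many refinements this sketch leaves out.

```python
import math
from collections import defaultdict

# Sketch: average station anomalies into 5-degree grid boxes, then combine the
# boxes into a global mean, weighting each box by the cosine of its latitude
# (a stand-in for the box's surface area). Station values are invented.

def grid_box(lat, lon, size=5.0):
    """Index of the 5-degree box containing a point."""
    return (math.floor(lat / size), math.floor(lon / size))

def global_anomaly(stations, size=5.0):
    boxes = defaultdict(list)
    for lat, lon, anom in stations:
        boxes[grid_box(lat, lon, size)].append(anom)

    weighted_sum, weight_total = 0.0, 0.0
    for (lat_idx, _), anoms in boxes.items():
        box_center_lat = (lat_idx + 0.5) * size
        weight = math.cos(math.radians(box_center_lat))
        weighted_sum += weight * (sum(anoms) / len(anoms))
        weight_total += weight
    return weighted_sum / weight_total

# (latitude, longitude, anomaly in deg C) -- hypothetical values
stations = [(51.5, -0.1, 0.8), (52.2, 0.1, 0.6), (40.7, -74.0, 0.5), (70.0, 25.0, 2.1)]
print(round(global_anomaly(stations), 2))
```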

Although the details differ, the overall approaches of both NASA and NOAA are similar.

So why did they come up with different rankings for this past November — not to mention for entire years? And why is it that HadCRUT will probably come up with a somewhat different number when it issues its calculation for November? (The group has just issued its anomaly map for November).

Dealing with Data Gaps, Biases

November surface air temperature anomaly in the NASA GISS analysis, using only data from meteorological stations and Antarctic research stations, with the radius of influence of a station limited to 250 km. Credit: NASA.

The long and short of it is that the scientists independently grapple with a variety of common challenges, in some cases using different methods.

One of those challenges is the need to adjust for biases in the data arising from changes in technology going all the way back to the 1800s, when reliable record-keeping began.

So, for example, seafarers have been measuring sea surface temperatures for more than a century. But they’ve gone from using buckets to pull up water samples, to sampling the water drawn through their ships’ engine-cooling intakes. And as Richard Reynolds of NOAA-NCDC notes, “Buckets tend to be biased cold due to evaporation, while engine room temperatures tend to be biased warm due to engine room heating.”

These biases in the data must be corrected to produce reasonably accurate temperature anomalies.
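A toy version of that kind of correction, with entirely invented offsets, might look like the snippet below. The real adjustments are estimated from overlapping measurement records and vary by era, instrument and ship practice.

```python
# Toy bias adjustment for sea surface temperatures. The offsets are invented
# for illustration; real corrections are derived from overlapping records.
ASSUMED_BIAS_C = {
    "bucket": -0.3,         # evaporative cooling makes bucket readings too cold
    "engine_intake": +0.1,  # engine-room heating makes intake readings too warm
}

def adjusted_sst(raw_temp_c, method):
    """Remove the assumed systematic bias from a raw reading."""
    return raw_temp_c - ASSUMED_BIAS_C[method]

print(round(adjusted_sst(17.9, "bucket"), 1))         # 18.2
print(round(adjusted_sst(18.3, "engine_intake"), 1))  # 18.2
```

After the assumed offsets are removed, the two measurement methods in this made-up example agree, which is the whole point of the adjustment.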

Another challenge affects measurements of temperatures on land: Over the course of more than a century, many rural areas have been urbanized. And as any city dweller knows, the dog days of summer are a good time to escape to the country, because all that asphalt, concrete and brick tends to elevate urban temperatures. So the three groups must adjust for this possible bias as well. (For details on how NASA-GISS does this, check out “Step 2” in its GISS Surface Temperature Analysis explanation.)
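In spirit, the urban adjustment compares an urban station’s long-term trend with the trend at nearby rural stations and removes the excess. The sketch below does only that, with invented numbers, and should not be read as NASA-GISS’s actual procedure.

```python
# Sketch: remove an urban station's excess warming trend relative to nearby
# rural stations. The decadal anomalies are invented (deg C); this is not
# the actual NASA-GISS urban adjustment.

def linear_trend(values_per_decade):
    """Simple slope estimate: average change between consecutive decades."""
    diffs = [b - a for a, b in zip(values_per_decade, values_per_decade[1:])]
    return sum(diffs) / len(diffs)

def urban_adjusted(urban_series, rural_series):
    """Subtract the urban-minus-rural trend difference from the urban series."""
    excess = linear_trend(urban_series) - linear_trend(rural_series)
    return [t - excess * i for i, t in enumerate(urban_series)]

urban = [0.0, 0.25, 0.55, 0.90]   # hypothetical decadal anomalies at a city station
rural = [0.0, 0.15, 0.35, 0.55]   # hypothetical average of nearby rural stations
print([round(x, 2) for x in urban_adjusted(urban, rural)])
```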

Another source of potential error arises from the fact that there are still significant parts of the Earth with no meteorological stations on land checking surface temperatures with thermometers, and no ship-based or buoy-based measurements of sea surface temperatures. Large gaps exist in the Amazon, parts of Africa, Antarctica, and, most significantly, in the Arctic.

Complicating things, the farther back in time you go, the sparser the geographic coverage becomes, potentially skewing the long-term record of global temperature anomalies.

HadCRUT approaches this challenge in a simple and straightforward way: if there are no temperature data for a given month in a grid box, that box is simply left blank. (In HadCRUT’s anomaly map for November, notice all the grid boxes that are not filled in.)

But there is a problem with this approach. Gaps in coverage are particularly significant in the Arctic — where, despite a dearth of monitoring stations, warming is known to have been particularly intense and rapid in recent years.

Land stations used as part of NOAA's Global Historical Climatology Network. Credit: NOAA.

So by leaving large parts of the Arctic blank, HadCRUT may well be underestimating the degree of global warming. In fact, the HadCRUT analysis typically shows global temperature anomalies to be somewhat cooler than the other two analyses.

The scientists at NOAA tackle this problem using interpolation, a statistical approach that fills in gaps using data from nearby areas.

Reynolds thinks this is a reasonable approach. “If you know the temperature in Denver, you can make a good estimate of the temperature in Boulder, but not in Melbourne, Australia,” he says.
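One textbook way to fill such a gap is inverse-distance weighting, in which nearby observations count for more than distant ones. The sketch below illustrates the idea with invented numbers; it is not NOAA’s actual interpolation scheme.

```python
import math

# Illustration of gap-filling by inverse-distance weighting: closer observations
# get more weight than distant ones. This is not NOAA's actual method.

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def interpolate(lat, lon, observations):
    """Inverse-distance-weighted estimate from (lat, lon, anomaly) observations."""
    weights = [1.0 / max(distance_km(lat, lon, olat, olon), 1.0)
               for olat, olon, _ in observations]
    return sum(w * anom for w, (_, _, anom) in zip(weights, observations)) / sum(weights)

# Hypothetical anomalies (deg C) at Denver and Colorado Springs; estimate Boulder.
obs = [(39.74, -104.99, 1.2), (38.83, -104.82, 1.0)]
print(round(interpolate(40.01, -105.27, obs), 2))
```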

NASA-GISS scientists also fill in gaps, but in a somewhat different way. As James Hansen and colleagues at Goddard write in a paper explaining their approach:

The GISS analysis assigns a temperature anomaly to many gridboxes that do not contain measurement data, specifically all gridboxes located within 1200 km of one or more stations that do have defined temperature anomalies.

In other words, they extrapolate across some pretty large gaps, reaching roughly 750 miles (1,200 kilometers) from the nearest station.
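A simplified rendering of that rule: a grid box with no data borrows a weighted average of the stations within 1,200 kilometers, with closer stations counting more, and a box with no station that close stays empty. The linear fall-off in weight used below is one common choice for illustration, not necessarily the exact weighting the GISS analysis applies.

```python
# Sketch of the 1,200 km rule: an empty grid box takes a weighted average of
# station anomalies within 1,200 km; closer stations get more weight. The
# linear weighting is a simplification chosen for illustration.
RADIUS_KM = 1200.0

def extrapolated_anomaly(stations_with_distance):
    """stations_with_distance: list of (distance_km, anomaly_c) pairs."""
    nearby = [(d, a) for d, a in stations_with_distance if d <= RADIUS_KM]
    if not nearby:
        return None  # no station close enough; leave the box undefined
    weights = [1.0 - d / RADIUS_KM for d, _ in nearby]
    return sum(w * a for w, (_, a) in zip(weights, nearby)) / sum(weights)

# Hypothetical Arctic grid box with two stations 300 km and 1,100 km away.
print(round(extrapolated_anomaly([(300.0, 2.4), (1100.0, 1.8)]), 2))
```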

Hansen and his colleagues argue the approach is valid because research shows that temperature anomalies tend to be correlated over large distances, particularly at middle and high latitudes.

But the NASA-GISS approach has come in for particular criticism from climate change skeptics, who sharply question the scientific basis for extrapolating data across such large gaps.

However, there is significant scientific support for the approach, and for the determination that the Arctic is warming rapidly. Hansen and his colleagues point out that independent measurements of temperatures in the Arctic using infrared instruments reveal significant warming over large areas. 

Support also comes from observations of shrinking and thinning Arctic sea ice, thawing permafrost, and changes to Greenland’s ice sheet, all indicators of widespread warming.

Gavin Schmidt, Hansen’s colleague at NASA-GISS, points out that whether you fill gaps or not, you are making a decision about what the temperatures were in those gaps.

“When you have a data gap, you can either interpolate/extrapolate from nearby sources or not,” he says. “Each approach has an implication. If you leave it blank, it is equivalent to assuming that it has warmed at the same rate as the globe. While if you fill it in, you assume that it is changing at the same rate as nearby points. This makes the biggest difference in the Arctic, which is warming substantially faster than the globe. I think the interpolation/extrapolation approach is a better solution.”
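Schmidt’s point can be made with a tiny invented example. Imagine a globe reduced to three equal-area boxes, two with data and one Arctic box without. Leaving the Arctic box blank is equivalent to assigning it the average of the sampled boxes, while filling it from a nearby warm region captures more of the extra Arctic warming. All the numbers below are made up.

```python
# Invented three-box world: two mid-latitude boxes with data, one Arctic box without.
midlat_1, midlat_2 = 0.5, 0.7           # observed anomalies, deg C
arctic_true = 2.0                       # what the missing box actually did
arctic_neighbor_estimate = 1.8          # value borrowed from a nearby warm region

truth = (midlat_1 + midlat_2 + arctic_true) / 3
blank = (midlat_1 + midlat_2) / 2                              # HadCRUT-style: skip the gap
filled = (midlat_1 + midlat_2 + arctic_neighbor_estimate) / 3  # GISS-style: fill it

print(round(truth, 2), round(blank, 2), round(filled, 2))  # 1.07 0.6 1.0
```

In this toy case, leaving the gap blank understates the true global anomaly, while filling it from a neighbor comes much closer, which is exactly the pattern seen in the Arctic.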

Analyses Are in Close Agreement

Interestingly, at least six other methods for determining global temperature anomalies were devised prior to the 1970s, and one of these actually dates all the way back to 1881. Yet despite the generally less sophisticated tools available for these earlier analyses, they still agree with each other quite closely — and also with the more modern analyses. This is true going as far back on the trendline as 1900, when there were many more geographic gaps in coverage than there are today.

Historical surface temperature trends. Credit: U.N. Intergovernmental Panel on Climate Change.

The closest agreement is between the three main modern analyses by NASA, NOAA and HadCRUT. Phil Jones, director of the CRU (and the focus of much of the email controversy), points out that their results all fall within each other’s estimated error bars, as suggested by a graphic in which the gray band shows the range of uncertainty in the HadCRUT data. In fact, according to a Climate Central analysis, during the past 130 years the three surface temperature datasets have differed by just 0.043°C on average.
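That 0.043°C figure is simply the average absolute gap between the yearly anomaly series, a calculation along the lines of the sketch below (with made-up numbers standing in for the real datasets).

```python
# Average absolute difference between anomaly series, pair by pair.
# The three short series are invented stand-ins for the real datasets.
giss    = [0.45, 0.62, 0.58, 0.66]
noaa    = [0.42, 0.60, 0.56, 0.64]
hadcrut = [0.40, 0.57, 0.52, 0.60]

def mean_abs_diff(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

pairs = [(giss, noaa), (giss, hadcrut), (noaa, hadcrut)]
print(round(sum(mean_abs_diff(a, b) for a, b in pairs) / len(pairs), 3))
```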

The close agreement between the analyses prompts Schmidt to ask this: “Who is making any decision based on a tenth of a degree Celsius change in global temperature?”

Besides, month-to-month and even year-to-year temperature rankings aren’t really all that important if you’re interested in climate change.

That’s because phenomena like the El Niño-Southern Oscillation and the North Atlantic Oscillation can cause all sorts of up-and-down squiggles in the temperature trendlines over periods of months and years.

“But we'd expect anthropogenic warming to be evident on decadal timescales,” Jones says. “So it isn't the warmth of 2010 or 2005 or 1998 that’s significant, but the warmth of the 2000s versus the 1990s versus the 1980s.”

And for those decades, there really is little scientific doubt about what’s been going on.

Tom Yulsman is a freelance writer in Colorado and the co-director of the Center on Environmental Journalism at the University of Colorado at Boulder. He blogs at CE Journal.