Scientists unable to predict consequences of El Niño
Wouldn’t it be helpful if we knew when another big El Niño was about to hit, so we could board up the windows, bring in the cat, and cancel our cable television contract?
Like general circulation modelers seeking to predict future climate, climatologists worldwide are working on complex predictive models that forecast when the next El Niño might occur and where it may have the greatest impact. One small problem: When you make a forecast for an event that occurs within your lifetime, someone just might hold you accountable.
Along come Chris Landsea and John Knaff, who, in the Bulletin of the American Meteorological Society, decided to do just that. They compared the 1997-98 forecasts produced by a number of different El Niño models, some dynamical (i.e., expensive) and some statistical (i.e., less expensive). Their benchmark? CLImatology and PERsistence (CLIPER), a model that uses past events (Climatology) and the recent history of the current event (Persistence). CLIPER is commonly used to evaluate tropical cyclone predictions.
In a sense, CLIPER is the simplest possible model. Given the data, it’s the kind of model that an undergraduate statistical climatology class could develop between the end of the football game and the start of the fraternity mixer using your average home computer.
That’s not a criticism. In fact, Landsea and Knaff argue that if these more sophisticated forecast models are to have any useful predictive ability (or “skill”) they should, at a minimum, be better than CLIPER.
Landsea and Knaff compared what actually happened with the 6- to 8-month advance forecasts of a variety of models, both dynamical and statistical. The forecast verifications differ somewhat because some models make predictions for slightly different regions of the tropical Pacific.
How did they fare? In a word, terrible, but with varying degrees of terrible-ness. Note that CLIPER outperforms nearly all of the more “sophisticated” dynamical models.
As Landsea and Knaff point out, national meteorological centers may do well to note that “the current best tools are the relatively cheap statistical systems,” not the expensive but ineffective dynamical models that have yet to produce a reliable forecast.
Toward the paper’s conclusion, the authors offer a remarkably candid (for science) perspective:
[It is] disturbing that others are using the supposed success in dynamical El Niño forecasting to support other agendas. As an example, an overview paper by Ledley et al. (1999) to support the American Geophysical Union’s “Position Statement on Climate Change and Greenhouse Gases” said the following: “Confidence in [comprehensive coupled] models [for anthropogenic global warming scenarios] is also gained from their emerging predictive capability. An example of this capability is the development of a hierarchy of models to study the El Niño-Southern Oscillation (ENSO) phenomena. . . . These models can predict the lower frequency responses of the climate system, such as anomalies in monthly and season averages of the sea surface temperatures in the tropical Pacific.”
“On the contrary,” Landsea and Knaff state, their own results suggest we should have “less confidence in anthropogenic global warming. . . . The bottom line is that the successes in ENSO forecasting have been overstated (sometimes drastically) and misapplied in other arenas.”
In the end, they report, “There were no models that provided both useful and skillful forecasts for the entirety of the 1997-98 El Niño. This is a conclusion that remains unclear to the general meteorological and oceanographic community.”
Robert E. Davis, Ph.D., is an associate professor of environmental science at the University of Virginia.
Landsea, C.W., and J.A. Knaff, 2000. How much skill was there in forecasting the very strong 1997-98 El Niño? Bulletin of the American Meteorological Society, 81, 2107-2119.
Ledley, T.S., et al., 1999. Climate change and greenhouse gases, EOS: Transactions of the American Geophysical Union, 80, 454-458.