Regulatory Power Is the Dangerous Kind

October 1, 1999

For two decades, America's courtrooms and regulatory agencies have been overwhelmed by various health scares: breast implants, pesticides, cell phones, and fat substitutes, to mention only a few. But recent headlines brought news of what has to be a new low in the annals of junk science: a government-funded scientist who had systematically distorted data to jumpstart the moribund hypothesis that electromagnetic fields near power lines cause cancer.

The sheer financial costs of this taxpayer-subsidized fraud--as measured in liability suits, declines in real-estate values, and misdirected resources--are staggering. Far more serious are its implications for the credibility of science, particularly government-funded science.

In 1995, the Department of Energy's Lawrence Berkeley National Laboratory concluded that Robert P. Liburdy, a cell biologist there, had "deliberately [created] 'artificial' data where no such data existed." Liburdy, who claimed to have identified a possible mechanism by which electromagnetic fields caused cancer, had published his "data" in 1992 in peer-reviewed journals. This allowed him to win a $3.3 million grant from the National Institutes of Health, Department of Energy, and Department of Defense.

To be sure, the scare over power lines never reached epidemic proportions. First advanced in 1979, the thesis was eventually discredited by such eminent scientific groups as the National Research Council and the American Physical Society. But Liburdy's deception did much to keep the myth alive and thus sparked a campaign of "prudent avoidance": if there is even a hint of a health problem, play it safe and avoid exposure.

Prudent avoidance sounds like a sensible precaution. In fact, it constitutes a rejection of science and a triumph of fear over reason. And, as physicist David Hafemeister of California Polytechnic State University notes, "prudent avoidance is a delight for plaintiff lawyers, since it is essentially a conclusion that the danger is probable."

A 1994 General Accounting Office report acknowledged the downside of taking action based on half-baked scientific theories, noting that the total estimated costs of "mitigation" activities exceeded $1 billion. Examples of mitigation, cited in a 1992 issue of Science magazine, include a town that moved several blocks of power lines underground at a cost of $20,000 per exposed person; a utility that rerouted an existing line around a school at a cost of $8.6 million; a new office complex that incorporated EMF protections in its design at a cost of $100 to $200 per worker; and firms that installed ferrous shielding on office walls and floors to reduce magnetic-field exposure from nearby power handling equipment, at costs ranging up to $500 per square yard of office space.

All of this expenditure was directed at what we now know is a phony health risk. How did it happen? The answer lies in the peculiar way in which environmental health researchers operate.

They are, to begin with, a self-selected group with an inherent bias toward indicting industrial environmental agents as causes of disease. Often, this leads them to become emotionally wedded to their own hypotheses, to the detriment of scientific objectivity. As Robert Park, a professor of physics at the University of Maryland, has observed, "People are awfully good at fooling themselves. They're so sure they know the answer that they don't want to confuse people with ugly-looking data."

The government makes matters worse by transforming research into what science-policy analyst Edith Efron calls "regulatory science." Regulatory scientists are well aware that they will not be published or have funding renewed if they report, for example, that Alar-treated apples are safe, that saccharin is safe, . . . or that power lines don't cause cancer. The successful regulatory scientist finds possible problems, publicizes them, and hopes to get the ear of Congress for follow-up work and renewed funding.

One would think that the exposure of Liburdy's malfeasance would sound an alarm about the objectivity of environmental health researchers, both within the general scientific community and among the lawmakers who provide these researchers with grants. No such luck.

Although Liburdy's falsification of data was first exposed in 1995, he remained in his job until this past May, shortly before the Office of Research Integrity released its findings in the Federal Register. His "punishment" consisted of agreeing not to apply for more federal grants for three years.

Meanwhile, those of us involved in research and policy making in the private sector are constantly queried about our funding, motivation, and agenda, which critics regard as "too pro-industry." If the Liburdy case means anything, it is that government-funded regulatory scientists should be subject to similar levels of scrutiny by Congress, the media, and peer-reviewed scientific journals.