Things are looking good for the economics profession. In an earlier column, I suggested that the shift toward empirical economics, away from unsupported theoretical speculation, will help restore faith in the field.

But this encouraging trend comes with a caveat. As with any science, readers need to learn how to be sceptical of news articles about hot new empirical results. In the modern age, any scientific finding gets instantly written up and published.

No matter how well writers do their jobs, these results will often turn out to be wrong — or at least, not all they’re cracked up to be.

In 2014, for example, the physics press reported that an experiment called BICEP2 had found evidence confirming a popular theory about the rapid expansion of the early universe, known as cosmic inflation. Every responsible outlet gave the standard caveats.

“Such a groundbreaking finding requires confirmation from other experiments to be truly believed,” “Scientific American” dutifully reminded us. But that sensible scepticism was lost amid the talk of Nobel Prizes. A few months later, the result fell apart: the signal the researchers had seen turned out to be galactic dust, not a trace of the infant universe.

In economics, proof is almost never as final or definite as in physics. But there are also cases where famous, highly publicised findings turn out to be much shakier than originally reported. In 2001, economists John Donohue and Steven Levitt reported that legalising abortion led to a large decline in crime years later.

The finding was sensational, as it appeared to offer an answer to the mystery of why crime fell in the 1990s. It became a centrepiece of the best-selling popular book “Freakonomics”, which spawned a media empire.

Four years later, some other economists decided to check the finding. Christopher Foote and Christopher Goetz of the Federal Reserve Bank of Boston found an “inadvertent but serious computer programming error” in Donohue and Levitt’s code — the authors had left out some important control variables. Once the controls were added back in, the link between abortion and crime became statistically insignificant.

Foote and Goetz also added other sensible-seeming controls that made the link disappear entirely. Donohue and Levitt fired back, and the debate raged on for a while, but most economists eventually seemed to agree that the link between abortion and crime was not very reliable.

This illustrates an important point about economics research: There’s rarely perfect certainty. Different research teams will often disagree about which controls to include and which sample to use. Studies drawing on real-world behaviour, which economists have to use in lieu of laboratory experiments, are hard to replicate. As UCLA’s Ed Leamer has noted, there will always be things economists might fail to take into account.

That’s why controversies in economics are rarely settled quickly or cleanly. There are a few exceptions, such as when a graduate student found in 2013 that macroeconomists Carmen Reinhart and Kenneth Rogoff had made an Excel error in a paper showing a link between government debt levels and slow growth. But most economics disputes end not in absolute proof, but in a slow shifting of the professional consensus.

So how should you think about results of economics studies that you read about in the press? The idea that we should ignore all research results because of their inevitable limitations is a mistake. Economics research really does uncover important information about the real world — it’s not all just “he said, she said”.

The right approach is to adopt a reasonable scepticism. First, let research results shift your beliefs, but don’t place absolute faith in any one paper. When you see two studies that agree with each other, let your beliefs shift a little more.
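To make that advice concrete, here is a minimal sketch, in Python, of the kind of belief updating it implies. Everything in it is illustrative: the starting belief, the assumed chance that a study confirms a real effect and the assumed chance of a false confirmation are invented numbers, not figures from any of the research discussed here.

  # Illustrative only: Bayesian belief updating across studies.
  # The prior and both error rates are made-up numbers.

  def update(prior, p_confirm_if_true=0.8, p_confirm_if_false=0.3):
      """Belief that an effect is real after one more confirming study."""
      numerator = p_confirm_if_true * prior
      denominator = numerator + p_confirm_if_false * (1 - prior)
      return numerator / denominator

  belief = 0.5                        # start agnostic
  belief = update(belief)             # one confirming study
  print(f"after one study:   {belief:.2f}")   # 0.73
  belief = update(belief)             # a second, independent confirmation
  print(f"after two studies: {belief:.2f}")   # 0.88

Notice that no single study pushes the probability to one; each independent confirmation merely nudges the belief upward, which is exactly the reasonable scepticism described above.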

Second, pay attention to the quality of the research. Not all studies are created equal. If a high-profile claim relies on questionable methods, economists and the economics press will probably find out and report it.

Papers that rely on simple correlation are generally a lot less reliable than papers that use natural experiments. Macroeconomics is an inherently harder field than labour or tax economics, so it’s better to take macro findings with an extra grain of salt. And if you notice that a study relies on an assumption you think is questionable, be more sceptical.

A third strategy is to look at meta-analyses — studies that collect a large number of other studies in order to assess the weight of evidence. For example, there are many papers about the effect of minimum wage hikes on employment — some say it’s big, some say it’s small.

But meta-analysis shows that most papers find only a small or zero effect. That’s a lot more reliable than looking only at one paper.
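To show what a meta-analysis does mechanically, here is a small Python sketch of standard fixed-effect, inverse-variance pooling across studies. The effect sizes and standard errors are invented for illustration; they are not actual minimum-wage estimates.

  # Illustrative only: fixed-effect meta-analysis via inverse-variance weights.
  # The (effect, standard error) pairs below are invented, not real estimates.
  import math

  studies = [(-0.10, 0.05), (0.02, 0.04), (-0.03, 0.03), (0.01, 0.06)]

  weights = [1 / se ** 2 for _, se in studies]    # precise studies count more
  pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
  pooled_se = math.sqrt(1 / sum(weights))

  print(f"pooled effect: {pooled:+.3f} (standard error {pooled_se:.3f})")

In this made-up example, the individual estimates scatter on both sides of zero, but the pooled effect comes out close to zero. That is the kind of overall picture a meta-analysis can reveal and that no single paper can.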

So although research mistakes and protracted disputes are common, it’s still possible to learn a lot from reading about economics.

Science is a slow process of inching closer to the truth, full of missteps and accidents, and reading about science is no different.