Economist George Akerlof has spent much of his celebrated career thinking about how trickery and deceit affect markets. His most famous insight, which won him the 2001 Nobel Prize in economics, is that when buyers and sellers have different information, lack of trust can cause markets to break down.

In those models, no one actually ends up getting tricked — everyone is perfectly rational, so even the possibility of getting cheated causes them to stay prudently out of the market. But in his book Phishing for Phools, written with fellow Nobelist Robert Shiller, Akerlof goes one step further.

Much of the actual, real-world economy, he says, involves trickery and deception.

That might not come as a shock to most people, but to economists of the libertarian, free-market bent, it’s tantamount to heresy. Tricksters and con men, they say, will be weeded out of the market by their bad reputations. That’s a pretty big assumption, and might rightfully strike most non-economists as wishful thinking.

But if you look at economic models, you’ll find that few of them include successful fraud and deception as a possibility. Modern econ relies so heavily on the assumption of rationality that there’s just not much room for tricks and scams. Obviously these models are far removed from most people’s experience of real markets, in which deception is common.

That makes it very difficult for economists to analyse the role that fraud and deception played in the 2008 financial crisis and the housing bubble that led up to it. Trickery was probably a big deal — many home buyers lied about their incomes to get mortgages, and many banks misled investors about the safety of the mortgage-backed financial products that they created.

Outright fraud is supposed to be dealt with through the legal system. But what about subtler forms of deception? Many people suspect that large swathes of the US financial industry make money not by allocating capital to productive uses, or by helping people hedge their risks, but by tricking normal people, businesses and local governments into making poor decisions.

Complex products

Even without blatant fraud, financial companies might use their superior intelligence, insider knowledge and deep pockets to create complex products that naturally tend to lure their counterparties into making bad decisions. That kind of thing might not be moral, but it’s often perfectly legal.

How widespread and effective is it?

A recent paper by economists Andra Ghent, Walter Torous and Rossen Valkanov may shed some light on the question. Ghent and her co-authors look at mortgage-backed securities, which figured prominently in the crisis. They try to measure how complex various products were, using measures like the number of pages in the prospectus, the number of tranches in the security and the number of different types of collateral.

That allows the researchers to see whether more complex products fared better or worse in the years before the crisis. Using Bloomberg data, they look at private-label mortgage-backed securities issued between 1999 and 2007. They then look forward in time to see which products defaulted and which ones experienced more foreclosures in the mortgage pools that they used as collateral.
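The paper itself works with a large Bloomberg sample, but the basic setup is easy to sketch. The snippet below is purely illustrative: the column names, the numbers and the simple trick of averaging standardized measures into one complexity index are my own stand-ins, not the authors' actual variables or method.

```python
# Illustrative sketch only: hypothetical column names and made-up numbers,
# not the paper's actual Bloomberg data set.
import pandas as pd

# One row per private-label MBS deal issued between 1999 and 2007.
deals = pd.DataFrame({
    "deal_id":          ["A", "B", "C"],
    "prospectus_pages": [180, 320, 540],    # length of the offering document
    "num_tranches":     [6, 14, 25],        # slices of the deal's cash flows
    "collateral_types": [1, 3, 5],          # distinct kinds of collateral
    # Outcomes observed later, looking forward from issuance:
    "defaulted":        [0, 1, 1],          # did the security default?
    "foreclosure_rate": [0.02, 0.09, 0.15]  # foreclosures in the collateral pool
})

# Combine the three complexity measures into one index by standardizing
# each measure and averaging (one simple way to do it, not the paper's).
measures = ["prospectus_pages", "num_tranches", "collateral_types"]
z_scores = (deals[measures] - deals[measures].mean()) / deals[measures].std()
deals["complexity"] = z_scores.mean(axis=1)

print(deals[["deal_id", "complexity", "defaulted", "foreclosure_rate"]])
```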

Riskier

It turns out that complexity was a bad sign. More complex deals experienced higher default rates and more foreclosures on their collateral. So if you were an MBS buyer from 1999 to 2007, the rational thing to do would have been to demand a higher interest rate on a more complex security.

Except that didn’t happen. Ghent et al. found that complexity had no correlation with the yields on MBS. That means that although more complex products were riskier on average, buyers didn’t recognise that fact.
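A toy version of that pair of findings, run on made-up data constructed to show the same pattern, looks something like this; the numbers and the specific regression forms are illustrative assumptions, not the paper's estimates.

```python
# Illustrative sketch only: two regressions on synthetic data, built to
# display the pattern the paper reports rather than reproduce its results.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
complexity = rng.normal(size=n)  # standardized complexity index

# Riskier: more complex deals default more often...
default_prob = 1 / (1 + np.exp(-(-1.0 + 0.8 * complexity)))
defaulted = rng.binomial(1, default_prob)

# ...but yields at issuance do not move with complexity.
yield_spread = 1.5 + 0.0 * complexity + rng.normal(scale=0.3, size=n)

X = sm.add_constant(complexity)
print(sm.Logit(defaulted, X).fit(disp=0).params)  # positive slope on complexity
print(sm.OLS(yield_spread, X).fit().params)       # slope close to zero
```

On this synthetic data the default regression picks up a clearly positive slope on complexity while the yield regression does not, which is the shape of the disconnect Ghent and her co-authors document in the real data.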

The authors also carefully exclude the possibility that complex deals commanded higher prices because they were specially tailored to individual buyers’ needs — in fact, most products contained the same types of collateral, but the complex ones were just of lower quality.

The most likely explanation is that complexity simply dazzled and baffled MBS buyers into buying low-quality products. Perhaps complexity made it too hard to do meaningful due diligence, causing smart buyers to stay away and leaving only the overly trusting to participate in these deals.

Interestingly, Ghent and her co-authors find that credit-rating companies tended to give higher grades to more complex products. That implies the credit raters were willing to trust issuers when figuring out what was actually in the products got too hard.

Reputation

Maybe it’s human nature to trust our counterparties more when things get too complicated. Or maybe the ratings companies’ well-known bad incentives took over when complexity and opacity made their misbehaviour harder to observe.

So it looks like the suspicions are true — a portion of our financial system runs on trickery. But what about reputation? Won’t companies that pull these kinds of tricks enough times lose their reputations, and be replaced by more honest dealers?

One would hope so.

Maybe America’s big banks cashed out their reputations in one huge party in the 2000s, and their employees will now retire comfortably while their less high-flying successors take over.

But in the meantime, we should think about how not to get fooled the next time we’re offered products that are too complex for us to really understand.

The writer is an assistant professor of finance at Stony Brook University.