
Online banner ads are not the advertising industry’s most glorious achievement. From the pop-up to the sudden blast of music, the clickbait to the nonsensically animated gifs, the stroboscope to the advert that simply appears to have a spider scurrying across it, there seems to be no end to the ways in which banner advertisements can annoy us.

Up to a point, this is part of the deal. Publishers offer something we want to look at, our attention is worth money to advertisers, and the advertisements help to pay for the content we are enjoying. But how annoying does an ad have to be before a website should refuse to run it? While the question is obvious, the answer is not: It is hard for publishers to know how much the adverts may be driving readers away.

Daniel Goldstein, Preston McAfee and Siddharth Suri, all now at Microsoft Research, have run experiments to throw light on this question. (They are, respectively, a psychologist, an economist and a computer scientist; do send in your suggested punchlines.)

The experiments are intriguing as much for the method as for the conclusion. Traditionally, much experimental social science has been conducted with all the participants in the same room, interacting on paper, face to face or through computers. Then the computer-mediated experiments moved online, with researchers such as Goldstein assembling large panels of participants willing to log in and take part in exchange for a modest payment.

Now there is an easier way: Amazon Mechanical Turk. The original Mechanical Turk was an 18th-century chess-playing “robot” which, in reality, concealed a human chess player. Amazon’s Mechanical Turk (MTurk) also uses humans to do jobs we might expect from a computer, but which computers cannot yet manage. For example, Turk workers might help train a spam filter by categorising tens of thousands of emails; or they might decide which of several photographs of an item or location is the best.

From the point of view of social-science researchers, MTurk is a remarkable resource, allowing large panels of diligent experimental subjects to be assembled cheaply at a moment’s notice. It is striking and somewhat discomfiting just how little MTurk workers (“Turkers”) are willing to accept — a study in 2010 found an effective median wage of $1.38 (Dh5) an hour. Siddharth Suri says that, because of its speed, flexibility and low cost, MTurk is rapidly becoming a standard tool for experimental social science.

So, back to those annoying ads. First, Goldstein, McAfee and Suri recruited MTurk workers to rate a selection of 72 animated adverts and 72 static ads derived from the final frame of the animations.

It may not surprise you to know that the 21 most annoying adverts were all animated, while the 24 least annoying were static. The researchers picked the 10 least aggravating and the 10 most excruciating and used them in the second stage of the study.

In this second stage, Goldstein and his colleagues hired Turkers to sort through emails and pick out the spam; they were offered 25 cents as a fixed fee plus a “bonus” that was not specified until after they signed up. The experiment had two variables at play. First, the Turkers were randomly assigned to groups whose workers were paid 10 cents, 20 cents or 30 cents per 50 emails categorised. Second, while the workers were sorting through the emails, they were shown either no adverts, “good” adverts or “bad” adverts. Some workers diligently plodded on while others gave up and cashed out early.
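To see the design concretely, here is a toy sketch in Python of that two-way random assignment; it is an illustration only, with assumed names and structure, not the researchers’ actual code.

```python
import random

# Toy sketch of the 3 x 3 design described above: each worker is randomly
# assigned a pay rate and an advert condition. Illustrative only; the names
# and structure here are assumptions, not the researchers' implementation.
PAY_RATES_CENTS_PER_50 = [10, 20, 30]      # cents per 50 emails categorised
AD_CONDITIONS = ["none", "good", "bad"]    # which adverts, if any, are shown

def assign_worker(worker_id: int) -> dict:
    """Randomly place one Turker into a pay-rate and advert condition."""
    return {
        "worker": worker_id,
        "pay_rate": random.choice(PAY_RATES_CENTS_PER_50),
        "adverts": random.choice(AD_CONDITIONS),
    }

# Assemble a hypothetical panel of 1,000 Turkers.
panel = [assign_worker(i) for i in range(1000)]
print(panel[:3])  # e.g. [{'worker': 0, 'pay_rate': 20, 'adverts': 'bad'}, ...]
```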

Usually researchers want to avoid people dropping out of their experiments. The wicked brilliance of this experimental design is that the dropout rate is precisely what the experimenters wanted to study.

Unsurprisingly, the experiment found that people will do more work when you pay them a better rate, and they will do less work when you show them annoying adverts. Comparing the two lets the researchers estimate the magnitude of the effect, which is striking: Removing the annoying adverts entirely produced as much extra effort as paying an additional $1.15 per 1,000 emails categorised — and effectively $1.15 per 1,000 adverts viewed. But $1.15 per 1,000 views is actually a higher rate than many annoying advertisers will pay — the rate for a cheap advert may be as low as 25 cents per 1,000 views, says Goldstein.
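The arithmetic behind that comparison can be sketched in a few lines. The Python fragment below is purely illustrative: the effort figures are invented, chosen so that the calculation reproduces the $1.15 figure for bad adverts (and the $0.38 figure for good adverts, discussed next); the real estimates came from the workers’ actual dropout behaviour.

```python
# Illustrative compensating-differential arithmetic. All inputs are invented;
# only the outputs match the figures quoted in the article.

pay_per_1000 = [2.0, 4.0, 6.0]    # $ per 1,000 emails (10c, 20c, 30c per 50)
effort = [400.0, 480.0, 560.0]    # hypothetical mean emails categorised

# Extra work bought by an extra dollar (per 1,000 emails) of pay. The
# hypothetical points are collinear, so a difference quotient suffices.
effort_per_dollar = (effort[-1] - effort[0]) / (pay_per_1000[-1] - pay_per_1000[0])

# Hypothetical drops in mean effort when adverts are shown.
effort_lost = {"bad adverts": 46.0, "good adverts": 15.2}

for condition, loss in effort_lost.items():
    # The pay rise that would have offset the lost effort is the implicit
    # cost of the adverts to the workers.
    implicit_cost = loss / effort_per_dollar
    print(f"{condition}: ${implicit_cost:.2f} per 1,000 emails categorised")
```

With these made-up inputs the sketch prints $1.15 for bad adverts and $0.38 for good ones, which is the sense in which an annoying advert can “cost” more than it pays.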

Good adverts are much less destructive. They nudge workers towards quitting at an implicit cost of only $0.38 per 1,000 views, for an advert that may pay the publisher $2 per 1,000 views. Generalising for a moment: Good adverts seem worth the aggravation, but bad adverts seem to impose a higher cost on a website’s readers than the advertisers are willing to pay. It is no wonder that websites hoping for repeat traffic tend to avoid the most infuriating adverts.

A sting in the tail is that the animated adverts may not even work on their own terms. An eye-tracking study conducted in 2003 by Xavier Dreze and Francois-Xavier Hussherr found that people avoided looking at banner advertisements in general; in 2005 Moira Burke and colleagues found that people actually recalled less about the animated adverts than the static ones.

How could that be? Perhaps we have all learnt a sound principle for browsing the internet: never pay attention to anything that jiggles around.

— Financial Times

Tim Harford is the author of The Undercover Economist Strikes Back.