TBILISI: When MacKenzie Fegan boarded a flight from New York to Mexico City, nobody asked for a passport or boarding card. Airline staff simply told the US writer to smile into a camera.
Within seconds, her photo was matched against a US Customs and Border Protection (CBP) database and the 34-year-old American was free to board.
“Did facial recognition replace boarding passes, unbeknown to me? Did I consent to this?” she wrote to the airline, JetBlue, in a tweet that has since gone viral, highlighting widespread unease with an increasingly ubiquitous technology.
Computers have grown much better at recognising faces in recent years, unlocking a myriad of applications for facial recognition, from tracking criminals to counting truants.
But as cameras appear at unlikely spots across the globe, activists raise fears about lost privacy and say society might be on the doorstep of a dystopia where Big Brother sees all.
“We are entering a potential future where everywhere someone goes, the government knows who they are, who they are with and what they are doing,” said Adam Schwartz, a senior attorney at US rights group Electronic Frontier Foundation (EFF).
An all-knowing, all-seeing government sounds intimidating, said Fegan, who described herself as a politically active, queer woman of colour of immigrant descent.
“Historically speaking, things haven’t gone great for people like me when repressive governments do come into power,” she told the Thomson Reuters Foundation.
JetBlue’s facial recognition boarding process is part of a wider government pilot that began in 2017 and covers 15 US airports and several airlines, including Delta and British Airways, according to CBP.
The agency said the programme, which it expects to cover 97 per cent of all US outbound flights by 2023, aims to enhance security and speed up customs controls.
Rights groups question whether the end justifies the means.
“There is a tendency among governments, even when they have a legitimate goal ... to view new technologies as some kind of magic,” said Sarah St. Vincent, a researcher on national security and surveillance at Human Rights Watch.
“But we need to make sure that all of these tools are the least intrusive effective methods available.” She said the technology — and fear of its potential fallout — could in theory discourage perfectly lawful behaviour, be it attending a protest, conducting an affair or meeting someone with a criminal conviction.
The technology also has accuracy problems, particularly identifying people from ethnic minorities, she added.
A 2018 study by the Massachusetts Institute of Technology found three major facial-analysis programmes were much better at recognising light-skinned men than darker-skinned women.
CBP said its airport system had a match rate of 98 per cent.
As of December, it had helped identify 7,000 people who had stayed in the country longer than allowed and six who had tried to enter with fake papers, it said.
Similar systems are being piloted at other airports, including Amsterdam’s Schiphol and Singapore’s Changi.
Airport security is a natural venue for facial recognition, as travellers are primed for ID checks, said Elke Oberg, a spokeswoman for German facial recognition company Cognitec.
But she said people must first be told how it works, why it helps, where their data goes and whether they can opt out.
JetBlue said passengers were not required to board biometrically and were informed through gate announcements.
The company does not have access to customer photos, which CBP said were all deleted within 14 days.
Facial recognition has uses well beyond air travel and has been tested to read emotions, sexuality and personal traits that are unrelated to physical identity.
Police in London are testing the technology to see if it can accurately identify wanted criminals, with cameras installed at nine sites to check passers-by against a database of offenders.
More than 90 per cent of those stopped after a match had done nothing wrong, according to figures obtained by privacy group Big Brother Watch.
Many smartphones let users unlock their device with a glance into the camera, a step on from thumbprint recognition.
And singer Taylor Swift used facial recognition cameras to enhance her own security during a 2018 tour, according to Rolling Stone magazine. It said kiosks played videos of the star and would simultaneously photograph the viewers to check their images against a database of known stalkers.
Companies are training algorithms to recognise not only an identity but also emotions, with far-reaching implications, said Amie Stepanovich, a lawyer specialising in cyber security at campaign group Access Now.
“Some emotional states have been found to be better at selling people things,” she said.
Researchers at Stanford University said they had developed software that could tell whether someone was gay or heterosexual — it analysed photos on dating websites to make its decision — in a 2017 study that was criticised as dangerous by LGBT groups.
Some within the industry say regulation is vital.
Schwartz of EFF said authorities should be banned outright from using the technology for sweeping surveillance.
The type and amount of data that private companies can collect should also be clearly limited — and customers ought to be given the possibility to opt out, he added.
Tech giants Microsoft and Amazon have both backed legislation to ensure the technology is used with transparency and in a way that protects people’s rights in the United States, where it is currently largely unregulated.
“Imagine a government tracking everywhere you walked over the past month without your permission or knowledge,” Microsoft chief Brad Smith wrote in a blog last year.
“This has long been the stuff of science fiction and popular movies. But now it’s on the verge of becoming possible.” Yet it is important not to forget potential benefits, said Melissa Doval, CEO of US facial recognition company Kairos.
It could be used to verify age on dating sites to protect children from predators or identify drunk drivers, she said.
“Facial recognition can actually help to protect people,” she said. “At the end of the day it is going to come down to companies being held accountable for their ethical use of it.”