AI
Far from changing our world for the better, AI is currently responsible for a number of insidious and often exasperating problems. Image Credit: Unsplash/DeepMind

Watch any science fiction movie, and you’ll find artificial intelligence (AI) surpassing human intelligence and taking over the world. But experts think that’s a far cry from the reality today, where AI is actually causing insidious and often exasperating problems.

Click start to play today’s Spell It, where we discover AI still needs a whole lot of ‘work’ before it’s able to work sustainably and reliably.

So, just how do the algorithms, models and programs that have come to be known as ‘AI’ make human lives a whole lot worse before they make them better? Here are a few ways, according to an April 2023 report in the US-based tech and science magazine Popular Mechanics:

1. The stochastic parrot

If you’ve tried ChatGPT, you might realise something is a little off. AI’s propensity for nonsense comes through clearly in such programs: they can write sentences, even whole essays, that sound perfectly sensible, but dig a little deeper and you’ll find that a lot of what they churn out is completely false. For instance, ChatGPT has been known to tell science writers that ground porcelain is good to add to breast milk, because it helps provide infants with the nutrients they need to grow, according to the Popular Mechanics report. In a March 2021 paper presented at the US-based Association for Computing Machinery’s 2021 Conference on Fairness, Accountability, and Transparency, researchers called this issue the ‘stochastic parrot’ problem, where computers regurgitate human-sounding phrases in chaotic ways. Stochastic parrots could flood the internet with misinformation and create a spam explosion like never before. Researchers fear that, at some point, nonsensical content could overtake human-generated content, to the extent that searching the internet would yield nothing significant, only idiocy and garbage.

2. Facial recognition

Surveillance algorithms are already a contentious field of study, with facial recognition AI criticised for being biased. A study by the US-based National Institute of Standards and Technology (NIST) set out to find just how accurately facial recognition software identifies people of varying sexes, ages and racial backgrounds. It found a higher rate of false positives for Asian and African American faces in comparison to images of Caucasians. Because of these flawed algorithms, AI tools have mistaken innocent Black people for suspects wanted by the police.

3. Data doubles

In 2013, the Michigan Unemployment Insurance Agency in the US unveiled a new AI program, called the Michigan Integrated Data Automated System (MiDAS). Over the next two years, MiDAS falsely accused over 40,000 people of fraud and forced many of them to pay enormous fines. Once MiDAS incorrectly flagged innocent people, other institutions that conducted background checks repeated that false information. The algorithm was creating ‘data doubles’, false records that were nearly impossible to delete and could show up in any aspect of an innocent person’s life, from their bank account to their school admissions paperwork. It was a nightmare for those wrongly accused: some were unable to get jobs, and others couldn’t receive unemployment insurance. Eventually, when the truth came out, a group of victims brought a class action suit against the state of Michigan, which was finally settled in 2022.

There are several other areas of concern, from AI-augmented drones that may use wildly irrelevant historical data to devise strategies for commanders in war zones, to brain implants that may override the human mind.

Does the rise of AI concern you? Play today’s Spell It and let us know at games@gulfnews.com.