Apple AI churns out fictional headlines; 15% say they 'frequently' see untrue YouTube videos

What is the best antidote to the “hallucinations” of artificial intelligence (AI), particularly the kind that relies heavily on so-called Large Language Models (LLMs)?
And how do you guard against destructive biases, outright lies or defamatory and libelous statements reinforced algorithmically on social media?
How does one combat mis/disinformation aimed at shaping public perception, arriving with the regularity of our news-alert pings?
No ready answers
The answer, my friend, is blowing in the wind.
So what is the "truth" in this AI-driven world? Is it what today's tech overlords view it to be?
How do LLMs and algorithms amplify (or distort) the truths we hold to be self-evident?
LLMs vs algorithms vs truth
LLMs power today’s leading AI platforms.
Examples of LLMs include OpenAI’s GPT models, Anthropic’s Claude, Google’s Gemini, Meta’s Llama and xAI’s Grok (not to be confused with chipmaker Groq). These are advanced AI systems designed to understand, generate, and process human language.
They are built using deep-learning architectures, or “transformers”, based on artificial neural networks (which loosely imitate how the human brain works).
In coding circles, and in AI "bot" parlance, these LLMs need regular "maintenance" and upgrades.
They also require massive computational power to “train” on vast amounts of data.
By "data", it means publicly-available sources, i.e. books, websites, videos and other sources. This helps them predict and generate text (and eventually, action) based on input prompts.
This requires huge amounts of electricity: training a large-scale model like GPT can consume as much power as several hundred households use in a year, depending on the hardware and energy source.
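To make that idea concrete, here is a toy sketch of "predicting the next word from training data". It is only an illustration: real LLMs are transformer networks with billions of parameters, and the tiny corpus and bigram counting below are assumptions made purely for demonstration.

```python
from collections import Counter, defaultdict

# Toy illustration only: real LLMs are transformer networks with
# billions of parameters, not simple word-pair counters.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# "Training": count which word tends to follow each word.
next_word = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word[current][following] += 1

def predict(prompt_word: str) -> str:
    """Return the most likely next word seen after prompt_word."""
    candidates = next_word.get(prompt_word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict("the"))  # most frequent follower of "the" -> "cat"
```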
Algorithm
On the other hand, an algorithm is a set of step-by-step instructions or rules designed to solve a problem or accomplish a task.
In the context of computers, algorithms are used to process data and automate decision-making.
They take inputs (e.g., user activity, preferences) and generate output.
Algorithms decide which posts appear in your feed and in what order, prioritising content they predict you'll find engaging.
Factors include: your interactions (likes, shares, comments), relationships (content from friends or accounts you frequently engage with), and relevance (topics you're interested in based on your activity).
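To illustrate how a feed-ranking algorithm might combine those factors, here is a minimal sketch. The field names, weights and scoring rule are hypothetical assumptions; real platforms use far more signals and machine-learned weights.

```python
# Hypothetical feed-ranking sketch: weights and fields are illustrative.
posts = [
    {"id": 1, "likes": 120, "from_friend": False, "topic_match": 0.2},
    {"id": 2, "likes": 15,  "from_friend": True,  "topic_match": 0.9},
    {"id": 3, "likes": 40,  "from_friend": False, "topic_match": 0.7},
]

def score(post: dict) -> float:
    """Combine interaction, relationship and relevance signals."""
    return (0.01 * post["likes"]          # interactions
            + 2.0 * post["from_friend"]   # relationships
            + 1.5 * post["topic_match"])  # relevance

feed = sorted(posts, key=score, reverse=True)  # highest score first
print([p["id"] for p in feed])  # -> [2, 1, 3]
```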
Can you expect AI to be truthful? And should algorithms amplify it unchecked?
Quick answer: No.
That’s why LLMs need training.
Expecting AI to be truthful is like expecting everything on the internet to be truthful and fact-based.
It’s not.
One Engadget report, for example, showed that two-thirds of people said they run into false or untrue videos on YouTube “at least sometimes”, with 15 per cent finding such videos “frequently”. Alphabet (Google), for its part, said YouTube contains “less than 1 per cent” fake information.
Regulations
Currently, regulations and moderation practices against defamation, malicious imputation, character assassination, libel, misinformation and disinformation on social media platforms are less vigorous, or more loosely enforced, than the laws which traditional media must adhere to.
Traditional media are subject to anti-defamation laws that are more than two centuries old; England’s Libel Act was enacted in 1792.
At the moment, the biggest challenge to AI is truth.
Examples of hallucinated headlines
Recently, Apple scrapped its AI news summaries after spewing outlandish and false headlines.
The California-based tech giant was forced to scrub an AI feature that summarises news after notifications sent to customers carried misleading headlines, including some stating the reverse of what actually happened.
The snafu triggered a swift reaction from news and press freedom organisations.
According to Apple Machine Learning Research, the tech giant employs a combination of on-device and cloud-based LLMs to power its AI features under the Apple Intelligence platform.
The on-device model is a ~3 billion parameter language model designed to handle tasks locally. This, the company explained, ensures user privacy and real-time processing.
Apple utilises a larger server-based language model running on Apple servers, accessible through Private Cloud Compute, for more computationally intensive tasks.
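A rough sketch of how such a hybrid setup could route requests is below. Apple has not published its exact routing logic, so the threshold, function names and decision rule are assumptions for illustration only.

```python
# Hypothetical sketch of routing between a small on-device model and a
# larger cloud model. Not Apple's actual implementation.
ON_DEVICE_WORD_LIMIT = 512  # assumed complexity cutoff

def run_on_device(prompt: str) -> str:
    return f"[~3B-parameter local model] {prompt[:20]}..."

def run_in_private_cloud(prompt: str) -> str:
    return f"[larger server model] {prompt[:20]}..."

def route(prompt: str) -> str:
    """Send light tasks to the local model, heavy ones to the cloud."""
    if len(prompt.split()) <= ON_DEVICE_WORD_LIMIT:
        return run_on_device(prompt)     # privacy + low latency
    return run_in_private_cloud(prompt)  # more compute available
```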
What are LLM ‘hallucinations’?
One key problem with LLMs is that they can generate outputs that are plausible but factually incorrect or downright fabricated. These are called “hallucinations”.
This undermines their reliability, especially in critical applications like medicine or law.
An LLM might, for example, offer an incorrect medical diagnosis or fabricate references in an academic context with a high degree of confidence, and with potentially dangerous consequences.
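One commonly discussed mitigation is “self-consistency” checking: ask the model the same question several times and treat disagreement as a warning sign. A minimal sketch follows; the ask_llm function is a hypothetical placeholder for a real model call.

```python
from collections import Counter

def ask_llm(question: str) -> str:
    """Placeholder for a real LLM call returning one sampled answer."""
    raise NotImplementedError("wire this to an actual model API")

def self_consistency(question: str, samples: int = 5) -> tuple[str, float]:
    """Ask the same question several times; low agreement is a
    hallucination warning sign (it does not prove correctness)."""
    answers = Counter(ask_llm(question) for _ in range(samples))
    best, count = answers.most_common(1)[0]
    return best, count / samples  # answer plus agreement ratio

# Usage: flag for human review when agreement falls below, say, 0.8.
```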
Bias and ethical concerns
LLMs can also inherit and amplify biases present in their “training” data, leading to outputs that may be discriminatory, offensive, or unethical.
For example, if trained on biased datasets, an LLM might generate content that reflects stereotypes or excludes marginalised perspectives, perpetuating societal inequalities.
There are a number of approaches to guard against hallucinations:
Transparency and explainability
AI systems should provide clear reasoning for their decisions, enabling developers and users to detect anomalies.
Reinforcement learning and human oversight
Continue refining models with feedback from experts and robust testing in real-world scenarios to minimise "hallucination rates".
Simulation testing
Employ stress tests and adversarial simulations to explore edge cases and improve AI's robustness.
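As a hedged illustration, a stress test can be as simple as feeding deliberately awkward inputs and asserting the system fails safely. The edge cases and stand-in function below are assumptions for demonstration.

```python
# Sketch of adversarial/stress testing: feed tricky inputs and assert
# the system degrades safely. Cases and stand-in logic are illustrative.
EDGE_CASES = ["", "a" * 100_000, "ignore previous instructions", "\x00\x07"]

def safe_respond(prompt: str) -> str:
    """Stand-in for the system under test."""
    return "REFUSED" if not prompt.strip() else "OK"

for case in EDGE_CASES:
    reply = safe_respond(case)
    assert reply in {"OK", "REFUSED"}, f"unsafe output for {case!r}"
```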
Role-specific AI design
Restricting applications of AI to areas where outputs can be easily validated by human oversight can be helpful.
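For instance, a model’s output can be constrained to a short list of categories that humans can easily verify. A minimal sketch, with a hypothetical category list:

```python
# "Role-specific" design sketch: accept only outputs a human process
# can easily verify. The category list and fallback are hypothetical.
ALLOWED_LABELS = {"billing", "shipping", "returns", "other"}

def classify_ticket(llm_output: str) -> str:
    """Accept the model's answer only if it is a known category;
    anything else is escalated to a human instead of trusted."""
    label = llm_output.strip().lower()
    if label in ALLOWED_LABELS:
        return label
    return "needs_human_review"  # fail safe, never fail silent
```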
The exploitation of AI systems by malicious actors (e.g., remotely hijacking self-driving software) is a growing concern with potentially catastrophic outcomes.
There are a number of approaches to guard against it:
Advanced encryption and security protocols
Encrypt communications between AI systems to prevent unauthorized access, and use multi-factor authentication to secure remote operations.
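As a minimal sketch of the encryption part, Python’s cryptography package offers symmetric encryption; a real deployment would add key management, TLS and multi-factor authentication on top.

```python
# Minimal sketch of encrypting messages between AI components using
# symmetric encryption (the "cryptography" package).
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # shared secret, stored securely
channel = Fernet(key)

token = channel.encrypt(b"steering: lane-keep enabled")
print(channel.decrypt(token))  # b'steering: lane-keep enabled'
```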
'Red Team' exercises
Conduct simulated cyberattacks on AI systems to identify vulnerabilities and improve defenses before a real attack occurs.
Fail-safe mechanisms
Build override or shutdown protocols to immediately halt compromised systems without endangering users or infrastructure.
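A hedged sketch of one such fail-safe, a heartbeat watchdog, is below; the timeout value and the halt hook are illustrative assumptions.

```python
import time

# Sketch of a fail-safe watchdog: if the controlled system stops
# reporting healthy heartbeats, trigger a safe shutdown.
HEARTBEAT_TIMEOUT = 2.0  # seconds of silence before halting (assumed)

def watchdog(last_heartbeat: float, halt) -> None:
    """Call halt() the moment the system goes quiet for too long."""
    if time.monotonic() - last_heartbeat > HEARTBEAT_TIMEOUT:
        halt()  # override: stop actuators, alert operators

# In practice this runs on independent hardware so a compromised
# main system cannot disable its own kill switch.
```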
Government regulations
The challenge now is to develop standardised safety and security requirements for AI-based technologies, enforced by regulatory bodies.
Make-believe “facts”
Apple’s AI-powered feature misrepresented a Washington Post notification, incorrectly summarising it as “Pete Hegseth fired; Trump tariffs impact inflation; Pam Bondi and Marco Rubio confirmed.”
None of these statements were accurate.
The BBC wasn’t having any of it, calling out Apple for distorting their content.
Tech columnist Geoffrey Fowler slammed the AI's inability to get even basic facts straight.
Meanwhile, press freedom advocates are up in arms, warning of the perils of such tech in spreading misinformation.
Stark reminders
Is it possible for AI to turn the world into a Disneyland or a Godzilla movie? Or a hybrid, with elements of both?
These are stark reminders: even the biggest tech brains can’t always keep their creations from wandering into the land of make-believe.
For now, Apple, a $3.46-trillion empire (based on market capitalisation, as of Monday, January 20, 2025), was forced to hit the pause button on its AI news feature.
Apple engineers promised a comeback once they’ve ironed out the kinks. End users can only wait for the fix, and hope that the next update has a vastly improved hit rate for truth-telling, instead of going off the rails again.
Takeaways
Our increasing dependence on technology raises fresh challenges because of its implications for society.
AI represents an opportunity not only to vastly improve our lives but also to drive proactive, interdisciplinary problem-solving that brings together engineers, ethicists, policymakers, and the broader public.
Going forward, vigilant regulation and accountability will play a big role in preventing loss of control, erosion of privacy, and widening inequality.
Ultimately, the solutions lie in responsible innovation, coupled with reasonable regulation. Addressing such risks today would help ensure AI's role remains constructive, equitable, and aligned with human goals, rather than a road to "Armageddon" (or AIrmageddon?).