From ads to articles, AI is cranking out content faster — and cheaper — than ever
Picture this: writing a catchy tweet, a snappy ad, or even a full-blown article doesn’t require hours of brainstorming or caffeine-fuelled late nights anymore.
Thanks to AI, creating content is now cheaper, faster, and way simpler than ever before.
AI-generated content (and yes, even ads) is like having a tireless assistant who never needs a coffee break. Need a clever tagline or a whole thread? AI's got your back with words that flow and ideas that sparkle, all at lightning speed.
So what’s the big deal?
And should you care?
Well, it means AI is quickly stealing the spotlight as the go-to tool for crafting and sending messages everywhere.
Whether you’re a content creator, a small business, or just someone who loves sharing thoughts online, AI is becoming your new best friend in the messaging game.
There's a flipside: as synthetic posts flood your feed and misinformation risks rise, regulators are stepping in with rules that require platforms to tag AI-generated content. Transparency isn't just polite anymore; it's essential for keeping truth and trust alive online.
Why is tagging AI-generated content important? It isn't just good manners.
For one, it keeps things transparent. For another, it helps fight off fake-news ninjas and lets users know who (or what) created a post.
Without it, we risk confusing fact with fiction, especially when it comes to hot-button topics like politics (think wars) or health.
But what if a platform ignores or dodges the requirement?
Authorities are increasingly treating the marking of AI-generated content as a baseline standard for social media platforms.
For example, a US Executive Order (2023) mandates clear disclosure of AI-generated content to protect and empower consumers.
Several leading platforms have established formal AI content labelling systems, though it is still early days.
A landmark study by Jakesch et al. (published in Nature, 2023) demonstrates that people often cannot reliably distinguish Generative AI (GenAI) content from human-made content.
Their study found that while overall discrimination was "better than chance", there was substantial individual variation, with many participants misclassifying AI-generated text as human-written, and vice versa.
This is known as "imperfect human discernment of AI-generated content" or, more formally, "individual differences in human-AI content discrimination."
Factors such as intelligence and digital habits influenced this ability; notably, heavy social media use correlated with increased misclassification of AI content as human-made, the researchers found.
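To make "better than chance" concrete: researchers typically compare participants' accuracy against the 50% expected from random guessing. Here is a minimal Python sketch of that kind of check using a standard binomial test; the counts are hypothetical and purely illustrative, not figures from the Jakesch et al. study.

```python
# Minimal sketch: does a participant's accuracy beat random guessing?
# The counts below are hypothetical, for illustration only.
from scipy.stats import binomtest

n_trials = 200    # hypothetical number of "human vs AI" judgments
n_correct = 118   # hypothetical number of correct attributions

# Under pure guessing, each judgment is a fair coin flip (p = 0.5).
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")

print(f"accuracy = {n_correct / n_trials:.2f}")  # 0.59
print(f"p-value  = {result.pvalue:.4f}")         # small p => better than chance
```

Even an accuracy like 0.59, only modestly above the coin-flip baseline, can be statistically "better than chance" while still leaving plenty of individual misclassification, which is exactly the pattern the study describes.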
A University of Southern California (USC) study, published on SSRN in November 2024, reinforced these findings as it delved into how social media shapes the way we connect, chat, and scroll our lives away.
Most people still fall into the trap, the USC study shows, potentially allowing something contrived and untrue to go viral.
This raises the question: How do you keep trust among end-users? How can audiences distinguish between human and AI-created material?
The answers are still somewhere out there.
Thus, there's a legitimate need to label AI-generated content.
There’s still a dearth of research on how AI-generated content affects people’s behaviour.
This raises the urgency of studying how people react to GenAI disclosures (Peres et al., 2023).
Other studies (e.g. Wahid et al., 2023; Silver et al., 2021) also suggest that automating content creation may hurt perceived authenticity and reduce public acceptance of AI-generated content.
Today, we're facing a dicey situation: a deluge of AI-generated content.
Platforms like TikTok and Meta (Facebook) now require GenAI disclosures, either voluntarily (by brands or individuals) or automatically by the platform when synthetic content is detected.
That's a harbinger of something more serious: the floodgates of AI content generation opening.
So, an AI just posted: should you still hit the "like", "subscribe" and "share" buttons?
Case in point: Meta, which draws about 98% of its revenue from advertising, is now testing an updated image-to-video ad tool. This lets companies and individuals use AI to turn product photos into multi-scene video ads with music and overlaid text.
This is a Tower-of-Babel moment: in a hyperconnected world saturated by AI, everyone's talking, but no one understands anyone else.
AI models across the globe are churning out tweets, videos, articles, comments, and even legislation drafts.
It’s cheap, it’s constant, and it’s everywhere. Political campaigns flood the web with AI-crafted messages. So do parody artists. And so do brands, scammers, pranksters, and influencers.
Even AI bots argue with each other on Reddit and quote themselves as sources.
What should policymakers do?
First, understand this fast-evolving situation; then set minimum standards.
A good place to start? Labelling.
In the digital realm, consumers deserve to know the product being sold to them.
The practice has not been adopted across the board yet, as it remains largely voluntary. There's a gap to be filled.
Let's take a look:
| Platform | Label Used | How Label Is Shown | Labelling Method | Penalties for Non-Disclosure |
|---|---|---|---|---|
| Meta (Facebook, Instagram, Threads) | "AI Info" | Beneath the user's name for fully AI posts; in a menu for AI-enhanced content | Automatic detection, plus an option for manual disclosure | Content removal or other penalties if misleading |
| YouTube | "Altered or Synthetic Content" | In the video description panel; prominently on the player for sensitive topics | Manual disclosure during upload | Content removal; suspension from the Partner Program |
| TikTok | "AI-Generated" | Directly visible on content | Manual toggle at posting; automatic detection in development | Content removal; account restrictions |
| X (Twitter) | "AI assisted" or #AIgenerated | No labelling required as of 2025; required only when AI content could "mislead" users, such as deepfakes, AI-generated news, or manipulated media | No universal method: creators who post AI-generated content are encouraged, but not mandated, to add labels | Content can be flagged, shadowbanned, or removed if undisclosed AI content is deceptive or violates rules |
To be fair, social media platforms have increasingly embraced clear labelling practices for AI-generated content as part of their transparency protocols.
These labels, which are visible to end-users, are supported by automatic or manual disclosure methods.
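As a rough illustration of how these two disclosure paths combine, here is a minimal Python sketch of the decision logic the table implies: a manual disclosure toggle from the uploader, an automatic detector score, and a resulting label. Every name, threshold, and rule here is hypothetical, not any platform's actual implementation.

```python
# Hypothetical sketch of a platform's labelling decision. The class,
# thresholds, and label strings are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    creator_disclosed_ai: bool  # manual toggle set at upload time
    detector_score: float       # 0.0-1.0 from an automatic classifier
    sensitive_topic: bool       # e.g. elections or health

def choose_label(post: Post) -> Optional[str]:
    if post.creator_disclosed_ai:
        return "AI-Generated"       # manual disclosure always labels
    if post.detector_score >= 0.9:  # high-confidence automatic detection
        return "AI Info"
    if post.sensitive_topic and post.detector_score >= 0.5:
        return "Altered or Synthetic Content"  # stricter bar for sensitive topics
    return None                     # no label applied

print(choose_label(Post(creator_disclosed_ai=False,
                        detector_score=0.95,
                        sensitive_topic=False)))  # -> AI Info
```

The design choice the table reflects is the same one sketched here: a creator's own disclosure takes precedence, automatic detection acts as a backstop, and sensitive topics get a lower labelling threshold.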
Failure to properly label AI-generated content on social media risks content removal, account penalties, and reputational damage.
US: Increasing state-level AI legislation mandates disclosure in many contexts; platforms like Meta and YouTube enforce penalties ranging from content removal to suspension for violations, as per the US National Conference of State Legislatures (NCSL).
EU and Others: While explicit penalties vary, obligations for transparency in AI content disclosures are growing along with anti-coercion and misinformation safeguards, as per IAAIR.
Philippines: Specific guidelines for AI-generated content disclosure apply during the 2025 elections, impacting political candidates and platforms alike, with regulatory enforcement from election bodies, according to an October 2024 article by Baker McKenzie.
Institutional policies: Entities like the Institute of Applied Artificial Intelligence and Robotics (IAAIR) enforce strict disclosure norms internally with disciplinary action for nondisclosure, emphasizing ethical use and human accountability, according to the US-based institute.
USC researchers zoomed in on TikTok (because where else?) and ran six experiments to crack the code.
The verdict?
The researchers found that when posts are tagged as AI-made, people engage less. Not because the content looks sketchy or sounds robotic. It’s all about feeling less connected.
It turns out that people form emotional one-way bonds with creators (it’s called a “parasocial connection”), and when they sense a machine’s behind the magic, it kinda kills the vibe.
Here's why AI-content marking matters:
It preserves trust and authenticity: Users engage differently with content known to be AI-generated. Studies indicate that AI disclosure can lower consumer engagement by signaling reduced creator effort, affecting emotional bonds between creators and audiences.
Helps combat misinformation and manipulation: Disclosing AI generation reduces the risk of covert deepfakes or fabricated content being mistaken for genuine human output, which can influence public opinion or elections.
Regulatory compliance: Many countries and institutions now require transparent disclosure as part of ethical AI use policies and digital content laws.
Supports platform accountability: Labelling helps platforms enforce community guidelines and respond effectively to potential harms from synthetic media.
Going forward, the key challenge lies in understanding the underlying process, tech and interests involved. Doing so would help address the growing volume of generative AI content and its potential for audience confusion.
Here's the twist: in general, slapping on an "AI-generated" label changes how people react. It pushes them away.
It’s not a hopeless case. If creators show they still put in effort, like adding a personal touch or storytelling flair, that dip in engagement can be softened.
Bottom line: AI can help you create content, but if you want to keep your fans double-tapping, make sure it still feels human.