Signal leakage, hidden risks, and why your chatbot confessions may not be private

As artificial intelligence (AI) tools become part of everyday life, experts are warning that the simple act of typing into a chatbot may carry risks many people do not fully understand.
In a recent conversation, cybersecurity leader Davide Del Vecchio, CISO at Careem, highlighted a growing concern: "While AI systems offer convenience and speed, they may also quietly expose sensitive information."
Today, millions of people use AI-powered assistants to write emails, fix computer code, plan business ideas, or even seek personal advice. These tools are fast, helpful, and often feel private. But according to experts, that sense of privacy can be misleading.
“People usually think of data leaks as hacks or cyberattacks,” Davide explained. “But with AI, the risk is different. It’s about how these systems learn from what you type.”
AI models, often called large language models, are trained on massive amounts of text. They don’t “remember” information the way humans do. Instead, they recognize patterns and use them to generate responses. This is what makes them sound natural and intelligent.
However, this learning process also creates a potential risk.
When users enter sensitive information, such as business plans, financial data, or personal details, they may assume it disappears after the conversation ends. In reality, depending on the platform, that information could be stored, reviewed, or even used to improve the system.
Even if the exact words are not saved or repeated, the system may still absorb patterns from that data. Over time, this can subtly influence how the AI responds to other users.
Experts call this “signal leakage.” It does not mean your exact data will appear somewhere else, but it does mean your input could shape the system in ways you cannot see.
Many organisations now use AI tools to summarise reports, analyse data, and draft important documents. But some employees may unknowingly upload confidential information into public AI systems, creating potential risks for their organisations.
In response, several companies have restricted or banned the use of public AI tools. Others have shifted to specialized enterprise versions that offer stronger privacy protections.
Startups and small businesses, however, may not always have access to these secure systems. In fast-moving environments, founders and teams often prioritize speed over data safety, sometimes pasting entire presentations or strategies into chatbots for quick feedback.
More individuals are now using AI for personal matters, including health questions, relationship advice, and emotional support. In these situations, chatbots can feel easier to talk to than another person.
But experts caution that this comfort does not guarantee confidentiality.
Even when organisations say they remove personal details from stored data, complete anonymity is difficult. Conversations often include identifying clues such as names, locations, or unique experiences.
Meanwhile, laws and regulations around AI are still catching up. Existing privacy rules offer some protection, but many questions remain unanswered about how data is handled and stored.
AI companies themselves face a difficult balance: they need data to improve their systems, but collecting that data raises privacy concerns.
Some platforms are introducing “private mode” or “zero-retention” options, where user inputs are not stored or used for training. While this improves privacy, it may slow down the development of better AI systems.
Transparency is another challenge. Many platforms use general terms like “improving user experience,” which may not clearly explain how user data is used.
For now, experts suggest a simple rule: "Treat AI like a public space."
If you would not share something on the open internet, it is best not to share it with a public AI tool. This includes passwords, financial information, confidential work documents, and deeply personal details.
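For readers who still need to paste text into a public chatbot, one practical habit is to strip obviously sensitive strings first. The sketch below is purely illustrative and is not mentioned in the article: the `PATTERNS` table and `redact` helper are hypothetical names, and simple regular expressions like these catch only the most obvious identifiers; they are no substitute for proper data-loss-prevention tooling.

```python
import re

# Illustrative patterns for obviously sensitive strings (assumed for this
# sketch). Real PII detection is much harder than a few regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\+?\d[\d -]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each match with a placeholder tag before sending a prompt."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Email me at jane.doe@example.com about invoice 4111 1111 1111 1111."
print(redact(prompt))
# → Email me at [EMAIL] about invoice [CARD].
```

Even with redaction, the surrounding context (a project name, a unique situation) can still identify you, which is why experts recommend simply leaving truly confidential material out of public tools altogether.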
As AI continues to grow and integrate into daily life, awareness becomes increasingly important. These tools offer enormous benefits, but they also require careful use.
Every question typed into a chatbot may seem private in the moment.
Whether it truly stays that way is something both users and the tech industry are still learning to understand.
The rise of artificial intelligence has brought undeniable benefits: speed, efficiency, and accessibility that were unimaginable just a few years ago. Yet, as Davide cautions, this convenience must be matched with awareness.
AI systems are not just tools; they are learning engines shaped by the data we provide. Every prompt, no matter how trivial it seems, contributes in some way to that learning process. While the risks may not always be visible or immediate, they are real and evolving.
The path forward lies in balance. Organisations must invest in secure technologies and clear policies, while individuals must adopt more mindful habits when using AI. Trust in these systems will not come from innovation alone, but from transparency, responsibility, and informed use.
In the end, the rule is simple but powerful: "Treat AI interactions as public by default. Because in a world driven by data, protecting what you share is just as important as the insights you gain."