CXOs warn of an ‘AI Vulnerability Storm’ and call for security by design

In a recent extended discussion within our CXO community, a strong consensus emerged that the cybersecurity landscape is entering a more volatile phase, one many are beginning to describe as the “AI Vulnerability Storm.”
Security leaders agreed that the challenge “is no longer simply about responding faster, but about fundamentally rethinking how vulnerabilities are discovered, prioritised, and mitigated in an era where artificial intelligence is accelerating both offensive and defensive capabilities.”
At the center of this shift is growing concern around large-scale, AI-driven vulnerability discovery programs, sometimes referred to in industry circles as “Mythos-style” initiatives.
While these systems promise unprecedented visibility into software weaknesses, CXOs cautioned that they also risk overwhelming organizations with a surge of newly identified issues, many of which demand urgent attention and strain already limited security resources.
Honestly, it’s not the noise that concerns us, but what’s being overlooked because of it. The narrative has become almost mythological. People are focusing on what they think the system is, rather than what it actually implies from a security standpoint. That gap between perception and reality is where risks start to grow.
From what can be reasonably inferred, GLASSWING seems to push toward AI systems with deeper contextual awareness and possibly persistent memory. That’s a significant shift. Traditional systems process inputs in isolation, but here you’re potentially dealing with accumulated context over time. That opens the door to new attack surfaces, especially subtle, long-term manipulation rather than immediate exploits.
Think of it like this: instead of hacking a system in one go, an attacker could slowly influence it over multiple interactions. If the system retains context, those small manipulations can build up. Over time, the system’s behaviour might shift in ways that are hard to detect. That’s a very different security challenge compared to what we’re used to.
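The dynamic the experts describe can be sketched in a few lines. The example below is purely illustrative (the class, thresholds, and scoring are hypothetical, not any real system): a stateless filter screens each input in isolation, so a sequence of individually harmless nudges can still push accumulated context past a point the per-message check never sees.

```python
# Illustrative sketch (hypothetical names and numbers): no single input is
# flagged as malicious, yet persistent context accumulates until behaviour
# shifts -- the "slow manipulation" pattern described above.

class PersistentAssistant:
    PER_MESSAGE_LIMIT = 0.3   # a stateless filter screens each input alone
    DRIFT_THRESHOLD = 1.0     # accumulated bias at which behaviour shifts

    def __init__(self) -> None:
        self.accumulated_bias = 0.0  # stands in for retained context/memory

    def ingest(self, bias_score: float) -> bool:
        # The filter only sees the current message, not the history.
        if bias_score > self.PER_MESSAGE_LIMIT:
            return False  # blocked: obviously manipulative on its own
        self.accumulated_bias += bias_score  # but the context persists
        return True

    def behaviour_shifted(self) -> bool:
        return self.accumulated_bias >= self.DRIFT_THRESHOLD


assistant = PersistentAssistant()
# Five small nudges, each individually under the per-message limit:
results = [assistant.ingest(0.25) for _ in range(5)]
print(all(results))                   # every message passed the filter
print(assistant.behaviour_shifted())  # yet the accumulated drift crossed the line
```

The point of the toy model is that the defence (a per-message limit) and the risk (cumulative drift) live at different timescales, which is why stateless checks alone do not cover systems with persistent memory.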
“Not fully,” said the experts. Security is still too often treated as an afterthought, something you test once the system is already built. But with systems like this, that approach doesn’t hold. Security needs to be part of the architecture from day one. Otherwise, you’re just patching vulnerabilities in something that was never designed to be resilient in the first place.
Data provenance is a big issue. If a system is continuously learning or adapting from incoming data, you must ask where that data is coming from and how trustworthy it is. If someone manages to poison that data stream, the system doesn’t just make one bad decision; it can internalize that corruption. And because the outputs may still appear coherent, it’s harder to catch.
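A minimal sketch of that poisoning dynamic, under simplifying assumptions (the class and numbers below are hypothetical): a model that continuously adapts via an exponential moving average will internalize a systematically biased stream, even though every individual observation and output looks plausible on its own.

```python
# Hypothetical sketch: a continuously adapting estimator with no provenance
# check on its input stream. Each poisoned observation is only slightly off
# the truth, yet the internal estimate absorbs the bias over time.

class AdaptiveEstimator:
    def __init__(self, initial: float, rate: float = 0.05):
        self.estimate = initial
        self.rate = rate  # how strongly each new observation pulls the estimate

    def update(self, observation: float) -> float:
        # Exponential moving average: estimate drifts toward incoming data.
        self.estimate += self.rate * (observation - self.estimate)
        return self.estimate


model = AdaptiveEstimator(initial=100.0)  # true value in this toy: 100
# Attacker feeds observations that are plausible but systematically biased:
for _ in range(200):
    model.update(110.0)

# The estimate has converged toward the poisoned stream, not the truth --
# and every intermediate output along the way still looked coherent.
print(round(model.estimate, 1))
```

This is why the experts stress provenance: once a corrupted stream is trusted as ground truth, no single output reveals the manipulation, and only checks on where the data comes from can.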
Transparency is essential, not only for maintaining public trust, but for strengthening security itself. When systems are unclear or poorly documented, it becomes far more difficult for experts to identify and address potential weaknesses. Openness invites scrutiny, and that scrutiny ultimately makes systems more resilient. Without it, organizations are forced to rely on assumptions rather than evidence, increasing overall risk.
To an extent, yes, but not at the cost of accountability. There’s a difference between protecting intellectual property and obscuring fundamental system behaviours. Especially when technologies have wide-reaching impact, clarity should not be optional.
The tendency to equate complexity with inevitability. Just because something can be built doesn’t mean it should be deployed without constraints. Security engineering is about anticipating failure and designing systems that fail safely. Right now, the conversation is too focused on capability and not enough on failure modes.
As our CXO community experts collectively highlighted, we need to move away from mythologizing projects like GLASSWING. The real conversation should be about responsibility, safeguards, and long-term impact. If we don’t ground these discussions in reality, we risk building systems that are impressive, but not secure. And that’s a trade-off we can’t afford to make.
We are in conversations with more experts and solution vendors. Stay tuned…