Cyber leaders say clear thresholds and governance are key to safe deployment

We are seeing CEO mandates that focus on doing more with less: “minimizing unnecessary spend, reducing operational risk, and improving both performance and compliance”, with an emphasis on resilience.
We have observed similar mandates from CFOs: “ensure you operate within strict ROI discipline, approving no technology purchase without quantified risk exposure, cost-benefit analysis, and payback justification. Budget priorities favour governance, process optimization, and administrative controls over technology spending.”
We floated these mandates in the CXO community to gather perspectives. Tushar Vartak, a seasoned CISO and EVP & Head of Information, Cyber Security and Fraud Prevention at RAKBANK, shared his view:
“The Autonomy Ladder and Confidence Rope model provides a practical framework for AI adoption in this context. By viewing AI as a series of stepped capabilities, from advisory support to code assistance to autonomous workflow execution, cyber leaders can calibrate how much authority to grant AI at each stage. The Confidence Rope concept ensures that oversight, accountability, and controls scale alongside autonomy, so that organizations can capture efficiency gains without exposing themselves to uncontrolled risk. In essence, it allows organisations to align with the CEO’s mandate to deploy AI in a way that reduces cost and risk while boosting productivity and compliance”.
Vartak added: think of Anthropic’s tools, Claude, Claude Code, and Cowork, as a ladder of autonomy. At the first rung, AI augments human thinking and analysis. Next, it helps create and modify software. At the highest rung, it executes multi-step tasks across systems independently. As autonomy increases, so must trust, guardrails, and oversight. The key leadership question is not whether to use AI, but how much authority to delegate, and with what safeguards.
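The ladder described here can be sketched as a simple mapping from autonomy level to the minimum controls that must accompany it. This is a hypothetical illustration only; the level names and safeguard lists are assumptions, not a published framework:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Rungs of the autonomy ladder, from advisory to agentic."""
    ADVISORY = 1    # AI augments human thinking and analysis
    ASSISTED = 2    # AI helps create and modify software
    AUTONOMOUS = 3  # AI executes multi-step tasks independently

# Hypothetical mapping: oversight scales with autonomy (the "Confidence Rope").
REQUIRED_SAFEGUARDS = {
    AutonomyLevel.ADVISORY: {"output review"},
    AutonomyLevel.ASSISTED: {"output review", "code review", "CI security gates"},
    AutonomyLevel.AUTONOMOUS: {"output review", "code review", "CI security gates",
                               "scoped credentials", "audit trail", "kill switch"},
}

def safeguards_for(level: AutonomyLevel) -> set:
    """Return the minimum control set a deployment at this level must carry."""
    return REQUIRED_SAFEGUARDS[level]
```

The point the mapping makes is structural: each rung's control set strictly contains the one below it, so authority is never delegated without a corresponding increase in oversight.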
Another CISO, Bharat Raigangar, Global Head of Cybersecurity & AI Risk, explains that AI shifts from augmentation to delegated authority when it moves from supporting human decisions to making and executing them without immediate review. Tools like OpenAI’s language models assist humans, who retain final control. By contrast, high-frequency trading systems on a stock exchange, or autonomous vehicles making real-time road decisions, operate with delegated authority. The distinction is critical: “advisory systems inform judgment, agentic systems act”. For boards and risk committees, that shift marks the point where oversight must rise from IT management to enterprise-level governance.
AI adoption must align with the CEO mandate to reduce operational cost, lower risk, improve performance, and strengthen compliance. As AI moves from advisory insights to assisted execution and ultimately autonomous workflows, decision velocity increases, along with potential exposure. While analytical AI primarily drives efficiency with limited incremental risk, AI-assisted coding and autonomous system changes can introduce security, operational, and compliance vulnerabilities if not properly governed. The organization must therefore define a clear autonomy threshold, ensuring that greater speed and automation translate into measurable performance gains and cost savings without increasing residual or systemic risk.
As AI adoption advances, the concept of “blast radius”, the scope of impact if something goes wrong, becomes central to control design. An advisory model may only affect decision quality, but insecure AI-generated code can expose customers and trigger regulatory risk, while autonomous agents executing unintended changes can disrupt entire infrastructures. Before deployment, organizations must define clear failure boundaries and non-negotiable guardrails, including human-in-the-loop approvals for production changes, enforced code review standards, scoped access controls, segregated testing environments, and real-time monitoring with audit trails. The greater the level of autonomy, the more robust and layered the control architecture must be to contain risk while preserving the intended gains in cost efficiency and performance.
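One way to encode such non-negotiable guardrails is a pre-execution gate that blocks any production-affecting AI action lacking human approval or exceeding its approved scope. A minimal sketch; the action fields and the two-rule policy are illustrative assumptions:

```python
def gate_ai_action(action: dict) -> tuple:
    """Allow an AI-proposed action only if it stays inside declared failure boundaries.

    `action` is a hypothetical record, e.g.:
    {"target_env": "production", "human_approved": False,
     "scope": ["payments-api"], "allowed_scope": ["payments-api"]}
    """
    # Guardrail 1: production changes always require human-in-the-loop approval.
    if action.get("target_env") == "production" and not action.get("human_approved"):
        return False, "blocked: production change lacks human approval"
    # Guardrail 2: the action must not exceed its pre-approved scope (blast radius).
    if not set(action.get("scope", [])) <= set(action.get("allowed_scope", [])):
        return False, "blocked: action exceeds scoped access"
    return True, "allowed"
```

In practice such a gate would sit in front of the agent's execution layer, so a denial is enforced before any system change occurs rather than detected after it.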
In a discussion, Dhiraj Sasidharan, a senior information security executive at a financial services organization, observed that “one of the most significant challenges lies in embedding AI into existing DevSecOps frameworks without weakening them”.
Speed and scale represent AI's most compelling contributions to DevSecOps. Yet speed, pursued without discipline, can quietly erode the very security controls it aims to strengthen. The challenge isn't whether to adopt AI in security pipelines, but how to do so without introducing the kind of subtle, systemic risk that compounds over time.
The right approach treats AI as an intelligence and orchestration layer that sits above existing security tooling rather than displacing it. Deterministic controls, such as SAST scanning, container vulnerability checks, admission control, and runtime protection, must remain policy-driven and enforced through GitOps workflows. AI adds genuine value by triaging findings, correlating vulnerabilities across contexts, recommending fixes, and generating remediation pull requests. But critically, every AI-driven action must pass through the same verification stages we require of human changes. Signed commits, SBOM generation, image signing, and policy enforcement ensure that AI contributions remain traceable, auditable, and reversible through standard Git reconciliation. Runtime protections such as RASP, eBPF-based telemetry, and network policy enforcement then validate that what's actually running matches declared intent.
As AI agents begin to take operational actions, not merely advising but executing, the question of accountability becomes unavoidable. Each agent must operate under a distinct identity with minimal privileges, produce cryptographically signed actions, and feed into an immutable audit trail that captures prompts, reasoning, and resulting system changes. Incident response must extend naturally to AI-initiated events, giving teams the ability to freeze autonomous activity, reconstruct decision paths, and remediate through GitOps rollback and policy updates.
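A hash-chained, HMAC-signed log is one simple way to make such an audit trail tamper-evident: each entry signs its content together with the hash of the previous entry, so edits or deletions break verification. A sketch using only the standard library; the class, its field names, and the per-agent key scheme are assumptions:

```python
import hashlib
import hmac
import json

class AgentAuditLog:
    """Append-only log of agent actions; each entry chains to the previous one."""

    def __init__(self, agent_key: bytes):
        self._key = agent_key          # per-agent secret, scoped to one identity
        self._entries = []
        self._last_hash = "genesis"

    def record(self, prompt: str, reasoning: str, change: str) -> dict:
        """Append a signed entry capturing prompt, reasoning, and system change."""
        body = {"prompt": prompt, "reasoning": reasoning,
                "change": change, "prev": self._last_hash}
        payload = json.dumps(body, sort_keys=True).encode()
        entry = dict(body, sig=hmac.new(self._key, payload, hashlib.sha256).hexdigest())
        self._last_hash = hashlib.sha256(payload).hexdigest()
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edited or dropped entry fails verification."""
        prev = "genesis"
        for entry in self._entries:
            if entry["prev"] != prev:
                return False
            body = {k: entry[k] for k in ("prompt", "reasoning", "change", "prev")}
            payload = json.dumps(body, sort_keys=True).encode()
            expected = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
            if not hmac.compare_digest(entry["sig"], expected):
                return False
            prev = hashlib.sha256(payload).hexdigest()
        return True
```

A production trail would additionally live in write-once storage and use asymmetric signatures, but the chaining principle is the same: freezing and reconstructing an agent's decision path requires that no link can be silently rewritten.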
Equally important is data governance. AI access must be tiered by data classification, with strict controls preventing exposure of secrets or sensitive production data, enforced through network policy, service mesh authorization, and formal data governance frameworks. These aren't optional safeguards; they're foundational to operating AI at enterprise scale without creating new categories of risk.
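Tiered access by data classification can be reduced to comparing an agent's clearance against the data's tier, with a hard stop on the most sensitive material. A toy sketch; the tier names and the AI-specific production restriction are illustrative assumptions:

```python
# Ordered data classification tiers (higher index = more sensitive).
TIERS = ["public", "internal", "confidential", "restricted"]

def may_access(agent_clearance: str, data_class: str, is_production: bool = False) -> bool:
    """An AI agent may read data at or below its clearance tier.

    Assumed policy: secrets ("restricted") and sensitive production data are
    never exposed to AI, regardless of the agent's clearance.
    """
    if data_class == "restricted":
        return False
    if is_production and TIERS.index(data_class) >= TIERS.index("confidential"):
        return False
    return TIERS.index(data_class) <= TIERS.index(agent_clearance)
```

In a real deployment this decision would be enforced at the network and service-mesh layer rather than in application code, so a misbehaving agent cannot simply bypass the check.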
When these controls are in place, organizations gain something genuinely valuable: “the ability to accelerate vulnerability remediation, sharpen signal detection, and streamline DevSecOps workflows while preserving regulatory compliance, operational accountability, and the security boundaries that underpin trust”.
All these experts collectively highlighted that as AI capability expands, data exposure and governance become central concerns. AI is only as powerful as the data it can access, but broader access increases risk. Source code, security architecture, incident response records, and personally identifiable information carry intellectual, operational, and regulatory sensitivity. As AI evolves from summarization to coding and autonomous workflows, its required access and potential exposure grow. Organizations must set clear data classification boundaries per use case and enforce strong controls, including logging and monitoring of AI activity, data loss prevention, regional data residency compliance, defined retention policies, and, where needed, sandboxing standards. AI must be treated as a first-class participant in the data governance framework, not a peripheral tool, to balance performance with risk control.
They also mentioned that the progression from reasoning assistance to code generation to autonomous workflow execution represents a clear value ladder: AI enhances human decision-making speed, accelerates technical delivery, and automates repetitive or multi-step processes. When deployed effectively, this stack can materially reduce operating costs, improve cycle times, minimize human error, and increase throughput, directly supporting the CEO mandate to improve performance and efficiency at scale.
However, each step up the autonomy curve requires proportionally stronger governance to ensure that cost savings are not offset by increased operational, security, or compliance risk. Traditional controls, such as secure coding standards, penetration testing, access management, and vulnerability remediation, remain essential. What shifts is the allocation of investment and oversight. Organizations must complement existing practices with structured AI governance, continuous monitoring, and autonomous risk controls that match the speed of execution.
AI therefore represents both a productivity multiplier and a potential risk amplifier. The strategic objective is clear: “capture the productivity dividend while systematically engineering down residual risk, ensuring that automation drives measurable cost reduction, performance improvement, and strengthened compliance, without unintended exposure”.
Organizations will unlock sustainable AI value when it directly advances three executive priorities, namely, lowering operational cost, reducing risk exposure, and improving performance and regulatory compliance, while strengthening resilience. AI must be managed not as a convenience tool, but as a governed enterprise asset embedded within security, risk, and compliance frameworks.
Before expanding autonomy, leadership should require formal risk assessments, tightly defined and approved use cases, continuous monitoring of AI actions, and clear executive oversight as authority scales. As AI evolves from decision support to code generation to autonomous execution, governance and control mechanisms must mature in parallel.
When calibrated correctly, AI drives efficiency, accelerates execution, enhances control consistency, and improves compliance posture. The competitive advantage comes from scale and speed; resilience comes from disciplined oversight. The real measure of enterprise AI success is not how much autonomy is deployed, but how effectively authority is aligned to cost optimization, risk reduction, performance improvement, and sustained compliance.