At the RSAC 2026 Conference in San Francisco, a central question emerged regarding enterprise artificial intelligence deployments: do these systems require a "human in the loop," or does manual oversight limit operational speed and scalability?
During a panel titled "Including Threat and Strategy: The CISO's Playbook for the AI Revolution," security executives examined evolving AI use cases and the requirements for safe deployment. Moderated by James Rundle of The Wall Street Journal, the discussion featured Francis deSouza, Google Cloud chief operating officer and president of security products; Emma Smith, Vodafone global CISO; and Shaun Khalfan, PayPal senior VP and CISO.
The integration of LLM-powered security tools has reshaped the broader security landscape. Securing AI systems demands strict standards to prevent sensitive corporate data from being exposed through vulnerabilities such as prompt injection. The shared data security model between AI vendors and customers remains operationally complex, and engineering practices like relying heavily on AI-generated code without adequate human review can introduce new structural risks, adding complexity to the CISO's mandate. Industry studies indicate that many organizations are still maturing their AI security and governance programs.
The panelists shared their current operational baselines. Google reports that 50% of its code is currently AI-generated with developer assistance. Vodafone security analysts use AI systems to automate workflows and generate executive summaries of technical data. Khalfan noted that PayPal utilizes AI to support fraud detection across its one billion monthly transactions.
Smith detailed Vodafone's realization that adopting AI safely requires a top-down approach from leadership to ensure ethical and responsible integration. Vodafone's architectural solution is AI Booster, a centralized machine learning platform built on Google's Vertex AI. It features a central, reusable codebase that allows the organization to deploy established use cases quickly via pre-trained models and custom tools. This centralization gives Vodafone's privacy engineering team a consistent framework to review each use case, track business value, and verify that proper guardrails are in place.
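The gatekeeping pattern described above can be sketched as a use-case registry in which nothing deploys until a privacy review has signed off and guardrails are attached. This is a minimal illustration only; the class names, fields, and approval logic are assumptions, not details of Vodafone's AI Booster platform.

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """Hypothetical registry entry for a centralized ML platform."""
    name: str
    base_model: str              # e.g. a pre-trained model on the platform
    business_value: str          # tracked so value can be reviewed over time
    guardrails: list[str] = field(default_factory=list)
    privacy_approved: bool = False

class UseCaseRegistry:
    """Central catalog giving a privacy team one place to review deployments."""

    def __init__(self) -> None:
        self._cases: dict[str, UseCase] = {}

    def register(self, case: UseCase) -> None:
        self._cases[case.name] = case

    def deployable(self, name: str) -> bool:
        # A use case ships only after privacy approval and with guardrails set.
        case = self._cases[name]
        return case.privacy_approved and bool(case.guardrails)
```

The design point is that centralization makes the review step unavoidable: every use case passes through one registry, so the privacy engineering team sees a consistent shape for each request.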
Evaluating the human operational role
The panel evaluated the "human in the loop" model—the practice of requiring human validation for LLM outputs at specific steps. deSouza noted that manual defense processes are often too slow to mitigate automated, agent-driven security threats. Because of this velocity mismatch, Google is moving toward agent-assisted defense architectures.
Smith agreed that relying strictly on human review is difficult to sustain for scaled operations.
"I totally agree that a human in the loop is not scalable if we think about our traditional security controls," Smith said. "Let's face it, we rely on the ones that are technical and automated and that we can prove over time. A human in the loop is not the solution for the long term, certainly on scaled operations."
Instead, organizations can position personnel "on the loop" to review insights and guide AI systems asynchronously. Smith noted that Vodafone utilizes a heat map to evaluate the confidence and potential risk impact of AI outcomes. For use cases with a high risk impact, the organization strictly enforces a human-in-the-loop requirement unless an overriding business benefit justifies an alternative, highly monitored approach.
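The heat-map logic Smith describes can be sketched as a function that maps a confidence score and risk impact to an oversight mode, with a monitored override path for high-impact cases that carry an overriding business benefit. The thresholds and mode names below are illustrative assumptions, not Vodafone's actual rules.

```python
from enum import Enum

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "human validates each output"
    HUMAN_ON_THE_LOOP = "human reviews insights asynchronously"
    AUTOMATED = "fully automated with monitoring"

def required_oversight(confidence: float, risk_impact: str,
                       business_override: bool = False) -> Oversight:
    """Map a (confidence, risk) heat-map cell to an oversight mode.

    Thresholds are hypothetical; the panel described the heat map
    only at a high level.
    """
    if risk_impact == "high":
        # High-impact use cases default to strict human-in-the-loop unless
        # a business case justifies a highly monitored alternative.
        if business_override:
            return Oversight.HUMAN_ON_THE_LOOP
        return Oversight.HUMAN_IN_THE_LOOP
    if risk_impact == "medium" or confidence < 0.8:
        return Oversight.HUMAN_ON_THE_LOOP
    return Oversight.AUTOMATED
```

Keeping the decision in one function makes the "on the loop" posture auditable: each AI outcome carries an explicit record of why a human did or did not gate it.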
Structuring data security and industry collaboration
Khalfan emphasized the necessity of embedding AI initiatives within a comprehensive compliance and risk framework. While PayPal takes advantage of AI tooling's engineering benefits, he stated that the data security wrapper around those efforts is equally critical.
"When we think about our key AI principles, it's data and security. It's privacy, it's transparency, it's explainability," Khalfan said. "As we wrap everything we're doing in these principles, it helps us keep this anchor of all of the efforts that we're making."
To operationalize this, PayPal's AI teams categorize models in tiers based on data sensitivity. This classification determines the specific controls required to protect stored data from tampering and unauthorized inputs, including prompt injections. It also guides how the organization manages the multiple identities required by AI agents.
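A tiering scheme like the one Khalfan describes can be sketched as a classification step that assigns each model a sensitivity tier, with the tier determining a baseline-plus set of required controls. The tier names, classification inputs, and control lists below are illustrative assumptions, not PayPal's actual taxonomy.

```python
# Controls every model gets, regardless of tier (illustrative).
BASELINE = ["output logging", "model version pinning"]

# Higher-sensitivity tiers layer on protections against tampering,
# unauthorized inputs such as prompt injection, and agent identity sprawl.
TIER_CONTROLS = {
    "tier1_regulated": BASELINE + ["prompt-injection filtering",
                                   "per-agent identity with least privilege",
                                   "tamper-evident storage"],
    "tier2_internal": BASELINE + ["prompt-injection filtering"],
    "tier3_public": BASELINE,
}

def classify_model(handles_payment_data: bool,
                   handles_internal_data: bool) -> str:
    """Assign a model to a sensitivity tier based on the data it touches."""
    if handles_payment_data:
        return "tier1_regulated"
    if handles_internal_data:
        return "tier2_internal"
    return "tier3_public"

def required_controls(tier: str) -> list[str]:
    return TIER_CONTROLS[tier]
```

Deriving controls from the tier, rather than assigning them per model, keeps the anchor Khalfan describes: every new AI effort inherits the same principled wrapper automatically.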
Khalfan also pointed to the value of broader ecosystem collaboration, specifically referencing the Coalition for Secure AI (CoSAI). This industry-wide initiative provides documentation, white papers, and standardized methodologies to support secure AI development across different workstreams.
Alexandra Rose, director of government partnerships and the Counter Threat Unit at Sophos, summarized the objective of safe AI deployment as a balance of innovation and protection.
"I think it's important that security is not the world of no," she said. "It's how do we get to yes, and how do we get to a yes in a way that we're protected?"