CYBER

AI Security Crisis: CISOs Rely on Outdated Tools and Skills to Protect Modern AI Systems, Study Reveals

🕓 2 min read

A new industry report reveals a significant and growing security crisis in the age of artificial intelligence. According to the *AI and Adversarial Testing Benchmark Report 2026* from security firm Pentera, a majority of Chief Information Security Officers (CISOs) and senior security leaders are attempting to defend complex AI systems with tools, skills, and organizational structures that are fundamentally ill-equipped for the task. The study, which surveyed 300 U.S.-based CISOs, paints a concerning picture of widespread visibility gaps, decentralized ownership, and a critical shortage of specialized expertise, leaving corporate AI infrastructure vulnerable to novel threats.

The core of the problem lies in the pervasive and integrated nature of modern AI deployment. AI systems are no longer isolated experiments; they are deeply woven into the fabric of existing corporate technology stacks. From cloud platforms and identity management systems to core business applications and data pipelines, AI models interact with critical infrastructure in ways traditional security tools were not designed to monitor. This integration, coupled with ownership spread across disparate data science, development, and IT teams, has eroded effective centralized security oversight. Consequently, 67% of CISOs admitted to having limited visibility into how AI is actually being used within their organizations, and none claimed full visibility. Most leaders acknowledge the existence of unmanaged or "shadow AI" usage, creating a vast and poorly understood attack surface.
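To make the "shadow AI" visibility gap concrete, here is a minimal sketch of one common detection approach: matching outbound proxy-log destinations against a watchlist of public AI API endpoints. The log format, usernames, and endpoint list below are illustrative assumptions, not details from the Pentera report.

```python
# Minimal sketch: flag possible "shadow AI" usage by matching outbound
# proxy-log hostnames against a watchlist of public AI API endpoints.
# Log format and host list are hypothetical examples.

AI_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_shadow_ai(proxy_log_lines):
    """Return (user, host) pairs where the destination is a known AI API."""
    hits = []
    for line in proxy_log_lines:
        # Assumed log format: "<user> <destination-host> <bytes-sent>"
        user, host, _ = line.split()
        if host in AI_API_HOSTS:
            hits.append((user, host))
    return hits

sample_log = [
    "alice api.openai.com 5321",
    "bob internal.example.com 120",
    "carol api.anthropic.com 2048",
]
print(find_shadow_ai(sample_log))
# -> [('alice', 'api.openai.com'), ('carol', 'api.anthropic.com')]
```

In practice this kind of check only surfaces traffic to known endpoints; self-hosted or embedded models would still be invisible, which is part of why most leaders in the study could not claim full visibility.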

This lack of visibility directly translates into an inability to manage risk. Security teams are struggling to answer fundamental questions essential for threat modeling and defense: Which identities and permissions do AI agents use? What sensitive data can they access and exfiltrate? How do they behave when standard security controls fail or are bypassed? Without clear answers, organizations cannot properly assess the unique risks AI introduces, such as autonomous decision-making with security implications, indirect access paths that bypass traditional perimeters, and privileged, machine-to-machine interactions that evade human oversight.
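The threat-modeling questions above can be sketched as a simple audit: map each AI agent's identity to its granted permissions and flag any agent whose permission set touches sensitive data. The agent names and permission strings here are hypothetical, for illustration only.

```python
# Illustrative audit of AI agent identities and permissions.
# All agent names and permission labels are hypothetical.

AGENT_PERMISSIONS = {
    "support-chatbot": {"read:tickets", "read:customer-pii"},
    "code-assistant": {"read:repos"},
    "report-generator": {"read:finance-db", "write:s3-exports"},
}

# Permissions considered sensitive for this example.
SENSITIVE = {"read:customer-pii", "read:finance-db"}

def agents_with_sensitive_access(permissions):
    """Return agents whose permission set intersects the sensitive set."""
    return sorted(
        agent for agent, perms in permissions.items()
        if perms & SENSITIVE
    )

print(agents_with_sensitive_access(AGENT_PERMISSIONS))
# -> ['report-generator', 'support-chatbot']
```

Even a basic inventory like this answers the first of the questions security teams are struggling with; the harder problems, such as modeling how agents behave when controls fail, require dedicated adversarial testing rather than static permission review.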

Notably, the study indicates that the primary barrier to securing AI is not financial. Only 17% of CISOs cited budget constraints as a top concern, suggesting a willingness to invest. The true obstacles are a severe skills gap and technological lag. Security professionals lack the specialized expertise to evaluate AI-specific risks—like data poisoning, model theft, or adversarial attacks—in real-world environments. Furthermore, they are relying on legacy security controls that cannot interpret the novel behaviors and data flows of AI systems. Until organizations bridge this expertise chasm and adopt or develop tools designed for the AI era, their cutting-edge AI deployments will remain secured with yesterday's defenses, an inherently risky proposition.
