
Cybersecurity Snapshot: Top Advice for Detecting and Preventing AI Attacks, and for Securing AI Systems

As organizations eagerly adopt AI, cybersecurity teams are racing to protect these new systems. In this special edition of the Cybersecurity Snapshot, we round up some of the best recent guidance on how to fend off AI attacks, and on how to safeguard your AI systems.

Key takeaways

  1. Developers are getting new playbooks from groups like OWASP and OpenSSF to lock down agentic AI and AI coding assistants.
     
  2. Hackers are broadly weaponizing AI, using conventional LLMs to scale up phishing and new agentic AI tools to run complex, automated attacks.
     
  3. To fight fire with fire, organizations are unleashing their own agentic AI tools to hunt for threats and boost their cyber defenses.

In case you missed it, here’s fresh guidance for protecting your organization against AI-boosted attacks, and for securing your AI systems and tools.

1 - OWASP: How to safeguard agentic AI apps

Agentic AI apps are all the rage because they can act autonomously without human intervention. That’s also why they present a major security challenge. If an AI app can act on its own, how do you stop it from going rogue or getting hijacked?

If you’re building or deploying these “self-driving” AI apps, take a look at OWASP’s new “Securing Agentic Applications Guide.”

Published in August, this guide gives you “practical and actionable guidance for designing, developing, and deploying secure agentic applications powered by large language models (LLMs).”
 

Cover page of OWASP's “Securing Agentic Applications Guide”


It's a guide aimed at the folks in the trenches, including developers, AI/ML engineers, security architects, and security engineers. Topics include:

  • Technical security controls and best practices
  • Secure architectural patterns
  • Common threat mitigation strategies
  • Guidance across the development lifecycle (design, build, deploy, operate)
  • Security considerations for components such as LLMs, orchestration middleware, memory, tools, and operational environments

It even provides examples of how to apply security principles in different agentic architectures.
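
To give a flavor of what those controls look like in practice, here’s a minimal, hypothetical sketch (ours, not an example taken from the OWASP guide) of one pattern the guide’s categories cover: the agent can only invoke explicitly allow-listed tools, and every argument is validated before the call runs. All tool names and paths below are illustrative.

```python
# Hypothetical illustration (not taken from the OWASP guide): a minimal
# tool-call guardrail for an LLM agent. The agent may only invoke explicitly
# allow-listed tools, and every argument is validated before execution.

ALLOWED_TOOLS = {
    "read_file": {"path": str},
    "http_get": {"url": str},
}

def read_file(path: str) -> str:
    if ".." in path or path.startswith("/"):  # block path traversal
        raise ValueError("path outside sandbox")
    with open(f"sandbox/{path}", encoding="utf-8") as f:
        return f.read()

def http_get(url: str) -> str:
    if not url.startswith("https://internal.example.com/"):  # egress allow-list
        raise ValueError("destination not allowed")
    return f"GET {url}"  # placeholder for a real HTTP request

HANDLERS = {"read_file": read_file, "http_get": http_get}

def dispatch(tool_call: dict) -> str:
    """Run an agent-requested tool call only if it passes the guardrails."""
    name, args = tool_call.get("name"), tool_call.get("arguments", {})
    schema = ALLOWED_TOOLS.get(name)
    if schema is None:
        raise PermissionError(f"tool '{name}' is not allow-listed")
    for arg, expected_type in schema.items():
        if not isinstance(args.get(arg), expected_type):
            raise ValueError(f"invalid or missing argument '{arg}'")
    return HANDLERS[name](**args)

if __name__ == "__main__":
    print(dispatch({"name": "http_get",
                    "arguments": {"url": "https://internal.example.com/status"}}))
```

In a real deployment, a gate like this would sit alongside least-privilege credentials for each tool and full logging of every agent action.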

For more information about agentic AI security:

2 - Anthropic: How an attacker turned Claude Code into a master hacker

Think that OWASP guide is just theoretical? Think again. In a stark example of agentic AI's potential for misuse, AI vendor Anthropic recently revealed how a sophisticated cyber crook weaponized its Claude Code product to “an unprecedented degree” in a broad extortion and data-theft campaign.

It’s a remarkable story, even by the standards of the AI world. The hacker used this agentic AI coding tool to:

  • Automate reconnaissance.
  • Harvest victims’ credentials.
  • Breach networks.
  • Make tactical and strategic decisions, such as choosing which data to steal, and crafting “psychologically targeted” extortion demands.
  • Crunch stolen financial data to set the perfect ransom amounts.
  • Generate “visually alarming” ransom notes.

The incident takes AI-assisted cyber crime to another level.

“Agentic AI has been weaponized. AI models are now being used to perform sophisticated cyberattacks, not just advise on how to carry them out,” Anthropic wrote in an August blog post.

This new breed of agentic AI abuse makes security much harder: because the tool is autonomous, it can adapt to defenses in real time.
 

Illustration of an agentic AI attack

(Image generated by Tenable using Google Gemini)

By the time Anthropic shut the attacker down, at least 17 organizations had been hit, including healthcare, emergency services, government, and religious groups.

Anthropic says it has since built new classifiers – automated screening tools – and detection methods to catch these attacks faster.
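
Anthropic hasn’t published the internals of those classifiers. Purely as a hypothetical illustration of what automated screening can look like in principle (not Anthropic’s actual approach), the snippet below scores incoming prompts against a few abuse-signal patterns and flags high-scoring ones for review; a production system would rely on trained models rather than regular expressions.

```python
# Hypothetical sketch of automated prompt screening -- NOT Anthropic's actual
# classifiers, which are not public. It scores a prompt against abuse-signal
# patterns and flags it when the total passes a threshold.
import re

ABUSE_SIGNALS = [
    (r"\bransom (note|demand)\b", 5),
    (r"\bexfiltrat\w+\b", 4),
    (r"\bharvest(ed)? credentials?\b", 4),
    (r"\bbypass (edr|antivirus|detection)\b", 3),
]

def screen_prompt(prompt: str, threshold: int = 5) -> tuple[bool, int]:
    """Return (flagged, score) based on which abuse signals the prompt matches."""
    score = sum(weight for pattern, weight in ABUSE_SIGNALS
                if re.search(pattern, prompt, flags=re.IGNORECASE))
    return score >= threshold, score

if __name__ == "__main__":
    flagged, score = screen_prompt(
        "Draft a ransom note and list ways to exfiltrate the finance database.")
    print(flagged, score)  # True 9
```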

This incident, which Anthropic labeled “vibe hacking,” is just one of 10 real-world use cases included in Anthropic’s “Threat Intelligence Report: August 2025” that detail abuses of the company’s AI tools.

Anthropic said it hopes the report helps the broader AI security community strengthen its defenses.

“While specific to Claude, the case studies … likely reflect consistent patterns of behaviour across all frontier AI models. Collectively, they show how threat actors are adapting their operations to exploit today’s most advanced AI capabilities,” the report reads.

For more information about AI security, check out these Tenable Research blogs:

3 - CSA: Traditional IAM can’t handle agentic AI identity threats

The Anthropic attack, in which an agentic AI tool stole credentials, highlights a fundamental vulnerability: managing identities for autonomous systems. What happens when you give these autonomous AI systems the keys to your organization’s digital identities?

It’s a question that led the Cloud Security Alliance (CSA) to develop a proposal for how to better protect digital identities in agentic AI tools.

In its new paper "Agentic AI Identity and Access Management: A New Approach," published in August, the CSA argues that traditional approaches for identity and access management (IAM) fall short when applied to agentic AI systems.
 

Cover page of the CSA paper "Agentic AI Identity and Access Management: A New Approach"


“Unlike conventional IAM protocols designed for predictable human users and static applications, agentic AI systems operate autonomously, make dynamic decisions, and require fine-grained access controls that adapt in real-time,” the CSA paper reads.

The CSA’s solution? A new, adaptive IAM framework that ditches old-school, predefined roles and permissions for a continuous, context-aware approach.

The framework is built on several core principles:

  • Zero trust architecture
  • Decentralized identity management
  • Dynamic policy-based access control
  • Continuous monitoring

The CSA’s proposed framework is built on “rich, verifiable” identities that track an AI agent’s capabilities, origins, behavior, and security posture.

Key components of the framework include an agent naming service (ANS) and a unified global session-management and policy-enforcement layer.
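
To make “dynamic policy-based access control” concrete, here’s a small, hypothetical sketch (not the CSA’s reference design): each agent request is evaluated against the agent’s declared task, a behavioral risk score fed by continuous monitoring, and the sensitivity of the target resource, and an approved request earns only a short-lived, narrowly scoped grant. All names and thresholds are illustrative.

```python
# Hypothetical sketch (not the CSA's reference design) of dynamic,
# policy-based access control for an AI agent: requests are judged on
# context, and approvals yield only short-lived, narrowly scoped grants.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentContext:
    agent_id: str
    declared_task: str
    risk_score: float  # 0.0 (normal) to 1.0 (anomalous), from continuous monitoring

RESOURCE_SENSITIVITY = {"public-docs": 0.2, "customer-db": 0.9}

def authorize(ctx: AgentContext, resource: str, action: str) -> dict | None:
    """Grant a short-lived, scoped credential only if the context passes policy."""
    sensitivity = RESOURCE_SENSITIVITY.get(resource, 1.0)
    if ctx.risk_score + sensitivity > 1.0:  # deny risky agents access to sensitive data
        return None
    if resource not in ctx.declared_task:   # access must match the declared task
        return None
    return {
        "agent_id": ctx.agent_id,
        "scope": f"{action}:{resource}",
        "expires": (datetime.now(timezone.utc) + timedelta(minutes=5)).isoformat(),
    }

if __name__ == "__main__":
    ctx = AgentContext("agent-42", "summarize public-docs", risk_score=0.3)
    print(authorize(ctx, "public-docs", "read"))  # short-lived, scoped grant
    print(authorize(ctx, "customer-db", "read"))  # None: denied by policy
```

The point of a sketch like this is that nothing is granted statically: the decision is re-evaluated on every request as the agent’s behavior and context change.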

For more information about IAM in AI systems:

4 - OpenAI: Attackers abuse ChatGPT to sharpen old tricks

While agentic AI attacks illustrate novel AI-abuse methods, attackers are also misusing conventional AI chatbots for more pedestrian purposes.

For example, as OpenAI recently disclosed, attackers have attempted to use ChatGPT to refine malware, set up command-and-control hubs, write multi-language phishing emails, and run cyber scams.

In other words, these attackers weren’t trying to use ChatGPT to create sci-fi-level super-attacks, but mostly trying to amplify their classic scams, according to OpenAI’s report “Disrupting malicious uses of AI: an update.”

“We continue to see threat actors bolt AI onto old playbooks to move faster, not gain novel offensive capability from our models,” OpenAI wrote in the report, published in October.
 

OpenAI logo


The report identifies several key trends among threat actors:

  • Using multiple AI models
  • Adapting their techniques to hide AI usage
  • Operating in a "gray zone" with requests that are not overtly malicious

Incidents detailed in the report include the malicious use of ChatGPT by: 

  • Cyber criminals from Russian-speaking, Korean-language, and Chinese-language groups to refine malware, create phishing content, and debug tools
  • Authoritarian regimes, specifically individuals linked to the People's Republic of China (PRC), to design proposals for large-scale social media monitoring and profiling, including a system to track Uyghurs
  • Organized scam networks, likely based in Cambodia, Myanmar, and Nigeria, to scale fraud by translating messages and creating fake personas
  • State-backed influence operations from Russia and China to generate propaganda, including video scripts and social media posts

“Our public reporting, policy enforcement, and collaboration with peers aim to raise awareness of abuse while improving protections for everyday users,” OpenAI wrote in the statement “Disrupting malicious uses of AI: October 2025.”

For more information about AI security, check out these Tenable resources:

5 - Is your AI coding buddy a security risk?

Hackers aren't the only ones using AI to code. Your own developers are, too. But the productivity gain they get from AI coding assistants can be costly if they’re not careful.

To help developers with this issue, the Open Source Security Foundation (OpenSSF) published the “Security-Focused Guide for AI Code Assistant Instructions.”

“AI code assistants are powerful tools,” reads the OpenSSF blog “New OpenSSF Guidance on AI Code Assistant Instructions.” “But they also create security risks, because the results you get depend heavily on what you ask.”
 

Logo of the OpenSSF


The guide, published in September, gives developers tips and best practices for prompting these AI helpers in ways that reduce the risk that they’ll generate unsafe code.

Specifically, the guide aims to ensure that AI coding assistants consider:

  • Application code security, such as validating inputs and managing secrets
  • Supply chain safety, such as selecting safe dependencies and using package managers
  • Platform or language-specific issues, such as applying security best practices to containers
  • Security standards and frameworks, such as those from OWASP and the SANS Institute

“In practice, this means fewer vulnerabilities making it into your codebase,” reads the guide.
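
To make that concrete, here’s a small, hypothetical before-and-after in the spirit of the guide’s application-code-security advice (not an example taken from the guide itself): a parameterized query instead of string-built SQL, and a secret read from the environment instead of hardcoded in source.

```python
# Hypothetical before/after illustrating the kind of code that security-focused
# assistant instructions aim for (not an example from the OpenSSF guide itself).
import os
import sqlite3

# Secret comes from the environment, never hardcoded in source.
API_KEY = os.environ.get("PAYMENTS_API_KEY")

def find_user(conn: sqlite3.Connection, username: str):
    # Risky pattern an unguided assistant might produce (SQL injection):
    #   conn.execute(f"SELECT * FROM users WHERE name = '{username}'")
    # Safer pattern the instructions push for -- a parameterized query:
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchone()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice')")
    print(find_user(conn, "alice"))         # (1, 'alice')
    print(find_user(conn, "x' OR '1'='1"))  # None: injection attempt fails
```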

For more information about the cyber risks of AI coding assistants:

6 - PwC: Cyber teams can’t get enough AI

Finally, here’s how organizations are fighting back. They’re leaning heavily into AI to strengthen their cyber defenses, including by prioritizing the use of defensive agentic AI tools.

That’s according to PwC’s new “2026 Global Digital Trust Insights: C-suite playbook and findings” report, based on a global survey of almost 4,000 business and technology executives.

“AI’s potential for transforming cyber capabilities is clear and far-reaching,” reads a PwC article with highlights from the report, published in October.

For example, organizations are prioritizing the use of AI to enhance how they allocate cyber budgets; use managed cybersecurity services; and address cyber skills gaps.

Regarding respondents' priorities for AI cybersecurity capabilities in the coming year, threat hunting ranked first, followed by agentic AI. Other priority areas include identity and access management and vulnerability scanning / vulnerability assessments.
 

AI security capabilities organizations will prioritize over the next 12 months

Chart from the PwC report “2026 Global Digital Trust Insights: C-suite playbook and findings” showing organizations' AI security priorities


Meanwhile, organizations plan to use agentic AI primarily to bolster cloud security, data protection, and security operations in the coming year. Other agentic AI priority areas include security testing; governance, risk and compliance; and identity and access management.

“Businesses are recognising that AI agents — autonomous, goal-directed systems capable of executing tasks with limited human intervention — have enormous potential to transform their cyber programmes,” reads the report.

Beyond AI, the report also urges cyber teams to prioritize prevention over reaction. Proactive work like monitoring, assessments, testing, and training typically costs far less than the crisis-mode alternative of incident response, remediation, litigation, and fines. Yet only 24% of organizations said they spend “significantly more” on proactive measures than on reactive ones.
 

Chart from the PwC report “2026 Global Digital Trust Insights: C-suite playbook and findings” showing organizations' spending on reactive vs. proactive security measures

Other topics covered in the report include geopolitical risk; cyber resilience; the quantum computing threat; and the cyber skills gap.

For more information about AI data security, check out these Tenable resources:

Check back here next Friday, when we’ll share some of the best AI risk-management and governance best practices from recent months.

