Who's Afraid of AI Risk in Cloud Environments?
The Tenable Cloud AI Risk Report 2025 reveals that 70% of AI cloud workloads have at least one unremediated critical vulnerability — and that AI developer services are plagued by risky permissions defaults. Find out what to know as your organization ramps up its AI game.
With AI bursting out all over, these are exhilarating times. Developers’ use of self-managed AI tools and cloud-provider AI services is on the rise as engineering teams rush to the AI front. This uptick, combined with the fact that AI models are data-thirsty — requiring huge amounts of data to improve accuracy and performance — means more and more AI resources and data live in cloud environments. The million-dollar question for cybersecurity teams is: What is AI growth doing to my cloud attack surface?
The Tenable Cloud AI Risk Report 2025 by Tenable Cloud Research revealed that AI tools and services are indeed introducing new risks. How can you prevent them?
Let’s look at some of the findings and related challenges, and at proactive AI risk reduction measures within easy reach.
Why we conducted this research
Using data collected over a two-year period, the Tenable Cloud Research team analyzed in-production workloads and assets across cloud and enterprise environments — including Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP). We sought to understand adoption levels of AI development tooling, frameworks and AI services, and to carry out a reality check on any emerging security risks. The aim? To help organizations become more aware of AI security pitfalls. In parallel, our research helps fuel Tenable’s constantly evolving cloud-native application protection platform (CNAPP) to best help our customers address these new risks.
Key concerns
Let’s explore two of the findings — one in self-managed AI tooling, the other in AI services.
- 70% of cloud workloads with AI software installed contained at least one unremediated critical CVE. One of the CVEs observed was a critical curl vulnerability that remained unremediated more than a year after it was published. Any critical CVE makes a workload a prime target for bad actors; a CVE in an AI workload is even more cause for concern given the potential sensitivity of the data within and the impact should it be exploited.
- Like other cloud services, AI services contain risky defaults in cloud-provider building blocks that users are often unaware of. We previously reported on the Jenga® concept — a pattern in which cloud providers build one service on top of another, with risky defaults inherited from one layer to the next. The same holds for AI services. Specifically, 77% of organizations that had set up Vertex AI Workbench in Google Cloud had at least one notebook whose attached service account was the overly privileged default Compute Engine service account — creating serious permissions risk (a minimal check for this default is sketched after this list).
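To make the Vertex AI Workbench finding concrete, here is a minimal sketch that flags notebooks attached to the default Compute Engine service account. It assumes instance metadata has been exported, for example with `gcloud notebooks instances list --format=json`, and that each entry exposes a serviceAccount field; the input shape and field names are assumptions, so adjust them to whatever your export actually contains.

```python
#!/usr/bin/env python3
"""Flag Vertex AI Workbench notebooks that run as the default Compute Engine
service account. Minimal sketch: assumes a JSON export of notebook instances
(e.g. from `gcloud notebooks instances list --format=json`) in which each
entry carries `name` and `serviceAccount` fields (assumed input shape)."""

import json
import sys

# The default Compute Engine service account follows this well-known pattern:
# PROJECT_NUMBER-compute@developer.gserviceaccount.com
DEFAULT_COMPUTE_SA_SUFFIX = "-compute@developer.gserviceaccount.com"


def flag_default_sa(instances: list[dict]) -> list[str]:
    """Return the names of notebook instances attached to the default service account."""
    flagged = []
    for inst in instances:
        sa = inst.get("serviceAccount", "")
        if sa.endswith(DEFAULT_COMPUTE_SA_SUFFIX):
            flagged.append(inst.get("name", "<unnamed>"))
    return flagged


if __name__ == "__main__":
    # Usage: python flag_default_sa.py notebooks.json
    with open(sys.argv[1]) as f:
        instances = json.load(f)
    for name in flag_default_sa(instances):
        print(f"Default Compute Engine service account attached: {name}")
```

Notebooks flagged this way are candidates for reattaching a dedicated, least-privilege service account.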
An unremediated critical CVE in any cloud workload is of course a security risk that should be addressed in accordance with an organization’s patch and risk management policy, with prioritization that takes into account impact and asset sensitivity. Such a high incidence of critical vulnerabilities in AI cloud workloads is an alarm bell. AI workloads potentially contain sensitive data. Even training and testing data can contain real information related to the nature of the AI project, such as personal information (PI), personally identifiable information (PII) or customer data. If exploited, exposed AI compute or training data can lead to data poisoning, model manipulation and data leakage. Teams must overcome the challenges of alert noise and risk prioritization to make mitigating critical CVEs, especially in AI workloads, a strategic mission.
Why risky access defaults in AI services are a concern and challenge
Securing identities and entitlements is a challenge in any cloud environment. Overprivileged permissions are even riskier when embedded in AI service building blocks, as they often involve sensitive data. You must be able to see risk to fix it. Lack of visibility in cloud and multicloud environments, siloed tools that prevent seeing risks in context, and reliance on cloud-provider security all make it difficult for organizations to spot and mitigate risky defaults and other access risks that attackers are looking for.
Key actions for preventing such AI risks
The Artificial Intelligence Index Report 2024, published by Stanford University, noted that organizations’ top AI-related concerns include privacy, data security and reliability; yet most have so far mitigated only a small portion of these risks. Established security best practices can go a long way toward getting ahead of AI risk.
Here are three basic actions for reducing the cloud AI risks we discussed here:
- Prioritize the most impactful vulnerabilities for remediation. Part of the root cause behind slow-to-no CVE remediation is human nature. CVEs are a headache: noisy, persistent, and often amplified by solutions that overwhelm teams with notifications. Cloud security teams own the risk but rely on the cooperation of other teams to mitigate exposures. Understand which CVEs have the greatest potential impact so you can guide teams to tackle the high-risk vulnerabilities first. Advanced tools help by factoring exploitation likelihood into risk scoring (a simplified scoring sketch follows this list).
- Reduce excessive permissions to curb risky access. Under the shared responsibility model it is up to you to protect your organization from risky access — don’t assume the permissions settings in AI services are risk-free. Continuously monitor to identify and eliminate excessive permissions across identities, resources and data, including cloud-based AI models and data stores, to prevent unauthorized or overprivileged access. Tightly manage cloud infrastructure entitlements by implementing least privilege and just-in-time (JIT) access. Review risk in context to spot cloud misconfigurations, including toxic combinations involving identities (a minimal permissions-scanning sketch also follows this list).
- Classify as sensitive all AI components linked to high-business-impact assets (e.g., sensitive data, privileged identities). Include AI tools and data in security inventories and assess their risk regularly. Use data security posture management capabilities to granularly assign the appropriate sensitivity level.
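To illustrate the first action, here is a simplified sketch of impact-based prioritization that weights a CVE’s severity by an exploitation-likelihood estimate (such as an EPSS probability) and boosts findings on AI workloads. The weights, field names and example values are illustrative assumptions, not any vendor’s actual scoring algorithm.

```python
"""Rank CVE findings by severity, exploitation likelihood and asset sensitivity.
Simplified sketch: the formula and weights are illustrative assumptions."""

from dataclasses import dataclass


@dataclass
class Finding:
    cve_id: str
    cvss: float        # CVSS base score, 0-10
    epss: float        # estimated exploitation probability, 0-1 (illustrative values below)
    ai_workload: bool  # does the finding sit on an AI workload with potentially sensitive data?


def priority(f: Finding) -> float:
    """Temper severity by exploitation likelihood, then boost AI workloads."""
    score = (f.cvss / 10) * (0.5 + 0.5 * f.epss)    # severity weighted by likelihood
    return score * (1.5 if f.ai_workload else 1.0)  # sensitive-asset multiplier (assumed weight)


findings = [
    Finding("CVE-2023-38545", cvss=9.8, epss=0.05, ai_workload=True),    # curl SOCKS5 heap overflow
    Finding("CVE-2021-44228", cvss=10.0, epss=0.95, ai_workload=False),  # Log4Shell
]

for f in sorted(findings, key=priority, reverse=True):
    print(f"{f.cve_id}: priority {priority(f):.2f}")
```

The point is not the exact numbers but the ordering: a highly exploitable critical CVE, or a critical CVE on an AI workload, should surface above the noise.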
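And to illustrate the second action, here is a minimal sketch that scans a Google Cloud project IAM policy export for broad basic-role grants, the kind of excessive permissions behind the Vertex AI Workbench finding. It assumes a policy JSON produced by `gcloud projects get-iam-policy <project> --format=json`; the set of roles treated as excessive is an assumption you should tune to your own policy.

```python
"""Spot overly broad project-level grants in a GCP IAM policy export.
Minimal sketch: assumes the standard bindings/role/members policy layout;
the roles treated as excessive are an illustrative assumption."""

import json
import sys

# Basic (primitive) roles grant sweeping access and are a common source of
# excessive permissions, especially when attached to service accounts.
EXCESSIVE_ROLES = {"roles/owner", "roles/editor"}


def excessive_bindings(policy: dict) -> list[tuple[str, str]]:
    """Return (member, role) pairs where a member holds an excessive role."""
    hits = []
    for binding in policy.get("bindings", []):
        role = binding.get("role", "")
        if role in EXCESSIVE_ROLES:
            for member in binding.get("members", []):
                hits.append((member, role))
    return hits


if __name__ == "__main__":
    # Usage: python excessive_bindings.py project-iam-policy.json
    with open(sys.argv[1]) as f:
        policy = json.load(f)
    for member, role in excessive_bindings(policy):
        print(f"Excessive grant: {member} holds {role}")
```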
Ensuring strong AI security in cloud environments requires identity-intelligent, AI-aware cloud-native application protection to manage these emerging risks efficiently and accurately.
Summary
Cloud-based AI has its security pitfalls, with hidden misconfigurations and sensitive data that make AI workloads vulnerable to misuse and exploitation. Applying the right security solutions and best practices early on will empower you to enable AI adoption and growth for your organization while minimizing its risk.
JENGA® is a registered trademark owned by Pokonobe Associates.
Learn more
- Download the Cloud AI Risk Report 2025
- View the webinar 2025 Cloud AI Risk Report: Helping You Build More Secure AI Models in the Cloud
- See what Tenable Cloud Security can do for you