Aggregator
G.O.S.S.I.P Reading Recommendation 2025-03-26: Can Email Be Used for Smuggling?
What Happened Before the Breach?
Concentric AI’s UBDA feature identifies unusual user activity
Concentric AI announced new, context-driven behavior analytics capabilities in its Semantic Intelligence data security governance platform, enabling organizations to identify abnormal activity at the user level. The company has also added new integrations with Google Cloud Storage, Azure Data Lake, and ServiceNow, enabling customers to leverage Concentric AI’s industry-leading data security for even more data sources. User Behavior Data Analytics (UBDA) helps customers proactively identify unusual user activity – such as risky sharing or excessive … More →
The post Concentric AI’s UBDA feature identifies unusual user activity appeared first on Help Net Security.
New Testing Framework Helps Evaluate Sandboxes
VMware vDefend: Accelerate Enterprise’s Zero Trust Private Cloud Journey with Micro-segmentation and NDR Innovations
New enhancements include: Micro-segmentation Assessment, Air-gapped NDR, and Scale-out Data Lake Platform (Security Services Platform 5.0) For decades, enterprises have relied on perimeter defenses to protect their private cloud assets from external threats. Yet, in this era of ransomware, protecting only the perimeter has proven to be insufficient. Traditionally, only a handful of “crown jewel” … Continued
The post VMware vDefend: Accelerate Enterprise’s Zero Trust Private Cloud Journey with Micro-segmentation and NDR Innovations appeared first on VMware Security Blog.
Cyber Apocalypse CTF 2025: Tales from Eldoria
Date: March 21, 2025, 1 p.m. — March 26, 2025, 12:59 UTC
Format: Jeopardy
On-line
Official URL: https://ctf.hackthebox.com/event/details/cyber-apocalypse-ctf-2025-tales-from-eldoria-2107
Rating weight: 24.00
Event organizers: Hack The Box
HICAThon 1.0
Date: March 25, 2025, 3 a.m. — March 26, 2025, 12:30 UTC
Format: Jeopardy
Location: Hybrid (Online & Offline at Symbiosis Skills & Professional University)
Official URL: https://hicathon01.xyz/
Rating weight: 0.00
Event organizers: HICA SSPU
Who's Afraid of AI Risk in Cloud Environments?
The Tenable Cloud AI Risk Report 2025 reveals that 70% of AI cloud workloads have at least one unremediated critical vulnerability — and that AI developer services are plagued by risky permissions defaults. Find out what to know as your organization ramps up its AI game.
These are exhilarating times, with AI bursting out all over. Developers' use of self-managed AI tools and cloud-provider AI services is on the rise as engineering teams rush to the AI front. This uptick, combined with the fact that AI models are data-hungry, requiring huge amounts of data to improve accuracy and performance, means more and more AI resources and data reside in cloud environments. The million-dollar question for cybersecurity teams is: what is AI growth doing to my cloud attack surface?
The Tenable Cloud AI Risk Report 2025 by Tenable Cloud Research revealed that AI tools and services are indeed introducing new risks. How can you prevent such risks?
Let’s look at some of the findings and related challenges, and at proactive AI risk reduction measures within easy reach.
Why we conducted this research
Using data collected over a two-year period, the Tenable Cloud Research team analyzed in-production workloads and assets across cloud and enterprise environments — including Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP). We sought to understand adoption levels of AI development tooling and frameworks, and AI services, and carry out a reality check on any emerging security risks. The aim? To help organizations be more aware of AI security pitfalls. In parallel, our research helps fuel Tenable’s constantly evolving cloud-native application protection platform (CNAPP) to best help our customers address these new risks.
Key concerns
Let’s explore two of the findings — one in self-managed AI tooling, the other in AI services.
- 70% of the cloud workloads with AI software installed contained at least one unremediated critical CVE. One of the CVEs observed was a critical curl vulnerability that remained unremediated more than a year after it was published. Any critical CVE makes a workload a primary target for bad actors; a CVE in an AI workload is even more cause for concern given the potential sensitivity of the data within and the impact should it be exploited.
- Like any cloud service, AI services contain risky defaults in cloud provider building blocks that users are often unaware of. We previously reported on the Jenga® concept — a pattern in which cloud providers build one service on top of the other, with risky defaults inherited from one layer to the next. So, too, in AI services. Specifically, 77% of organizations that had set up Vertex AI Workbench in Google Cloud had at least one notebook with the attached service account configured as the overly-privileged Compute Engine service account — creating serious permissions risk.
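As a concrete illustration of the Vertex AI Workbench finding, a team auditing its own environment could flag notebooks attached to the default Compute Engine service account, which follows a well-known naming pattern. The sketch below is a minimal, hypothetical example: the inventory structure (`name`, `serviceAccount` fields) is an assumption modeled on typical `gcloud`-exported JSON, not a documented schema.

```python
import re

# Default Compute Engine service accounts follow the pattern
# "<project-number>-compute@developer.gserviceaccount.com".
DEFAULT_COMPUTE_SA = re.compile(r"^\d+-compute@developer\.gserviceaccount\.com$")

def flag_default_sa_notebooks(notebooks):
    """Return names of notebook instances whose attached service account
    is the overly privileged Compute Engine default."""
    return [
        nb["name"]
        for nb in notebooks
        if DEFAULT_COMPUTE_SA.match(nb.get("serviceAccount", ""))
    ]

# Hypothetical inventory, e.g. parsed from a gcloud JSON export:
inventory = [
    {"name": "training-nb",
     "serviceAccount": "123456789012-compute@developer.gserviceaccount.com"},
    {"name": "eval-nb",
     "serviceAccount": "vertex-minimal@my-project.iam.gserviceaccount.com"},
]

print(flag_default_sa_notebooks(inventory))  # ['training-nb']
```

Any notebook flagged this way is a candidate for reattachment to a purpose-built, least-privilege service account.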
An unremediated critical CVE in any cloud workload is of course a security risk that should be addressed in accordance with the organization’s patch and risk management policy, with prioritization that accounts for impact and asset sensitivity. Such a high incidence of critical vulnerabilities in AI cloud workloads is an alarm bell. AI workloads often contain sensitive data; even training and testing data can include real information, such as personal information (PI), personally identifiable information (PII) or customer data, depending on the nature of the AI project. Exposed AI compute or training data, if exploited, can result in data poisoning, model manipulation and data leakage. Teams must overcome alert noise and risk-prioritization challenges to make mitigating critical CVEs, especially in AI workloads, a strategic mission.
Why risky access defaults in AI services are a concern and a challenge
Securing identities and entitlements is a challenge in any cloud environment. Overprivileged permissions are even riskier when embedded in AI service building blocks, as these often involve sensitive data. You must be able to see risk to fix it. Lack of visibility in cloud and multicloud environments, siloed tools that prevent seeing risks in context, and reliance on cloud provider security all make it difficult for organizations to spot and mitigate risky defaults and the other access risks that attackers look for.
Key actions for preventing such AI risks
The Artificial Intelligence Index Report 2024, published by Stanford University, noted that organizations’ top AI-related concerns include privacy, data security and reliability, yet most have so far mitigated only a small portion of these risks. Good security best practices can go a long way toward getting ahead of AI risk.
Here are three basic actions for reducing the cloud AI risks we discussed here:
- Prioritize the most impactful vulnerabilities for remediation. Part of the root cause behind slow-to-no CVE remediation is human nature. CVEs are a headache — noisy, persistent and some solutions overwhelm with notifications. Cloud security teams own the risk but rely on the cooperation of other teams to mitigate exposures. Understand which CVEs have the greatest potential impact so you can guide teams to tackle the high-risk vulnerabilities first. Advanced tools help by factoring in exploitation likelihood in risk scoring.
- Reduce excessive permissions to curb risky access. It is your shared responsibility to protect your organization from risky access — don’t assume the permissions settings in AI services are risk-free. Continuously monitor to identify and eliminate excessive permissions across identities, resources and data, including to cloud-based AI models/data stores, to prevent unauthorized or overprivileged access. Tightly manage cloud infrastructure entitlements by implementing least privilege and Just in Time access. Review risk in context to spot cloud misconfigurations, including toxic combinations involving identities.
- Classify as sensitive all AI components linked to high-business-impact assets (e.g., sensitive data, privileged identities). Include AI tools and data in security inventories and assess their risk regularly. Use data security posture management capabilities to granularly assign the appropriate sensitivity level.
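The first action above, prioritizing by potential impact rather than raw severity, can be sketched as a simple ranking function. This is an illustrative model only: the field names and weights are assumptions, standing in for the richer scoring (e.g. exploitation likelihood, asset sensitivity) that advanced tools compute.

```python
def priority_score(vuln):
    """Blend severity, exploitation likelihood, and asset sensitivity
    into one ranking score (weights are illustrative, not a standard)."""
    return vuln["cvss"] * vuln["exploit_probability"] * vuln["asset_sensitivity"]

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "exploit_probability": 0.02, "asset_sensitivity": 1.0},
    {"id": "CVE-B", "cvss": 7.5, "exploit_probability": 0.90, "asset_sensitivity": 2.0},  # AI workload holding training data
    {"id": "CVE-C", "cvss": 9.1, "exploit_probability": 0.40, "asset_sensitivity": 1.5},
]

for v in sorted(vulns, key=priority_score, reverse=True):
    print(v["id"], round(priority_score(v), 2))
```

Note how CVE-B, despite the lowest CVSS score, ranks first because it is both likely to be exploited and sits on a sensitive AI asset; that is the shift from severity-driven to impact-driven remediation.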
Ensuring strong AI security for cloud environments requires identity-intelligent, AI-aware cloud-native application protection to manage the emerging risks with efficiency and accuracy.
Summary
Cloud-based AI has its security pitfalls, with hidden misconfigurations and sensitive data that make AI workloads vulnerable to misuse and exploitation. Applying the right security solutions and best practices early on will empower you to enable AI adoption and growth for your organization while minimizing its risk.
JENGA® is a registered trademark owned by Pokonobe Associates.
Learn more
- Download the Cloud AI Risk Report 2025
- View the webinar 2025 Cloud AI Risk Report: Helping You Build More Secure AI Models in the Cloud
- See what Tenable Cloud Security can do for you
Blumira introduces Microsoft 365 threat response feature
Blumira launched a Microsoft 365 (M365) threat response feature to help organizations contain security threats faster by enabling direct user lockout and session revocation within M365, Azure and Entra environments. The new feature integrates seamlessly with M365 environments through Blumira’s existing integrations. Once connected, IT administrators can immediately disable access to compromised accounts directly within Blumira’s platform, streamlining response workflows and reducing the risk of additional malicious activity. “Security teams often face critical delays … More →
The post Blumira introduces Microsoft 365 threat response feature appeared first on Help Net Security.
CrushFTP Warns of HTTP(S) Port Vulnerability Enabling Unauthorized Access
Both CrushFTP, a popular file transfer technology, and Next.js, a widely used React framework for building web applications, have come under scrutiny due to significant vulnerabilities. Rapid7 has highlighted these issues, emphasizing their potential impact on data security and unauthorized access. Overview of Vulnerabilities Next.js Vulnerability (CVE-2025-29927): This critical vulnerability involves improper authorization in middleware, […]
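Public reporting on the Next.js flaw (CVE-2025-29927) indicates the authorization bypass abuses the internal `x-middleware-subrequest` request header, which Next.js uses to mark its own subrequests and skip middleware. Assuming that detail, one stopgap while upgrading is to strip the header at a reverse proxy before requests reach the application. The sketch below shows the idea with a plain header-dictionary filter; the real fix is updating to a patched Next.js release.

```python
def sanitize_headers(headers):
    """Drop the internal x-middleware-subrequest header from external
    requests so it cannot be abused to skip Next.js middleware checks.
    (Stopgap sketch based on public reports of CVE-2025-29927; upgrading
    Next.js is the actual remediation.)"""
    blocked = {"x-middleware-subrequest"}
    return {k: v for k, v in headers.items() if k.lower() not in blocked}

# A request carrying the reportedly abused header:
incoming = {
    "Host": "app.example.com",
    "X-Middleware-Subrequest": "middleware:middleware:middleware",
}
print(sanitize_headers(incoming))  # {'Host': 'app.example.com'}
```

In production this filtering would live in the edge proxy configuration (nginx, a CDN rule, etc.) rather than application code; the Python version just makes the logic explicit.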
The post CrushFTP Warns of HTTP(S) Port Vulnerability Enabling Unauthorized Access appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.
Password Lawlessness: Why We Forget Passwords and How to Avoid It
SRC in Practice: Vertical Privilege Escalation Allowing Arbitrary User Creation
Windows 11 24H2 Update Disrupts Connection to Veeam Backup Server
Users of the Veeam Backup Server have encountered a significant issue following the Windows 11 24H2 update. Specifically, the update has disrupted the connection between Veeam Recovery Media and the Veeam Backup Server. This problem affects users who have created recovery media from Windows 11 version 24H2 (build 26100.3194) or higher. When attempting to restore […]
The post Windows 11 24H2 Update Disrupts Connection to Veeam Backup Server appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.