Aggregator
Threat Actors Use GenAI to Launch Phishing Attacks Mimicking Government Websites
Threat actors are increasingly leveraging generative AI (GenAI) tools to craft highly convincing phishing websites that impersonate legitimate government portals. As highlighted by Zscaler ThreatLabz in recent reports and blog posts, the dual nature of GenAI, which boosts productivity for legitimate users while also enabling cybercriminals, has become a critical issue. These tools, such as DeepSite AI […]
The post Threat Actors Use GenAI to Launch Phishing Attacks Mimicking Government Websites appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.
Mustang Panda Attacks Windows Users With ToneShell Malware Mimicking Google Chrome
A sophisticated new cyber campaign has emerged targeting Windows users through a deceptive malware variant known as ToneShell, which masquerades as the legitimate Google Chrome browser. The advanced persistent threat (APT) group Mustang Panda, known for its strategic targeting of government and technology sectors, has deployed this latest tool as part of an ongoing espionage […]
The post Mustang Panda Attacks Windows Users With ToneShell Malware Mimicking Google Chrome appeared first on Cyber Security News.
CVE-2013-10064 | ActFax Server 5.01 RAW Protocol Interface stack-based overflow (EUVD-2013-7284 / EDB-24467)
CVE-2025-54873 | risc0 RISC Zero up to 2.1.x divide by zero (GHSA-f6rc-24x4-ppxp / EUVD-2025-23666)
CVE-2025-54653 | Huawei HarmonyOS 5.0.1/5.0.2 Virtualization File Module path traversal (EUVD-2025-23713)
CVE-2025-22469 | SATO CL4-6NX Plus/CL4-6NX-J Plus prior 1.15.5-r1 os command injection (EUVD-2025-23819)
CVE-2025-54652 | Huawei HarmonyOS 5.0.1/5.0.2 Virtualization Base Module path traversal (EUVD-2025-23714)
Grok Generated Nude Images of Swift Without Users Requesting Them
Google and Cisco Report CRM Software Breaches Via Vishing
Technology giants Google and Cisco separately disclosed recent data breaches in which attackers socially engineered their employees through voice phishing (vishing) attacks, gaining access to customer relationship management software and exposing customer data.
CVE-2021-47570 | Linux Kernel up to 5.15.5 r8188eu rtw_wx_read32 memory leak (c8d3775745ad/be4ea8f38355 / Nessus ID 243905)
CVE-2022-50036 | Linux Kernel up to 5.10.137/5.15.62/5.19.3 Negative Number memory corruption (Nessus ID 243904 / WID-SEC-2025-1350)
CVE-2022-49929 | Linux Kernel up to 6.0.7 rxe_recheck_mr information disclosure (Nessus ID 243903 / WID-SEC-2025-0922)
CVE-2017-0510 | Google Android Kernel FIQ Debugger access control (Nessus ID 243910 / BID-96800)
CVE-2024-27392 | Linux Kernel up to 6.8.1 nvme ns_update_nuse double free (534f9dc7fe49/8d0d2447394b / Nessus ID 243909)
The AI Security Dilemma: Navigating the High-Stakes World of Cloud AI
AI presents an incredible opportunity for organizations even as it expands the attack surface in new and complex ways. For security leaders, the goal isn't to stop AI adoption but to enable it securely.
Artificial Intelligence is no longer on the horizon; it's here, and it's being built and deployed in the cloud at a staggering pace. From leveraging managed services like Microsoft Azure Cognitive Services and Amazon SageMaker to building custom models on cloud infrastructure, organizations are racing to unlock the competitive advantages of AI.
But this rush to adoption brings a new, high-stakes set of security challenges. The Tenable Cloud AI Risk Report 2025 reveals that the very platforms enabling this revolution are also introducing complex and often overlooked risks.
Our analysis uncovered a stark reality: AI workloads are significantly more vulnerable than their non-AI counterparts. A staggering 70% of cloud workloads with AI software installed have at least one critical, unpatched vulnerability, compared with 50% for workloads without AI software. This makes your most innovative projects your most insecure.
Jenga®-style risks in managed AI services
One of the most significant challenges stems from the way managed AI services are built. Cloud providers often layer new AI services on top of existing infrastructure components, a concept we call "Jenga-style" architecture. For example, a managed notebook service might be built on a container service, which in turn runs on a virtual machine.
The problem? Risky defaults and misconfigurations can be inherited from these underlying layers, often without the user's knowledge. This creates a complex and opaque stack of permissions and settings that is incredibly difficult to secure. A default setting that allows root access on an underlying compute instance, for example, could be inherited by the AI service, creating a critical security flaw that isn't visible in the AI service's top-level configuration.
Our research found specific, risky defaults in popular services:
- Amazon SageMaker: Instances were found with root access enabled, giving a potential attacker complete control.
- Amazon Bedrock: Training data buckets were configured without the "block public access" setting enabled, and often had overly permissive access policies.
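Risky defaults like these can be caught programmatically. The following is a minimal illustrative sketch in Python: the dictionary shapes are simplified stand-ins for the real AWS API responses (the field names `RootAccess` and `PublicAccessBlock` echo AWS concepts, but this is not an exact boto3 schema, and the helpers are hypothetical):

```python
# Hedged sketch: flag the two risky managed-AI defaults described above.
# Config dicts are simplified, hypothetical stand-ins for real AWS API responses.

def audit_notebook(instance: dict) -> list[str]:
    """Flag a SageMaker-style notebook instance with root access enabled."""
    findings = []
    if instance.get("RootAccess") == "Enabled":
        findings.append(f"{instance['Name']}: root access is enabled")
    return findings

def audit_training_bucket(bucket: dict) -> list[str]:
    """Flag a training-data bucket whose 'block public access' flags are not all on."""
    findings = []
    pab = bucket.get("PublicAccessBlock", {})
    required = ("BlockPublicAcls", "IgnorePublicAcls",
                "BlockPublicPolicy", "RestrictPublicBuckets")
    if not all(pab.get(flag) for flag in required):
        findings.append(f"{bucket['Name']}: public access is not fully blocked")
    return findings

if __name__ == "__main__":
    notebook = {"Name": "ml-notebook", "RootAccess": "Enabled"}
    bucket = {"Name": "training-data",
              "PublicAccessBlock": {"BlockPublicAcls": True}}
    for finding in audit_notebook(notebook) + audit_training_bucket(bucket):
        print(finding)
```

In practice such checks would run against live API responses on a schedule, so a newly created instance inheriting a risky default is flagged before it ships.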
For security leaders, the goal isn't to stop AI adoption but to enable it securely. This requires a proactive and AI-aware security strategy. Here are four recommendations:
- Extend vulnerability management to AI tools: Your security program must account for the unique software stack of AI development. This includes popular libraries like TensorFlow and PyTorch, as well as the underlying infrastructure. The high rate of critical CVEs in AI workloads shows that basic vulnerability hygiene is more critical than ever.
- Scrutinize managed service configurations: Do not trust the defaults. When deploying managed AI services like Amazon SageMaker, Google Cloud Vertex AI or Azure Cognitive Services, conduct a thorough review of the underlying permissions and configurations. Understand the "Jenga stack" you're building on and harden every layer. Ensure that data storage for training models is properly secured and not publicly accessible.
- Implement strong identity and access controls: AI models and, often, the data they are trained on are incredibly sensitive assets. Apply the principle of least privilege rigorously. Who and what can access your training data? What permissions does the AI model have at runtime? An attacker who compromises a model could potentially poison it or, worse, use its credentials to move laterally across your environment.
- Adopt a unified security platform: The interconnected nature of AI risks, from an underlying vulnerability to an exposed data bucket to an overly permissive role, demands a unified view. A cloud-native application protection platform (CNAPP) that incorporates data security posture management (DSPM) alongside cloud security posture management (CSPM) and AI security posture management (AISPM) can identify the sensitive data in your cloud environment and correlate the different types of risk. Those correlated insights are essential to understanding your true exposure and identifying the most critical attack paths.
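To make the least-privilege recommendation concrete, here is a minimal, hypothetical sketch that scans an IAM-style policy document for wildcard grants. The policy structure follows the general AWS IAM JSON layout, but the helper and its rules are illustrative assumptions, not a complete policy analyzer:

```python
# Illustrative least-privilege linter for IAM-style policy documents.
# Flags wildcard actions and resources in Allow statements; a real analyzer
# would also consider conditions, NotAction, principals, and resource scoping.

def find_wildcard_grants(policy: dict) -> list[str]:
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # IAM allows a single string where a list is expected; normalize.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        for action in actions:
            if action == "*" or action.endswith(":*"):
                findings.append(f"overly broad action: {action}")
        for resource in resources:
            if resource == "*":
                findings.append(f"overly broad resource: {resource}")
    return findings

if __name__ == "__main__":
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
            {"Effect": "Allow", "Action": "s3:GetObject",
             "Resource": "arn:aws:s3:::training-data/*"},
        ],
    }
    for finding in find_wildcard_grants(policy):
        print(finding)
```

Running a check like this against the roles attached to training jobs and model endpoints is one simple way to answer "what can this model's credentials actually do at runtime?"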
AI presents an incredible opportunity, but it also expands the attack surface in new and complex ways. By understanding these unique risks and applying foundational cloud security principles, you can ensure your organization's journey into AI is both innovative and secure.
Discover the full scope of AI and cloud risks in our latest reports.
➡️ Download the Tenable Cloud AI Risk Report 2025 to learn more.
➡️ Download the Tenable Cloud Security Risk Report 2025
➡️ View our on-demand research webinar
JENGA® IS A REGISTERED TRADEMARK OWNED BY POKONOBE ASSOCIATES.
https://www.youtube-nocookie.com/embed/IPusFv_iEI8?si=Kr-IckosVNP0Azou
Creators/Authors/Presenters: Ashish Rajan, Jackie Bow, Kane Narraway
Our deep appreciation to Security BSides San Francisco and the creators, authors, and presenters for publishing their BSidesSF 2025 video content on YouTube. The conference's events were held at the lauded CityView / AMC Metreon, certainly a venue like no other, and the recordings are available via the organization's YouTube channel.
Additionally, the organization welcomes volunteers for the BSidesSF Volunteer Force, as well as for its Program Team and Operations roles. See the succinct BSidesSF 'Work With Us' page for details.
The post https://www.youtube-nocookie.com/embed/IPusFv_iEI8?si=Kr-IckosVNP0Azou appeared first on Security Boulevard.
Check Your Consents: Soon You Will See Who Is Using Your Data
Sysdig Previews Set of AI Agents for Cloud Security Platform
Sysdig, this week at the Black Hat USA 2025 conference, revealed it is providing early access to artificial intelligence (AI) agents that have been added to its cloud native application protection platform (CNAPP).
The post Sysdig Previews Set of AI Agents for Cloud Security Platform appeared first on Security Boulevard.