Aggregator
Sprawling CrowdStrike Incident Mitigation Showcases Resilience Gaps
Deep Sea Phishing Pt. 1
Adventures in Shellcode Obfuscation! Part 6: Two Array Method
Attackers Exploit 'EvilVideo' Telegram Zero-Day to Hide Malware
Covert Data Exfiltration via JSON in an API
Learn how to conduct covert data exfiltration within JSON payloads of an API response.
The post Covert Data Exfiltration via JSON in an API appeared first on Dana Epp's Blog.
SecWiki News 2024-07-23 Review
For more of the latest articles, visit SecWiki
Meta Llama 3.1 now available on Workers AI
[AI Quick Read] Open-Source Intelligence Summer Homework
威努特 supports security assurance work for the Summer Davos Forum!
Whose Voice Is It Anyway? AI-Powered Voice Spoofing for Next-Gen Vishing Attacks
Written by: Emily Astranova, Pascal Issa
Executive Summary
- AI-powered voice cloning can now mimic human speech with uncanny precision, creating more realistic phishing schemes.
- According to news reports, scammers have leveraged voice cloning and deepfakes to steal over HK$200 million from an organization.
- Attackers can use AI-powered voice cloning in various phases of the attack lifecycle, including initial access, lateral movement, and privilege escalation.
- Mandiant's Red Team uses AI-powered voice spoofing to test defenses, demonstrating the effectiveness of this increasingly sophisticated attack technique.
- Organizations can take steps to defend against this threat by educating employees and using source verification measures such as code words.
Last year, Mandiant published a blog post on threat actor use of generative AI, exploring how attackers were using generative AI (gen AI) in phishing campaigns and information operations (IO), notably to craft more convincing content such as images and videos. We also shared insights into attackers' use of large language models (LLMs) to develop malware. In the post, we emphasized that while attackers are interested in gen AI, use has remained relatively limited.
This post continues on that initial research, diving into some new AI tactics, techniques, and procedures (TTPs) and trends. We take a look at AI-powered voice spoofing, demonstrate how Mandiant red teams use it to test defenses, and provide security considerations to help stay ahead of the threat.
Growing AI-Powered Voice Spoofing Threat
Gone are the days of robotic scammers with barely decipherable scripts. AI-powered voice cloning can now mimic human speech with uncanny precision, injecting a potent dose of realism into phishing schemes. We are seeing more stories on this threat in the news, such as the scammers who reportedly stole over HK$200 million from a company using voice cloning and deepfakes, and the Mandiant Red Team has now incorporated these TTPs when testing defenses.
Brief Overview of Vishing
Unlike its traditional email-based counterpart, vishing (voice phishing) uses a voice-based approach. Rather than sending out an email with the hope of garnering clicks, threat actors place phone calls directly to individuals in order to earn trust and manipulate emotions, often by creating a sense of urgency.
Like traditional phishing, a threat actor's goal is to deceive individuals into divulging sensitive information, initiating malicious actions, or transferring funds using social engineering tactics. These deceptive calls often impersonate trustworthy entities such as banks, government agencies, or tech support, adding an extra layer of authenticity to the scam.
The rise of powerful AI tools such as text generators, image creators, and voice synthesizers has sparked a wave of open-source projects, making these technologies more accessible than ever before. This rapid development is putting the power of AI into the hands of a wider audience, fueling the potential for more convincing vishing attacks.
AI-Powered Voice Spoofing in the Attack Lifecycle
Modern voice cloning involves recording and processing audio and training a model. Training the model relies on a powerful combination of open-source libraries and algorithms, of which there are many popular choices today. Once these initial steps are complete, attackers may take additional time to study the speech patterns of the individual being impersonated, and even write a script before conducting operations. This extra layer of authenticity makes the attack more likely to succeed.
Next, attackers may use AI-powered voice spoofing in different stages of the attack lifecycle.
Initial Access
There are various ways a threat actor can gain initial access using a spoofed voice. Threat actors can impersonate executives, colleagues, or even IT support personnel to trick victims into revealing confidential information, granting remote access to systems, or transferring funds. The inherent trust associated with a familiar voice can be exploited to manipulate victims into taking actions they would not normally take, such as clicking on malicious links, downloading malware, or divulging sensitive data. Although voice-based trust systems are seldom used, AI-spoofed voices can also potentially bypass voice-based authentication systems used for multi-factor authentication or password resets, granting unauthorized access to critical accounts.
Lateral Movement and Privilege Escalation
Threat actors can leverage AI voice spoofing to hop from system to system, impersonating trusted individuals to manipulate their way to higher access levels. There are a few ways this may unfold.
One method of lateral movement is chaining impersonations. Imagine an attacker initially gaining access by impersonating a helpdesk employee. After establishing communication with a network administrator, the attacker could subtly record the administrator's voice during the interaction. This captured audio can then be used to train a new AI voice spoofing model, allowing the attacker to seamlessly impersonate the administrator and initiate communication with other unsuspecting targets within the network. This chaining of impersonations enables the attacker to move laterally, potentially gaining access to more sensitive systems and data.
Another method arises during the initial access phase: threat actors might discover readily available voice recordings on a compromised host, such as voicemails, meeting recordings, or even training materials. These recordings can be leveraged to train AI voice-spoofing models, allowing the attacker to impersonate specific individuals within the organization without needing to interact with them directly. This can be particularly effective for targeting high-value individuals or bypassing systems that rely on voice biometrics for access control.
Mandiant Red Team Proactive Case Study
In late 2023, Mandiant conducted a controlled red team exercise with a client, using AI voice spoofing to gain initial access to their internal network. This case study highlights the effectiveness of this increasingly sophisticated attack technique.
The exercise began with obtaining client consent and crafting a custom realistic social engineering pretext. The Red Team opted to impersonate a member of the client's security team, requiring a natural voice sample. After reviewing the pretext with the client, the client provided explicit permission to use their voice for this exercise.
Next, we obtained the necessary audio data to train a model, and achieved a passable level of realism. Open-source intelligence (OSINT) played a crucial role in the next phase. By gathering employee data (job titles, locations, phone numbers), the Red Team identified potential targets most likely to recognize the impersonated voice and possess the necessary permissions for our objectives. With a curated target list, the team initiated spoofed calls via VoIP services and number spoofing.
After facing voicemail greetings and other initial hurdles, the first unsuspecting victim answered with a trusting "Hey boss, what's up?". The Red Team had reached a security administrator who reported to the person whose voice was spoofed. Leveraging the pretext of a "VPN client misconfiguration," the Red Team exploited the opportune timing of a recent global outage impacting the client's VPN provider. This carefully chosen scenario instilled a sense of urgency and increased the victim's susceptibility to our instructions.
Due to the trust in the voice on the phone, the victim bypassed security prompts from both Microsoft Edge and Windows Defender SmartScreen, unknowingly downloading and executing a pre-prepared malicious payload onto their workstation. The successful detonation of the payload marked the completion of the exercise, showcasing the alarming ease with which AI voice spoofing can facilitate the breach of an organization.
Security Considerations
This type of exploitation is social in nature, and technical detection controls are currently limited. Available mitigations center around three major principles: awareness, source verification, and future technical considerations.
Awareness
Educate employees, particularly those who control money and access, on the existence and methodologies of AI vishing attacks. Consider adding AI-enhanced threats to security awareness training. With such effective and accessible mimicry available to threat actors, everyone should now adopt a healthy dose of skepticism when dealing with phone calls, especially if they fall under one or more of the following cases:
- The caller is saying things that sound too good to be true.
- The call is from an untrusted number/entity.
- The caller tries to enforce questionable authority.
- The caller is out of character for the source.
Employees in trusted positions should be extremely wary of high-urgency calls that demand immediate action, especially when the caller asks for or offers financial or access-oriented information, such as requesting a one-time password (OTP). Employees should be empowered to hang up and report suspicious calls, especially if they believe AI vishing is involved. It is likely another employee is about to receive the same attack.
Source Verification
When possible, cross-reference the information with trusted sources. This includes hanging up and calling back at a number previously validated for the source. The caller can also be asked to send a text message from a previously validated number, or to send an email or an enterprise chat message.
Train employees to spot audio inconsistencies, such as sudden variation of background noise, which could be a symptom of the threat actor not spending enough time cleaning the audio. Look for unusual speech patterns, like a completely different vernacular than what the source typically uses. Watch for unnatural inflections, fillers not commonly used by the source, strange clicks, pauses or abnormal repetition. Pay attention to voice timbre (tone) and cadence as well.
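One of the cues above, a sudden variation in background noise, can be approximated in software with a simple loudness heuristic. The sketch below is illustrative only; the frame size, threshold ratio, and function names are our own assumptions, not part of any shipping deepfake detector:

```python
import math

def rms_per_frame(samples, frame_size=1024):
    """Compute the RMS loudness of each fixed-size frame of audio samples."""
    frames = [samples[i:i + frame_size] for i in range(0, len(samples), frame_size)]
    return [math.sqrt(sum(s * s for s in f) / len(f)) for f in frames if f]

def flag_noise_jumps(samples, ratio=3.0):
    """Flag frame boundaries where loudness jumps sharply in either direction --
    one possible symptom of spliced or poorly cleaned synthetic audio."""
    levels = rms_per_frame(samples)
    flags = []
    for i in range(1, len(levels)):
        prev, cur = levels[i - 1], levels[i]
        if prev > 0 and cur > 0 and (cur / prev > ratio or prev / cur > ratio):
            flags.append(i)
    return flags
```

In practice, production detectors analyze far richer spectral features than raw loudness, but the same principle applies: abrupt statistical discontinuities in the signal warrant suspicion.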
Establish code words for executives and critical staff who deal with sensitive and/or financial information. Do this out of band, leaving no trace within the enterprise, to limit exposure in case of a breach. The code words can then be used to validate individuals in case of doubt.
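As a concrete illustration of code-word validation, here is a minimal sketch. The role names and code words are made up for the example, and the lookup table exists only so the code runs; per the guidance above, real code words should be exchanged out of band and never stored in enterprise systems:

```python
import hmac

# Illustrative only: in practice, code words live out of band, not in code.
CODE_WORDS = {"cfo": "tangerine-skylark"}

def verify_code_word(role, spoken_word):
    """Compare a spoken code word against the pre-shared one using a
    constant-time comparison, so the check itself leaks nothing via timing."""
    expected = CODE_WORDS.get(role)
    if expected is None:
        return False
    return hmac.compare_digest(spoken_word.encode(), expected.encode())
```

The constant-time comparison (`hmac.compare_digest`) is a small but standard precaution whenever secrets are checked programmatically.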
If possible, let unknown numbers go to voicemail. Apply the same vigilance to voice calls that you would otherwise apply to emails. Report any suspicious calls for wider awareness.
Future Technical Considerations
Today, organizations can, at best, implement traditional security measures to protect audio conversations within the organization, such as using separate networks for VoIP channels and enforcing authentication and transmission encryption on those channels. However, this does not address attacks made against employees' personal phones.
Going forward, organizations should consider protecting all audio assets, implementing technologies such as digital watermarking that are subtle enough to be imperceptible to the human ear, but easily detected by AI technologies.
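As a toy illustration of the watermarking idea, the sketch below embeds a bit string in the least-significant bits of 16-bit PCM samples, a classic LSB scheme that is inaudible to humans but trivially machine-readable. Real audio watermarking of the kind envisioned above must also survive compression and re-recording, which this deliberately simple example does not:

```python
def embed_watermark(samples, bits):
    """Embed watermark bits into the least-significant bit of successive
    16-bit PCM samples; a one-LSB change is inaudible."""
    marked = list(samples)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit
    return marked

def extract_watermark(samples, n_bits):
    """Read the watermark back out of the low bits of the first n_bits samples."""
    return [s & 1 for s in samples[:n_bits]]
```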
Eventually, mobile device management tools will offer technologies to help verify callers. In the meantime, organizations should consider requiring all sensitive conversations to occur over enterprise chat channels, where strong authentication is required, and identities are not easily spoofed.
Research and tools are actively being developed to help detect deepfakes. While their accuracy is inconsistent today, they can still provide value in identifying deepfakes in voicemail or offline voice notes. Detection capabilities will improve over time and eventually be adopted into supportable enterprise tooling. For additional reading, consider the active research into real-time detection, such as DF-Captcha, which proposes a simple challenge-response application that prompts the party on the other end of the line to validate their identity.
Conclusion
In this blog post, we explored how modern AI tools can help create more convincing vishing attacks. The alarming success of Mandiant's vishing exercise underscores the urgent need for heightened security measures against AI voice-spoofing attacks. While technology offers powerful tools for both attackers and defenders, the human element remains the critical vulnerability. The case study we shared should serve as a wake-up call, urging organizations and individuals alike to take proactive steps.
Mandiant started leveraging AI voice-spoofing attacks in its more complex Red Team Assessments and Social Engineering Assessments to demonstrate the impact such an attack could have on an organization. As threat actors' use of this technique increases in frequency, it is imperative that defenders plan and take precautions.
Wanted: An SBOM Standard to Rule Them All
Shocked, Devastated, Stuck: Cybersecurity Pros Open Up About Their Layoffs
Find Threats Exploiting CrowdStrike Outage with TI Lookup
A recent update by CrowdStrike on July 18, 2024, resulted in a worldwide outage, causing significant disruption for users who were left with blue screens of death (BSODs) on their devices. Cybercriminals seized the opportunity to target affected users with phishing scams and malware. The ANY.RUN team has been closely monitoring the situation after the […]
The post Find Threats Exploiting CrowdStrike Outage with TI Lookup appeared first on ANY.RUN's Cybersecurity Blog.
VMware ESXi Servers Targeted By New Linux Ransomware Variant
Telegram Zero-Day Enabled Malware Delivery
Ransomware Attack Shuts Down Three Dozen Los Angeles Courts
If You Only Have Five Minutes, Here’s CNAPP in a Snap (But We Have an eBook, Too)
If you’re a bit puzzled by all the talk about cloud native application protection platforms (CNAPPs), worry not. Our new eBook “Empower Your Cloud: Mastering CNAPP Security” explains in plain English what CNAPP is, how it works and why it’ll help you secure your cloud environment confidently. Read on to check out the eBook’s main highlights.
As organizations move operations to the cloud, the need for robust security measures has never been more critical. Cloud native application protection platforms (CNAPPs) offer comprehensive solutions designed to tackle the myriad threats and vulnerabilities that accompany cloud infrastructures.
Tenable recently published an eBook — “Empower Your Cloud: Mastering CNAPP Security.” You can download it here. This post summarizes the key points. But read the book to get the full story.
The misunderstood shared responsibility model
Cloud security was built on a misleading concept: the shared responsibility model. On its surface, the shared responsibility model merely outlines the individual security responsibilities of cloud providers and customers. It seems straightforward, and the model does provide a framework for understanding security responsibilities. But customer beware: it usually places the onus on you to implement and maintain robust security measures.
A cottage industry rises to add to the confusion
In response to the rush to the cloud, a cottage industry of solutions has popped up, each handling a tiny sliver of the cloud security challenge. The end result is product fatigue. In an age of limited resources, when your day job has you focused on threats like ransomware and trying to ensure compliance, there are only so many alerts and systems you can pay attention to. After a while it's all just noise.
Meanwhile, cloud breaches are almost universal
Amid that noise, the prevalence of cloud breaches underscores the urgency for more complete security measures.
In Tenable's recent "Cloud Security Outlook" report, a staggering 95% of security professionals surveyed reported experiencing a cloud breach within an 18-month period, with an average of 3.6 breaches per respondent. These statistics highlight the pervasive nature of cloud security risks and the need for proactive measures to mitigate them.
(Source: Tenable's "Cloud Security Outlook" report, May 2024)
CNAPPs to the rescue
CNAPP solutions replace that patchwork of siloed products that often cause more problems than they solve. Those products usually provide only partial coverage and often create overhead and friction with the products they're supposed to work with. And, in today's multi-cloud landscape, organizations often work with multiple cloud providers to optimize services and avoid vendor lock-in.
The complexity of securing the cloud is multiplied when it becomes plural: Clouds. Managing security across heterogeneous cloud infrastructures is a big challenge. Plus, traditional security tools provided by individual cloud vendors are often limited to their respective platforms, leading to fragmented security postures.
A comprehensive CNAPP solution includes a wide variety of essential capabilities, including:
- Cloud security posture management (CSPM)
- Cloud infrastructure entitlement management (CIEM)
- Cloud workload protection (CWP)
- Kubernetes security posture management (KSPM)
- Infrastructure as code (IaC) scanning
- Cloud detection and response (CDR)
- Data security posture management (DSPM)
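To make one of these capabilities concrete, here is a minimal, hypothetical IaC-scanning check. The field names (`acl`, `policy`, `statements`) are invented for the example and do not correspond to any particular provider's schema; real IaC scanners parse actual Terraform or CloudFormation templates and apply hundreds of rules:

```python
def scan_iac(resource):
    """Flag two common cloud misconfigurations in a parsed IaC resource:
    publicly readable storage and wildcard IAM permissions."""
    findings = []
    if resource.get("acl") == "public-read":
        findings.append("storage bucket is publicly readable")
    for stmt in resource.get("policy", {}).get("statements", []):
        if stmt.get("action") == "*" and stmt.get("effect") == "Allow":
            findings.append("IAM statement allows all actions ('*')")
    return findings
```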
As you evaluate CNAPPs, make sure you cover your bases on three key components: Identity and access management to ensure “least privilege” access; vulnerability management to prioritize and remediate vulnerabilities based on their potential impact; and exposure management to gain visibility across cloud environments and mitigate risks that stem from the toxic combinations of vulnerabilities, misconfigurations, and excessive permissions.
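The "toxic combination" idea above can be sketched as a simple filter over asset metadata: an asset becomes high risk only when a vulnerability, a misconfiguration, and excessive permissions coincide. The attribute names below are our own illustration, not a real CNAPP data model:

```python
def toxic_combinations(assets):
    """Return the names of assets where a critical vulnerability, a
    misconfiguration, and excessive permissions all coincide -- the
    'toxic combination' that exposure management tries to surface."""
    return [a["name"] for a in assets
            if a.get("critical_vuln")
            and a.get("misconfigured")
            and a.get("excessive_permissions")]
```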
How CNAPPs can help you
CNAPPs benefit a wide range of stakeholders involved in cloud security, including security, DevOps, DevSecOps, IAM, and IT teams. A CNAPP also helps those disparate groups collaborate to reduce cloud environment risks.
Enterprises stand to gain a number of advantages, including enhanced visibility, consistent security posture, streamlined infrastructure health, minimal overhead, seamless integration, shift-left security and holistic security coverage.
Check out this short video, in which a Tenable security engineer explains some CNAPP benefits experienced by Tenable customers.
Learn more
To learn more about CNAPPs, read our new eBook “Empower Your Cloud: Mastering CNAPP Security.”