Aggregator
Russia Adjusts Cyber Strategy for the Long Haul in War With Ukraine
Tenable’s Software Update Process Protects Customers’ Business Continuity with a Safe, Do-No-Harm Design
With the unprecedented tech outages experienced by so many of our customers over the last week, we recognize the need for deeper understanding of our software development processes and how they support global business continuity. In this blog post, we’ll outline how Tenable’s comprehensive approach to the software development lifecycle (SDLC) allows us to produce extremely high-quality software and protect our customers’ business operations with a secure, do-no-harm approach.
Tenable rigorously manages every step of the SDLC – research, design, development, testing and release – which results in software that's stable, tested, accurate and timely.
Specifically, Tenable makes software-design choices that prioritize flexibility and give customers control over the deployment of our software releases and updates.
For example, customers can control when or if the Nessus Agent and its plugins are updated within their environment. Additionally, the Nessus Agent operates entirely in the operating system's user space, reducing the risk of operating system faults.
Features such as these put the ultimate power in the hands of customer change-control programs and lower the risk of incidents, such as the one that caused the global IT outage last week.
Below we provide more details.
- Declarative plugin version control feature
Supporting our customers' change-control management processes, Tenable provides the flexibility to choose from multiple options for how the plugin content version is applied across agent deployments. This offers customers the control to validate and test Tenable plugins before performing an enterprise deployment.
- Do-no-harm Nessus Agent design
The Tenable Nessus Agent is designed so that it executes solely in the user space and limits its interaction with the endpoint's kernel to standard system calls as provided by the operating system, such as event notification callbacks.
As such, the Tenable Nessus Agent does not require any Tenable-developed components to reside inside the operating system kernel. This design is intentional in order to reduce catastrophic impacts to the endpoint's operating system. It also prevents the Tenable Agent from impacting an endpoint's ability to boot properly.
User-space applications do not have direct access to the kernel or hardware. Therefore, they cannot directly cause the types of failures that lead to a “blue screen of death” in a Windows system.
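As an illustrative aside (not Tenable code), this containment can be demonstrated by deliberately crashing a child process from user space on a POSIX system: the kernel terminates only the faulting process, and the parent carries on unaffected.

```python
# Illustrative sketch (not Tenable code): a user-space crash is contained
# by the operating system. The faulting process is killed, but the kernel
# and every other process keep running. Assumes a POSIX system.
import subprocess
import sys

# The child dereferences address 0 via ctypes, triggering a segmentation fault.
child_code = "import ctypes; ctypes.string_at(0)"
result = subprocess.run([sys.executable, "-c", child_code])

# On POSIX, a signal-killed child reports a nonzero (negative) return code,
# yet this parent process -- and the OS -- are unaffected.
print("child exit status:", result.returncode)
print("parent still running")
```

A kernel-mode fault, by contrast, has no such supervisor to fall back on, which is why it can crash the whole machine.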
- Nessus Agent software version control features
Enabling our customers' enterprise change-control procedures is a top priority for Tenable. With Tenable Vulnerability Management and with Nessus Manager for Security Center integrations, we provide multiple options for customers to apply software version control for their Nessus agents. These options allow customers to test and validate the Nessus Agent before performing an enterprise deployment, and customers may adopt them as their business needs dictate.
We hope this blog post has provided you with a clear idea of how Tenable strives to design and deliver software with the highest degree of security and quality, guided by our top priority – to keep our customers safe and protect their businesses.
Please contact us if you wish to get more information about our software development processes.
Phish-Friendly Domain Registry “.top” Put on Notice
China's 'Evasive Panda' APT Spies on Taiwan Targets Across Platforms
Goodbye? Attackers Can Bypass 'Windows Hello' Strong Authentication
Sprawling CrowdStrike Incident Mitigation Showcases Resilience Gaps
Deep Sea Phishing Pt. 1
Adventures in Shellcode Obfuscation! Part 6: Two Array Method
Attackers Exploit 'EvilVideo' Telegram Zero-Day to Hide Malware
Covert Data Exfiltration via JSON in an API
Learn how to conduct covert data exfiltration within JSON payloads of an API response.
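As a minimal, hypothetical sketch of the general idea (not taken from the post itself), sensitive data can be smuggled out inside an innocuous-looking field of a JSON API response; the field name `cache_token` below is invented for illustration.

```python
# Hypothetical sketch: hiding exfiltrated data in a benign-looking JSON
# field. The "cache_token" field name is invented for illustration.
import base64
import json

secret = b"internal-api-key-12345"

# Encode the secret so it resembles an opaque cache token.
response = {
    "status": "ok",
    "items": [],
    "cache_token": base64.b64encode(secret).decode(),
}
payload = json.dumps(response)

# The receiver decodes the "token" back into the exfiltrated data.
recovered = base64.b64decode(json.loads(payload)["cache_token"])
print(recovered)
```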
The post Covert Data Exfiltration via JSON in an API appeared first on Dana Epp's Blog.
SecWiki News 2024-07-23 Review
For more of the latest articles, visit SecWiki
Meta Llama 3.1 now available on Workers AI
[AI Quick Read] A Summer Assignment in Open-Source Intelligence
威努特 (Winicssec) supports security assurance for the Summer Davos Forum!
Whose Voice Is It Anyway? AI-Powered Voice Spoofing for Next-Gen Vishing Attacks
Written by: Emily Astranova, Pascal Issa
Executive Summary
- AI-powered voice cloning can now mimic human speech with uncanny precision, making for more realistic phishing schemes.
- According to news reports, scammers have leveraged voice cloning and deepfakes to steal over HK$200 million from an organization.
- Attackers can use AI-powered voice cloning in various phases of the attack lifecycle, including initial access, lateral movement, and privilege escalation.
- Mandiant's Red Team uses AI-powered voice spoofing to test defenses, demonstrating the effectiveness of this increasingly sophisticated attack technique.
- Organizations can take steps to defend against this threat by educating employees and using source-verification measures such as code words.
Last year, Mandiant published a blog post on threat actor use of generative AI, exploring how attackers were using generative AI (gen AI) in phishing campaigns and information operations (IO), notably to craft more convincing content such as images and videos. We also shared insights into attackers' use of large language models (LLMs) to develop malware. In the post, we emphasized that while attackers are interested in gen AI, use has remained relatively limited.
This post builds on that initial research, diving into new AI tactics, techniques, and procedures (TTPs) and trends. We take a look at AI-powered voice spoofing, demonstrate how Mandiant red teams use it to test defenses, and provide security considerations to help stay ahead of the threat.
Growing AI-Powered Voice Spoofing Threat

Gone are the days of robotic scammers with barely decipherable scripts. AI-powered voice cloning can now mimic human speech with uncanny precision, injecting a potent dose of realism into phishing schemes. We are reading more stories about this threat in the news, such as the scammers who reportedly stole over HK$200 million from a company using voice cloning and deepfakes, and the Mandiant Red Team has now incorporated these TTPs when testing defenses.
Brief Overview of Vishing

Unlike its traditionally email-based counterpart, vishing (voice phishing) uses a voice-based approach. Rather than sending out an email with the hopes of garnering clicks, threat actors will instead place phone calls directly to individuals in order to earn trust and manipulate emotions, often by creating a sense of urgency.
Like traditional phishing, a threat actor's goal is to deceive individuals into divulging sensitive information, initiating malicious actions, or transferring funds using social engineering tactics. These deceptive calls often impersonate trustworthy entities such as banks, government agencies, or tech support, adding an extra layer of authenticity to the scam.
The rise of powerful AI tools such as text generators, image creators, and voice synthesizers has sparked a wave of open-source projects, making these technologies more accessible than ever before. This rapid development is putting the power of AI into the hands of a wider audience, fueling the potential for more convincing vishing attacks.
AI-Powered Voice Spoofing in the Attack Lifecycle

Modern voice cloning involves recording and processing audio and training a model. Training the model relies on a powerful combination of open-source libraries and algorithms, of which there are many popular choices today. Once these initial steps are completed, attackers may take additional time to understand the speech patterns of the individual being impersonated, and even write a script before conducting operations. This adds an extra layer of authenticity, making the attack more likely to succeed.
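The stages described above can be sketched at a high level as follows; every function body here is a deliberate placeholder, as no real cloning library or model is shown:

```python
# High-level sketch of the voice-cloning workflow described above.
# All function bodies are placeholders -- no actual cloning is performed.

def collect_samples(source):
    """Record or gather raw audio of the target voice (placeholder)."""
    return [f"{source}-clip-{i}" for i in range(3)]

def preprocess(clips):
    """Clean and normalize the recorded audio (placeholder)."""
    return [clip + ":normalized" for clip in clips]

def train_model(dataset):
    """Train a voice model on the processed audio (placeholder)."""
    return {"voice": "target", "samples": len(dataset)}

def synthesize(model, script):
    """Generate spoofed speech for a prepared, scripted call (placeholder)."""
    return f"audio<{model['voice']}|{script}>"

# Record and process audio, train a model, then deliver a scripted call.
model = train_model(preprocess(collect_samples("public-recordings")))
spoofed_call = synthesize(model, "Hi, it's your administrator...")
print(spoofed_call)
```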
Next, attackers may use AI-powered voice spoofing in different stages of the attack lifecycle.
Initial Access

There are various ways a threat actor can gain initial access using a spoofed voice. Threat actors can impersonate executives, colleagues, or even IT support personnel to trick victims into revealing confidential information, granting remote access to systems, or transferring funds. The inherent trust associated with a familiar voice can be exploited to manipulate victims into taking actions they would not normally take, such as clicking on malicious links, downloading malware, or divulging sensitive data. Although voice-based trust systems are seldom used, AI-spoofed voices can also potentially bypass voice-based authentication systems used for multi-factor authentication or password resets, granting unauthorized access to critical accounts.
Lateral Movement and Privilege Escalation

Threat actors can leverage AI voice spoofing to hop from system to system, impersonating trusted individuals to manipulate their way to higher access levels. There are a few ways this may unfold.
One method of lateral movement is chaining impersonations. Imagine an attacker initially gaining access by impersonating a helpdesk employee. After establishing communication with a network administrator, the attacker could subtly record the administrator's voice during the interaction. This captured audio can then be used to train a new AI voice spoofing model, allowing the attacker to seamlessly impersonate the administrator and initiate communication with other unsuspecting targets within the network. This chaining of impersonations enables the attacker to move laterally, potentially gaining access to more sensitive systems and data.
Alternatively, during the initial access phase, threat actors might discover readily available voice recordings on a compromised host, such as voicemails, meeting recordings, or even training materials. These recordings can be leveraged to train AI voice-spoofing models, allowing the attacker to impersonate specific individuals within the organization without needing to interact with them directly. This can be particularly effective for targeting high-value individuals or bypassing systems that rely on voice biometrics for access control.
Mandiant Red Team Proactive Case Study

In late 2023, Mandiant conducted a controlled red team exercise with a client, using AI voice spoofing to gain initial access to their internal network. This case study highlights the effectiveness of this increasingly sophisticated attack technique.
The exercise began with obtaining client consent and crafting a custom, realistic social-engineering pretext. The Red Team opted to impersonate a member of the client's security team, which required a natural voice sample. After the pretext was reviewed with the client, the Red Team received explicit permission to use that individual's voice for the exercise.
Next, we obtained the necessary audio data to train a model, and achieved a passable level of realism. Open-source intelligence (OSINT) played a crucial role in the next phase. By gathering employee data (job titles, locations, phone numbers), the Red Team identified potential targets most likely to recognize the impersonated voice and possess the necessary permissions for our objectives. With a curated target list, the team initiated spoofed calls via VoIP services and number spoofing.
After facing voicemail greetings and other initial hurdles, the first unsuspecting victim answered with a trusting "Hey boss, what's up?". The Red Team had reached a security administrator who reported to the person whose voice was spoofed. Leveraging the pretext of a "VPN client misconfiguration," the Red Team exploited the opportune timing of a recent global outage impacting the client's VPN provider. This carefully chosen scenario instilled a sense of urgency and increased the victim's susceptibility to our instructions.
Due to the trust in the voice on the phone, the victim bypassed security prompts from both Microsoft Edge and Windows Defender SmartScreen, unknowingly downloading and executing a pre-prepared malicious payload onto their workstation. The successful detonation of the payload marked the completion of the exercise, showcasing the alarming ease with which AI voice spoofing can facilitate the breach of an organization.
Security Considerations

This type of exploitation is social in nature, and technical detection controls are currently limited. Available mitigations center around three major principles: awareness, source verification, and future technical considerations.
Awareness

Educate employees, particularly those who control money and access, on the existence and methodologies of AI vishing attacks. Consider adding AI-enhanced threats to security awareness training. With such effective and accessible mimicry available to threat actors, everyone should now adopt a healthy dose of skepticism when dealing with phone calls, especially if they fall under one or more of the following cases:
- The caller is saying things that sound too good to be true.
- The call is from an untrusted number/entity.
- The caller tries to enforce questionable authority.
- The caller is out of character for the source.
Employees in trusted positions should be extremely wary of high-urgency calls that demand immediate action, especially when the caller asks for or offers financial or access-oriented information, such as requesting a one-time password (OTP). Employees should be empowered to hang up and report suspicious calls, especially if they believe AI vishing is involved. It is likely another employee is about to receive the same attack.
Source Verification

When possible, cross-reference the information with trusted sources. This includes hanging up and calling back at a number previously validated for the source. The caller can also be asked to send a text message from a previously validated number, or to follow up with an email or an enterprise chat message.
Train employees to spot audio inconsistencies, such as sudden variation of background noise, which could be a symptom of the threat actor not spending enough time cleaning the audio. Look for unusual speech patterns, like a completely different vernacular than what the source typically uses. Watch for unnatural inflections, fillers not commonly used by the source, strange clicks, pauses or abnormal repetition. Pay attention to voice timbre (tone) and cadence as well.
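As a hedged illustration of the "sudden variation of background noise" cue, a simple energy-based heuristic over PCM audio frames might look like the sketch below. Real deepfake detection is far more sophisticated; this only shows the kind of signal such a cue corresponds to.

```python
# Illustrative heuristic: flag sudden jumps in frame energy across
# fixed-size frames of signed 16-bit PCM samples. A jump in the noise
# floor can indicate spliced or poorly cleaned audio.
import math

def frame_rms(samples, frame_size=160):
    """RMS energy per frame of signed 16-bit samples."""
    frames = [samples[i:i + frame_size] for i in range(0, len(samples), frame_size)]
    return [math.sqrt(sum(s * s for s in f) / len(f)) for f in frames if f]

def suspicious_jumps(samples, ratio=4.0):
    """Frame indices where energy jumps by more than `ratio` vs the previous frame."""
    rms = frame_rms(samples)
    return [i for i in range(1, len(rms))
            if rms[i - 1] > 0 and rms[i] / rms[i - 1] > ratio]

# Synthetic example: a quiet hiss followed by an abrupt noise-floor change.
quiet = [10] * 480
loud = [200] * 480
print(suspicious_jumps(quiet + loud))  # → [3], the frame at the boundary
```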
Establish code words for executives and critical staff that deal with sensitive and/or financial information. Do this out of band so there is no trace within the enterprise to limit exposure in case of a breach. The code words can then be used to validate individuals in case of doubt.
If possible, let unknown numbers go to voicemail. Apply the same vigilance to voice calls that you would otherwise apply to emails. Report any suspicious calls for wider awareness.
Future Technical Considerations

Today, organizations can, at best, implement traditional security measures to protect audio conversations within the organization, such as using separate networks for VoIP channels and implementing authentication and transmission encryption for that traffic. However, this does not address attacks made against employees' personal phones.
Going forward, organizations should consider protecting all audio assets, implementing technologies such as digital watermarking that are subtle enough to be imperceptible to the human ear, but easily detected by AI technologies.
Eventually, mobile device management tools will offer technologies to help verify callers. In the meantime, organizations should consider requiring all sensitive conversations to occur over enterprise chat channels, where strong authentication is required, and identities are not easily spoofed.
Research and tools are actively being developed to help in detecting deepfakes. While they have inconsistent accuracy today, they can still provide value in identifying deepfakes in voicemail or offline voice notes. The detection capabilities will improve over time and eventually be adopted into supportable enterprise tooling. For additional reading, consider the active research going into real-time detection, such as DF-Captcha, which suggests a simple application to queue human prompts implemented using challenge response to validate the identity of the party on the other line.
Conclusion

In this blog post, we explored how modern AI tools can help create more convincing vishing attacks. The alarming success of Mandiant's vishing exercise underscores the urgent need for heightened security measures against AI voice-spoofing attacks. While technology offers powerful tools for both attackers and defenders, the human element remains the critical vulnerability. The case study we shared should serve as a wake-up call, urging organizations and individuals alike to take proactive steps.
Mandiant started leveraging AI voice-spoofing attacks in its more complex Red Team Assessments and Social Engineering Assessments to demonstrate the impact such an attack could have on an organization. As threat actors' use of this technique increases in frequency, it is imperative that defenders plan and take precautions.
Wanted: An SBOM Standard to Rule Them All
Shocked, Devastated, Stuck: Cybersecurity Pros Open Up About Their Layoffs
Find Threats Exploiting CrowdStrike Outage with TI Lookup
A recent update by CrowdStrike on July 18, 2024, resulted in a worldwide outage, causing significant disruption for users who were left with blue screens of death (BSODs) on their devices. Cybercriminals seized the opportunity to target affected users with phishing scams and malware. The ANY.RUN team has been closely monitoring the situation after the […]
The post Find Threats Exploiting CrowdStrike Outage with TI Lookup appeared first on ANY.RUN's Cybersecurity Blog.