SecWiki News 2025-09-05 Review
AI4SDL: Building a Security Assurance System for Business Application Development in Practice by ourren
ReDAN: An Empirical Study of Remote Denial-of-Service Attacks Against NAT Networks by ourren
For more of the latest articles, visit SecWiki.
Sep 05, 2025 - Lina Romero - In 2025, we are seeing an unprecedented rise in the volume and scale of AI attacks. Since AI is still a relatively new beast, developers and security teams alike are struggling to keep up with the changing landscape. The OWASP Top 10 Risks for LLMs is a great jumping-off point to gain insight into the biggest risks and how to mitigate them.

Excessive Agency

Agency refers to a model's ability to call functions, interface with systems, and undertake actions. Developers grant each AI agent the degree of agency its use case requires. When an LLM malfunctions, an AI agent should respond appropriately, within the agency it has been given. Excessive Agency occurs when an AI agent responds inappropriately, performing "damaging actions" in response to unusual LLM outputs. Excessive Agency is ultimately caused by design flaws, stemming from one of the following:

- Excessive functionality: an LLM has access to extensions that include functions it does not need to do its job, or it still has access to plugins from the development phase that are no longer needed.
- Excessive permissions: an LLM has permissions for downstream functionality and systems beyond what was originally intended.
- Excessive autonomy: an LLM performs actions it has not been approved for.

The effects of Excessive Agency vulnerabilities can be catastrophic, leading to PII breaches, financial losses, and more. However, there are ways to mitigate and prevent Excessive Agency:

- Limit extensions: Only allow the LLM to interact with the minimum necessary set of extensions.
- Know your agents: If you can't see it, you can't secure it! Keep a centralized inventory to track all agents and interactions.
- Limit extension functionality: Ensure that the functions implemented in an LLM's extensions are strictly necessary for its intended purpose.
- Assess your agents: Test agents as a whole, including the sum of their application code.
- No open-ended extensions: Open-ended extensions are rarely strictly necessary; prefer extensions with more granular functionality, since open-ended ones expose the LLM to more vulnerabilities than they are worth.
- Require human approval: For some high-impact actions, it may be necessary to put guardrails around them that require permission from an actual user.
- Assess application code: Review input and output handling to see where upstream and downstream vulnerabilities lie.
- Sanitize LLM inputs and outputs: Sanitization is a best practice for AI security in general, but following OWASP's recommendations around Application Security Verification Standards (ASVS) and focusing on input sanitization is particularly critical.
- Documentation is king: We've said it before and we'll say it again: log everything carefully and monitor those logs with detections.
- Complete mediation: Instead of relying on an LLM to decide whether an action is allowed, implement authorizations in downstream systems and enforce the complete mediation principle so every request is validated before completion (see the sketch below).

Overall, Excessive Agency occurs when an LLM performs actions and behaves in ways outside of what it was created for. It is therefore a major risk to AI security and needs to be mitigated by secure coding and development practices such as implementing authorizations, sanitizing data, and more. To learn how FireTail can help you protect against Excessive Agency and the other risks outlined in the OWASP Top 10 for LLMs, set up a demo or get started with our free tier today.
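To make the last few mitigations concrete, here is a minimal Python sketch of the pattern described above: an allow-listed tool registry, a human-approval guardrail for high-impact actions, and a downstream authorization check so the LLM never gets the final say (complete mediation). All names here (ToolCall, TOOL_REGISTRY, require_human_approval, downstream_authorize) are hypothetical placeholders for illustration, not FireTail or OWASP APIs.

```python
# Hypothetical sketch: a gate between an LLM's proposed tool calls and the
# downstream systems that actually execute them.
from dataclasses import dataclass, field


@dataclass
class ToolCall:
    """A structured action proposed by the LLM."""
    tool: str
    args: dict = field(default_factory=dict)


# Limit extensions and their functionality: only explicitly registered tools,
# each with the minimal set of allowed arguments and a high-impact flag.
TOOL_REGISTRY = {
    "read_invoice": {"high_impact": False, "allowed_args": {"invoice_id"}},
    "issue_refund": {"high_impact": True, "allowed_args": {"invoice_id", "amount"}},
}


def require_human_approval(call: ToolCall) -> bool:
    """Placeholder guardrail: a real system would route this to a human reviewer."""
    print(f"[approval required] {call.tool} {call.args}")
    return False  # default-deny until a person confirms


def downstream_authorize(user_id: str, call: ToolCall) -> bool:
    """Complete mediation: the downstream system re-checks the caller's own
    permissions on every request instead of trusting the LLM's judgment."""
    # Illustrative stub; a real check would consult the system of record.
    return call.tool == "read_invoice"


def execute_tool_call(user_id: str, call: ToolCall) -> None:
    spec = TOOL_REGISTRY.get(call.tool)
    if spec is None:
        raise PermissionError(f"unknown or unregistered tool: {call.tool}")
    if not set(call.args) <= spec["allowed_args"]:
        raise PermissionError(f"unexpected arguments for {call.tool}: {call.args}")
    if spec["high_impact"] and not require_human_approval(call):
        raise PermissionError(f"human approval denied for {call.tool}")
    if not downstream_authorize(user_id, call):
        raise PermissionError(f"user {user_id} is not authorized for {call.tool}")
    # Log everything: the audit trail is what detections run against.
    print(f"[audit] user={user_id} tool={call.tool} args={call.args}")
    # ...dispatch to the real extension here...


if __name__ == "__main__":
    execute_tool_call("alice", ToolCall("read_invoice", {"invoice_id": "INV-42"}))
```

The design choice worth noting is default-deny: anything not explicitly registered, argument-checked, approved, and authorized downstream is rejected, so unusual LLM outputs fail closed rather than turning into damaging actions.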
The post LLM06: Excessive Agency – FireTail Blog appeared first on Security Boulevard.
Solution Providers Rank IRONSCALES as the Top Performer in Security - Email and Web
Today we’re excited to announce that IRONSCALES has earned a 2025 CRN Annual Report Card (ARC) Award in Security - Email and Web from CRN®, a brand of The Channel Company. The ARC Awards spotlight the technology vendors providing best-in-class products and solution provider partnerships throughout the IT channel ecosystem.
The post IRONSCALES Honored with CRN 2025 Annual Report Card (ARC) Award appeared first on Security Boulevard.
A recently discovered strain of cryptomining malware has captured the attention of security teams worldwide by abusing the built-in Windows Character Map application as an execution host. The threat actor initiates the attack through a PowerShell script that downloads and executes a heavily obfuscated AutoIt loader entirely in memory, avoiding disk writes and common detection […]
The post New Malware Leverages Windows Character Map to Bypass Windows Defender and Mine Cryptocurrency for The Attackers appeared first on Cyber Security News.