Aggregator
Observing the Anatomy of Peak Traffic
Scary Agent Skills: Hidden Unicode Instructions in Skills ...And How To Catch Them
There has been a lot of talk about Skills lately, both in terms of capabilities and security concerns. However, so far I haven’t seen anyone bring up hidden prompt injection. So I figured I’d demo a Skills supply chain backdoor that survives human review.
I also built a basic scanner and had my agent propose updates to OpenClaw to catch such attacks.
Attack Surface
Skills introduce common threats such as prompt injection, supply chain attacks, RCE, and data exfiltration. This post covers some basics, highlights the simplest prompt injection avenue, and shows how one can backdoor a real Skill from OpenAI with invisible Unicode Tag codepoints, which certain models, such as Gemini, Claude, and Grok, are known to interpret as instructions.
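The detection idea described above can be sketched in a few lines. Unicode Tag codepoints (U+E0000–U+E007F) mirror printable ASCII at a fixed offset, so a scanner can both flag them and recover the hidden payload. This is a minimal illustrative sketch, not the author's actual scanner:

```python
# Invisible Unicode "Tag" block: U+E0000-U+E007F. Tag characters
# U+E0020-U+E007E mirror ASCII 0x20-0x7E at offset 0xE0000, which lets
# an attacker hide readable instructions in a visually clean Skill file.
TAG_RANGE = range(0xE0000, 0xE0080)

def find_hidden_tags(text: str) -> list[tuple[int, str]]:
    """Return (index, codepoint) pairs for every Tag character found."""
    return [(i, f"U+{ord(ch):05X}") for i, ch in enumerate(text)
            if ord(ch) in TAG_RANGE]

def decode_tags(text: str) -> str:
    """Recover the hidden ASCII payload by undoing the 0xE0000 offset."""
    return "".join(chr(ord(ch) - 0xE0000) for ch in text
                   if 0xE0020 <= ord(ch) <= 0xE007E)
```

For example, a Skill file containing `"Hello" + payload + "World"`, where `payload` is the Tag-encoded string "ignore previous", renders as plain "HelloWorld" in most reviews, yet `decode_tags` recovers the injected instruction.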
GitLab security advisory (AV26-114)
H2O Is the New Space Oil: General Galactic Has Figured Out How to Reach Mars… on Water Propulsion
Remote Code Execution Vulnerability Discovered in Windows Notepad
“Drop Your Weapon,” Said the Dog: Mexican Police to Deploy Cyber-Dogs to Break Up Fan Brawls at the 2026 World Cup
Microsoft Patch Tuesday: 6 exploited zero-days fixed in February 2026
Microsoft has plugged 50+ security holes on February 2026 Patch Tuesday, including six zero-day vulnerabilities exploited by attackers in the wild.
The “security feature bypass” zero-days
Among the zero-days fixed are three vulnerabilities that allow attackers to bypass a security feature. CVE-2026-21513 affects the MSHTML/Trident browser engine in the Microsoft Windows version of Internet Explorer, and CVE-2026-21514 affects Microsoft Word. The former can be exploited by convincing a user to open a malicious …
The post Microsoft Patch Tuesday: 6 exploited zero-days fixed in February 2026 appeared first on Help Net Security.
US Court Hands Crypto Scammer 20 Years in $73m Case
[Open Source] XSAST-Python: An AI Code Audit Tool
That “summarize with AI” button might be manipulating you
Microsoft security researchers discovered a growing trend of AI memory poisoning attacks used for promotional purposes, referred to as AI Recommendation Poisoning. The MITRE ATLAS knowledge base classifies this behavior as AML.T0080: Memory Poisoning. The activity focuses on shaping future recommendations by inserting prompts that cause an assistant to treat specific companies, websites, or services as trusted or preferred. Once stored, these entries can affect responses in later, unrelated conversations. Manipulated assistants may influence recommendations …
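A crude defense against the pattern described above is to screen candidate memory writes before they are persisted. The regexes and function name below are illustrative assumptions (this is not Microsoft's or MITRE's tooling), sketching how phrases that steer future recommendations toward a specific vendor might be flagged:

```python
import re

# Hypothetical heuristic for AI Recommendation Poisoning (ATLAS AML.T0080):
# flag memory entries that instruct the assistant to treat a specific
# company, site, or service as "trusted" or "preferred" going forward.
POISON_PATTERNS = [
    r"\b(always|from now on)\b.{0,40}\b(recommend|suggest|prefer)\b",
    r"\bremember\b.{0,60}\b(trusted|preferred|best)\b",
    r"\btreat\b.{0,40}\bas (a )?(trusted|preferred)\b",
]

def looks_poisoned(memory_entry: str) -> bool:
    """Return True if the entry matches a known poisoning phrasing."""
    text = memory_entry.lower()
    return any(re.search(p, text) for p in POISON_PATTERNS)
```

In practice such keyword lists are easy to evade, so they would serve as a first-pass filter ahead of human or model-based review, not a complete defense.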
The post That “summarize with AI” button might be manipulating you appeared first on Help Net Security.