Three AI coding agents leaked secrets through a single prompt injection. One vendor's system card predicted it

A security researcher, working with colleagues at Johns Hopkins University, opened a GitHub pull request, typed a malicious instruction into the PR title, and watched Anthropic’s Claude Code Security Review action post its own API key as a comment. The same prompt injection worked on Google’s Gemini
ORIGINAL SOURCE →via VentureBeat
ADVERTISEMENT
⚡ STAY AHEAD
Events like this, convergence-verified across 689 sources, land in your inbox every Sunday. Free.
GET THE SUNDAY BRIEFING →RELATED · cyber
- [CYBER] The Ungoverned Workforce: Cybersecurity Insiders Finds 92% Lack Visibility Into AI Identities
- [CYBER] After the Kelp hack, TVL in DeFi faced a liquidity outflow - Coinspot.io
- [CYBER] Trojanized Android App Fuels New Wave of NFC Fraud
- [CYBER] Ransomware negotiator pleads guilty to helping ransomware gang
- [CYBER] CVE-2026-40565 - FreeScout has Stored XSS / CSS Injection via linkify() — Unescaped URL in Anchor href
- [CYBER] CVE-2025-15638 - Net::Dropbear versions before 0.14 for Perl contains a vulnerable version of libtomcrypt