Three AI coding agents leaked secrets through a single prompt injection. One vendor's system card predicted it

A security researcher, working with colleagues at Johns Hopkins University, opened a GitHub pull request, typed a malicious instruction into the PR title, and watched Anthropic’s Claude Code Security Review action post its own API key as a comment. The same prompt injection worked on Google’s Gemini
ORIGINAL SOURCE →via VentureBeat
ADVERTISEMENT
⚡ STAY AHEAD
Events like this, convergence-verified across 689 sources, land in your inbox every Sunday. Free.
GET THE SUNDAY BRIEFING →RELATED · cyber
- [CYBER] Lazarus Lures Developers With Backdoored Coding Tests
- [CYBER] Recent Microsoft Defender Vulnerability Exploited as Zero-Day
- [CYBER] Microsoft Graph API misused by new GoGra Linux malware for hidden communication
- [CYBER] Xinference PyPI Breach Exposes Developers to Cloud Credential Theft
- [CYBER] Any Chance Something Happened?
- [CYBER] Fake Wallpaper App, YouTube Channel Used to Spread notnullOSX Malware