If an LLM creates secure code, how could an LLM find a vulnerability in it?
I’m sure I’m not thinking straight here, but if we use AI to create code and prompt it to make that code as secure as possible, then, once the code is generated, how could an AI find any vulnerabilities in it?

submitted by /u/heinternets
Original source: Reddit r/cybersecurity
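One way to see why the premise does not quite hold: prompting a model to write "secure" code does not make the output provably secure, so a second review pass, whether by a human, a static analyzer, or another LLM run, can still catch flaws the generator missed. The sketch below is a hypothetical illustration invented for this point, not from the original post; it shows the kind of subtle bug that secure-looking code often carries.

```python
import hmac

# Hypothetical example of "secure-looking" generated code: the early-exit
# string comparison (==) leaks timing information, so an attacker can in
# principle guess the token one character at a time.
def check_token_vulnerable(supplied: str, expected: str) -> bool:
    return supplied == expected  # subtle flaw: not constant-time

# What a review pass (human, linter, or a second LLM) would flag,
# and the standard fix: a constant-time comparison.
def check_token_fixed(supplied: str, expected: str) -> bool:
    return hmac.compare_digest(supplied.encode(), expected.encode())

if __name__ == "__main__":
    print(check_token_vulnerable("abc123", "abc123"))  # True
    print(check_token_fixed("abc123", "abc124"))       # False
```

The asymmetry is the point: writing code that avoids every pitfall at once is harder than recognizing one concrete pitfall during review, which is why generate-then-review pipelines can still surface vulnerabilities in a model's own output.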
RELATED · cyber
- [CYBER] Built a platform that combines phishing detection, encrypted file sharing, and cloud security scanning
- [CYBER] CVE-2026-8230 - Wavlink NU516U1 login.cgi sys_login1 os command injection
- [CYBER] Cookie, Session, and Token-based authentication
- [CYBER] CVE-2026-7258 - Out-of-bounds read in urldecode() on NetBSD
- [CYBER] CVE-2026-6722 - Use-After-Free in SOAP using Apache map
- [CYBER] CVE-2026-8229 - Wavlink NU516U1 wireless.cgi WifiBasic os command injection