How organizations can secure their AI code

While Reworkd was open about its error, many similar incidents never become public; CISOs often hear about them only behind closed doors. Financial institutions, healthcare systems, and e-commerce platforms have all encountered security challenges as code completion tools introduce vulnerabilities, disrupt operations, or compromise data integrity. Many of these risks stem from the AI-generated code itself, from hallucinated library names, or from third-party dependencies that enter a codebase untracked and unverified.
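One way organizations can defend against hallucinated or unvetted dependencies, sketched here purely as an illustration rather than anything described by the companies above, is a pre-merge check that validates every declared package before it can land. The file names, allowlist format, and PyPI lookup below are all assumptions for the sake of the example:

```python
"""
Minimal sketch of a CI dependency gate, assuming a plain requirements.txt
and an internal allowlist file. All file names and the allowlist format
are hypothetical; only the public PyPI JSON endpoint is a real API.
"""
import sys
import urllib.error
import urllib.request

ALLOWLIST_FILE = "approved-packages.txt"  # hypothetical internal allowlist
REQUIREMENTS_FILE = "requirements.txt"


def package_exists_on_pypi(name: str) -> bool:
    """Return True if the package name resolves on the public PyPI index."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404 here often means a hallucinated or typo'd name


def main() -> int:
    allowlist = {
        line.strip().lower()
        for line in open(ALLOWLIST_FILE)
        if line.strip()
    }
    failures = []
    for line in open(REQUIREMENTS_FILE):
        # Strip comments and version pins; keep only the bare package name.
        name = line.split("#")[0].split("==")[0].split(">=")[0].strip().lower()
        if not name:
            continue
        if name in allowlist:
            continue
        if package_exists_on_pypi(name):
            failures.append(f"{name}: real package, but not on the internal allowlist")
        else:
            failures.append(f"{name}: not found on PyPI -- possible hallucinated dependency")
    for msg in failures:
        print(f"BLOCKED {msg}", file=sys.stderr)
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(main())
```

Distinguishing the two failure modes matters: a real-but-unapproved package is a governance problem, while a name that does not exist on the public index at all is a strong signal of an AI hallucination, which attackers can exploit by registering the phantom name themselves.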

“We’re facing a perfect storm: increasing reliance on AI-generated code, rapid growth in open-source libraries, and the inherent complexity of these systems,” says Jens Wessling, chief technology officer at Veracode. “It’s only natural that security risks will escalate.”

Often, code completion tools like ChatGPT, GitHub Copilot, or Amazon CodeWhisperer are used covertly. A survey by Snyk found that roughly 80% of developers bypass security policies to incorporate AI-generated code. This practice creates blind spots for organizations, which then struggle to mitigate the security and legal issues that follow.
