The risks of entry-level developers over-relying on AI

“AI can produce secure-looking code, but it lacks contextual awareness of the organization’s threat model, compliance needs, and adversarial risk environment,” Moolchandani says.

Tuskira’s CISO lists two major issues: first, that AI-generated security code may not be hardened against evolving attack techniques; and second, that it may fail to reflect the specific security landscape and needs of the organization. Additionally, AI-generated code might give a false sense of security, as developers, particularly inexperienced ones, often assume it is secure by default.
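To make the “secure-looking but not hardened” problem concrete, here is a minimal illustrative sketch (not drawn from any tool quoted in this article). The first function is the kind of password-hashing code an assistant might plausibly emit: it runs, and it reads as security-conscious, but it uses a fast, unsalted hash that is trivially cracked offline. The second shows a hardened alternative using a memory-hard key-derivation function from Python’s standard library.

```python
import hashlib
import os

# What an assistant might plausibly emit: compiles and looks
# "secure," but MD5 is fast and unsalted -- precomputed tables and
# commodity GPUs make offline cracking trivial.
def hash_password_naive(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# A hardened alternative: scrypt, a memory-hard KDF in the standard
# library, with a random per-user salt. Parameters (n, r, p) follow
# commonly recommended defaults.
def hash_password_hardened(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)
    return salt, digest
```

Both functions “work,” which is exactly the trap: an inexperienced developer reviewing the first one sees hashing and assumes the job is done, while the weaknesses only surface under an attacker’s threat model.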

Furthermore, there are compliance risks: AI-generated code can violate licensing terms or regulatory standards, which can lead to legal issues down the line. “Many AI tools, especially those generating code based on open-source codebases, can inadvertently introduce unvetted, improperly licensed, or even malicious code into your system,” O’Brien says.

Open-source licenses, for example, often have specific requirements regarding attribution, redistribution, and modifications, and relying on AI-generated code could mean accidentally violating these licenses. “This is particularly dangerous in the context of software development for cybersecurity tools, where compliance with open-source licensing is not just a legal obligation but also impacts security posture,” O’Brien adds. “The risk of inadvertently violating intellectual property laws or triggering legal liabilities is significant.”
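One partial mitigation is to audit the license metadata of a project’s dependencies rather than trusting generated code blindly. The following is a minimal sketch, assuming a Python project; it uses the standard-library importlib.metadata to flag installed packages whose declared license is missing or matches common copyleft families. The COPYLEFT_MARKERS list is illustrative, not a legal determination.

```python
from importlib import metadata

# License families that commonly carry attribution or redistribution
# obligations; this list is illustrative, not exhaustive or legal advice.
COPYLEFT_MARKERS = ("GPL", "LGPL", "AGPL", "MPL")

def audit_installed_licenses() -> None:
    """Print installed packages whose license metadata warrants review."""
    for dist in metadata.distributions():
        name = dist.metadata.get("Name") or "unknown"
        lic = dist.metadata.get("License") or "UNKNOWN"
        if lic == "UNKNOWN" or any(m in lic for m in COPYLEFT_MARKERS):
            print(f"review: {name} -> {lic}")

if __name__ == "__main__":
    audit_installed_licenses()
```

A script like this only catches declared metadata; dedicated license-scanning tools go further, but even this level of checking surfaces obligations that AI-generated dependency suggestions can silently pull in.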
