In November, researchers from JFrog announced the results of their effort to analyze the machine learning tool ecosystem, which resulted in the discovery of 22 vulnerabilities across 15 different ML projects, in both server-side and client-side components. Earlier, in October, Protect AI reported 34 vulnerabilities in the open-source AI/ML supply chain that were disclosed through its bug bounty program.
Research efforts such as these highlight that, as newer projects, many AI/ML frameworks may not be sufficiently mature from a security perspective and have not received the same level of scrutiny from the security research community as other types of software. While this is changing, with researchers increasingly examining these tools, malicious attackers are looking into them as well, and there seem to be enough flaws left for them to discover.
7. Security feature bypasses make attacks more potent
While organizations should always prioritize critical remote code execution vulnerabilities in their patching efforts, it's worth remembering that, in practice, attackers also leverage less severe flaws that are nevertheless useful in their attack chains, such as privilege escalations or security feature bypasses.