Building the foundation for secure Generative AI



Generative Artificial Intelligence is a transformative technology that has captured the interest of companies worldwide and is quickly being integrated into enterprise IT roadmaps. Despite the promise and pace of change, business and cybersecurity leaders report caution about adoption due to security risks and concerns. A recent ISMG survey found that leakage of sensitive data was the top implementation concern cited by both business leaders and cybersecurity professionals, followed by the ingress of inaccurate data.

Cybersecurity leaders can mitigate many security concerns by reviewing and updating internal IT security practices to account for generative AI. Specific areas of focus for their efforts include implementing a Zero Trust model and adopting basic cyber hygiene standards, which notably still protect against 99% of attacks. However, generative AI providers also play an essential role in secure enterprise usage. Given this shared responsibility, cybersecurity leaders may seek to better understand how security is addressed throughout the generative AI supply chain.

Best practices for generative AI development are constantly evolving and require a holistic approach that considers the technology, its users, and society at large. But within that broader context, there are four foundational areas of protection that are particularly relevant to enterprise security efforts. These include data privacy and ownership, transparency and accountability, user guidance and policy, and secure by design.

  1. Data privacy and ownership

Generative AI providers should have clearly documented data privacy policies. When evaluating vendors, customers should ensure their chosen provider will allow them to retain control of their information, and that it will not be used to train foundation models or shared with other customers without their explicit permission.
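
For teams that want to make this evaluation repeatable, the questions above can be tracked as a simple checklist. The sketch below is illustrative only: the field names are hypothetical and should be adapted to each organization's own due-diligence criteria.

```python
from dataclasses import dataclass, fields

# Hypothetical checklist capturing the data-ownership questions discussed above.
@dataclass
class VendorDataPolicyReview:
    customer_retains_data_ownership: bool
    training_on_customer_data_requires_opt_in: bool
    customer_data_isolated_from_other_tenants: bool
    data_retention_and_deletion_documented: bool

def unresolved_items(review: VendorDataPolicyReview) -> list[str]:
    """Return the names of any checklist items that are not yet satisfied."""
    return [f.name for f in fields(review) if not getattr(review, f.name)]

review = VendorDataPolicyReview(
    customer_retains_data_ownership=True,
    training_on_customer_data_requires_opt_in=True,
    customer_data_isolated_from_other_tenants=True,
    data_retention_and_deletion_documented=False,
)
print(unresolved_items(review))  # ['data_retention_and_deletion_documented']
```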

  2. Transparency and accountability

Providers must maintain the credibility of the content their tools create. Like humans, generative AI will sometimes get things wrong. But while perfection cannot be expected, transparency and accountability should. To accomplish this, generative AI providers should, at minimum: 1) use authoritative data sources to foster accuracy; 2) provide visibility into reasoning and sources to maintain transparency; and 3) provide a mechanism for user feedback to support continuous improvement.
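
One way to make the second and third of these expectations concrete is to carry source citations and a feedback hook alongside every generated answer. The Python sketch below is a minimal illustration; the structures and field names are assumptions, not any particular vendor's API.

```python
from dataclasses import dataclass, field

# Illustrative structures only; names are assumptions, not a real vendor API.
@dataclass
class SourceCitation:
    title: str
    url: str

@dataclass
class AssistantResponse:
    answer: str
    citations: list[SourceCitation] = field(default_factory=list)

@dataclass
class UserFeedback:
    response_id: str
    helpful: bool
    comment: str = ""

def render_with_citations(response: AssistantResponse) -> str:
    """Append numbered source references so users can verify the answer."""
    lines = [response.answer]
    for i, citation in enumerate(response.citations, start=1):
        lines.append(f"[{i}] {citation.title} - {citation.url}")
    return "\n".join(lines)

resp = AssistantResponse(
    answer="Zero Trust assumes breach and verifies every request explicitly.",
    citations=[SourceCitation(title="Illustrative source", url="https://example.com/zero-trust-guidance")],
)
print(render_with_citations(resp))
```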

  3. User guidance and policy

Enterprise security teams have an obligation to ensure safe and responsible generative AI usage within their organizations. AI providers can help support their efforts in a number of ways.

Hostile misuse by insiders, however unlikely, is one such consideration. This would include attempts to engage generative AI in harmful actions like generating dangerous code. AI providers can help mitigate this type of risk by including safety protocols in their system design and setting clear boundaries on what generative AI can and cannot do.
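
As a rough illustration of what such a boundary might look like in practice, the sketch below screens prompts against a small set of disallowed-intent patterns before they ever reach a model. The categories and patterns are hypothetical; production systems typically layer policy classifiers, allow and deny lists, and human review on top of simple pattern matching.

```python
import re

# Hypothetical screening step run before a prompt reaches the model.
BLOCKED_PATTERNS = {
    "malware_generation": re.compile(
        r"\b(write|create|generate)\b.*\b(ransomware|keylogger|malware)\b", re.I
    ),
    "credential_theft": re.compile(
        r"\b(steal|harvest|phish)\b.*\b(passwords?|credentials?)\b", re.I
    ),
}

def screen_prompt(prompt: str) -> tuple[bool, str | None]:
    """Return (allowed, matched_category); blocked prompts never reach the model."""
    for category, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(prompt):
            return False, category
    return True, None

allowed, category = screen_prompt("Please generate ransomware that encrypts user files")
print(allowed, category)  # False malware_generation
```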

A more common area of concern is user overreliance. Generative AI is meant to assist workers in their daily tasks, not to replace them. Users should be encouraged to think critically about the information they are being served by AI. Providers can help by visibly citing sources and by using carefully considered language that promotes thoughtful usage.

  4. Secure by design

Generative AI technology should be designed and developed with security in mind, and technology providers should be transparent about their security development practices. Security development lifecycles can also be adapted to account for new threat vectors introduced by generative AI. This includes updating threat modeling requirements to address AI and machine learning-specific threats, and implementing strict input validation and sanitization of user-provided prompts. AI-aware red teaming, which probes for exploitable vulnerabilities as well as issues such as the generation of potentially harmful content, is another important security enhancement. Red teaming has the advantage of being highly adaptive and can be used both before and after product release.
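
To illustrate the input-validation step mentioned above, the following sketch applies a length limit, control-character stripping, and simple prompt-injection pattern checks to user-supplied prompts. It is a minimal example under assumed limits and patterns, not a complete defense; real deployments combine such checks with model-based classifiers and output filtering.

```python
import re

MAX_PROMPT_LENGTH = 4000  # assumed limit; tune to your application

# Illustrative patterns associated with prompt-injection attempts.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (your )?(system|hidden) prompt", re.I),
]

def validate_prompt(prompt: str) -> str:
    """Basic validation and sanitization before a user prompt is passed to the model."""
    if len(prompt) > MAX_PROMPT_LENGTH:
        raise ValueError("Prompt exceeds maximum allowed length")
    # Strip non-printable control characters that could smuggle unexpected tokens.
    sanitized = "".join(ch for ch in prompt if ch.isprintable() or ch in "\n\t")
    for pattern in INJECTION_PATTERNS:
        if pattern.search(sanitized):
            raise ValueError("Prompt matches a known injection pattern")
    return sanitized
```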

While this is a strong starting point, security leaders who wish to dive deeper can consult a number of promising industry and government initiatives that aim to help ensure safe and responsible generative AI development and usage. One such effort is the NIST AI Risk Management Framework, which provides organizations with a common methodology for mitigating concerns while supporting confidence in generative AI systems.

Undoubtedly, secure enterprise usage of generative AI must be supported by strong enterprise IT security practices and guided by a carefully considered strategy that includes implementation planning, clear usage policies, and related governance. But leading providers of generative AI technology understand they also have an essential role to play and are willing to provide information on their efforts to advance safe, secure, and trustworthy AI. Working together will not only promote secure usage but also drive the confidence needed for generative AI to deliver on its full promise.

