Moreover, under the 2023 White House executive order on AI safety and security, NIST last week released three final guidance documents, along with a draft guidance document from the newly created US AI Safety Institute, all intended to help mitigate AI risks. NIST also re-released Dioptra, a test platform for assessing AI’s “trustworthy” characteristics, namely AI that is “valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair,” with harmful bias managed.
CISOs should prepare for a rapidly changing environment
Despite the enormous intellectual, technical, and government resources devoted to creating AI risk models, practical advice for CISOs on how to best manage AI risks is currently in short supply.
Although CISOs and security teams have come to understand the supply chain risks of traditional software and code, particularly open-source software, managing AI risks is a whole new ballgame. “The difference is that AI and the use of AI models are new,” Alon Schindel, VP of data and threat research at Wiz, tells CSO.