Many core governance, risk, and compliance (GRC) principles were established years ago, rooted in financial controls and risk management. In the age of generative AI and large language models, we must take bold steps to evolve GRC so the business can adopt AI rapidly without exceeding the organization's risk appetite.
First, we'll examine the ways AI and ML challenge traditional GRC approaches, including the opacity of model logic, the potential for training-data leakage, and the particular risks of cloud-hosted LLMs.
We'll then explore how GRC can move forward to tackle these challenges. This requires combining classic GRC fundamentals, such as establishing clear system ownership, with newer practices such as MLOps and next-generation testing to ensure your AI workflows are both performant and secure.
Learning Objectives:
Articulate the new security challenges and risks introduced by AI and ML.
Explain how the pillars of classic GRC must evolve to help organizations implement LLMs and other AI applications securely.
Explain three GRC strategies that will help your team or organization move forward boldly and securely with AI.