This session guides participants through regulatory expectations for secure and trustworthy AI, covering frameworks such as the EU AI Act, the US National AI Strategy, and the AI Bill of Rights, as well as standards like ISO/IEC 42001 (AIMS) and the OWASP LLM Top 10. We will delve into the cross-functional aspects of GRC frameworks for Gen AI, offering a maturity roadmap that GRC professionals can implement in their organizations.
This interactive discussion and presentation also includes deep dives into uplifting security and privacy controls, covering threat vectors and risk assessment methodologies. Participants will leave with a ready-to-use toolkit to start their organizations on their AI governance and risk management journey, enabling their businesses to balance outcomes and risks.
Learning Objectives:
Learn the regulatory expectations for building secure, safe, and trustworthy AI, and understand the requirements that GRC teams must prepare their organizations to meet at a holistic level.
Understand a comparative analysis of recently published standards and frameworks such as the NIST AI RMF, the OECD AI Principles, and the OWASP LLM Top 10, which can guide GRC specialists on Gen AI use cases. Moreover, understand how adopting additional security and privacy measures addresses Gen AI risks.
Learn how to bring together the various parts of the organization to establish a well-rounded GRC function that can handle Gen AI use cases.