This session will open with the foundations of LLM technology. With an understanding of the building blocks, we will segue to common threats and attacks and why they work against LLMs. Next, the presentation will compare attacks on LLMs with related attack vectors in traditional programs such as XSS and SQL injection.
Our discussion will address the control plane vs. the data plane and how those two things are conflated in LLM technology. Since there is no clear distinction between control and data plane, we have many opportunities to trick LLMs into creating responses they were not intended to generate. Additionally we will demo actual attacks focusing primarily on prompt injection and then show how a user can adjust their language to appear like a system command instead of a slice of data to be analyzed.
Learning Objectives:
Understand the basics of LLM technology.
Craft threat models to protect AI-based applications.
Collaborate with ML engineers on securing their AI pipelines.