Can My Model Be Hacked? Understanding and Mitigating Security Vulnerabilities within LLMs
, CEO, Mindgard
Large language models (LLMs) offer tremendous potential, but understanding their emerging security vulnerabilities and subsequent risk to an organization is immensely challenging. These vulnerabilities allow attackers to target and exploit specific LLM behavior, leading to service disruption and data leaks, or reverse-engineer LLM capability for a fraction of its original development cost — all possible by simply talking to a LLM via normal user channels. This is an urgent issue that's being actively worked upon by both academics and industry.
We'll discuss what the security vulnerabilities within LLMs are, how they occur, why they're so difficult to overcome, and what can businesses do to mitigate and manage their AI security risk.
Events & Trainings:
Date:
Industry:
Level:
Topic:
NVIDIA technology: Cloud / Data Center GPU,cuBLAS,CUDA,cuDNN,Jetson,Nsight Compute