Kubernetes MLSec: Securing AI in Space - Francesco Beltramini & James Callaghan, ControlPlane

Abstract

In the gold rush to unearth the next groundbreaking AI technology, operational and data security have become the first victims. We feed ever greater volumes of PII and proprietary secrets into models running on “other people’s computers” and receive fewer guarantees than ever before about the safety and sanctity of our data. High-profile breaches with cross-customer data leaks and training on user inputs lead us to ask: do we trust that model input data is unpolluted and verified? Are we sure inputs remain ours and aren’t used to train other systems? Will your financial history be used to define insurance rates? Cloud native is here to help! In this talk we:

- Threat model Kubernetes-powered MLOps
- Break into and poison a Kubernetes model-training environment
- Demonstrate the dangers inherent in feeding data into any LLM and training ML models on it
- Suggest cloud native architectural and procedural remediations
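
To make the remediation point above concrete, here is a minimal sketch of one procedural control in the spirit of the talk: verifying a training dataset against a pinned digest before it reaches the training job, so a tampered or poisoned dataset fails closed. Everything here is illustrative; the `PINNED_SHA256` value, file name, and script shape are hypothetical assumptions, not material from the talk itself.

```python
import hashlib
import sys
from pathlib import Path

# Hypothetical pinned digest, published out-of-band by the data owner
# (e.g. committed to source control alongside the pipeline definition).
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"


def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large datasets need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def main() -> None:
    dataset = Path(sys.argv[1]) if len(sys.argv) > 1 else Path("train.csv")
    actual = sha256_of(dataset)
    if actual != PINNED_SHA256:
        # Refuse to train on data whose provenance cannot be verified:
        # a mismatch may be innocent drift, but it may also be poisoning.
        sys.exit(f"digest mismatch for {dataset}: got {actual}")
    print(f"{dataset}: digest verified, safe to feed to the training job")


if __name__ == "__main__":
    main()
```

Run as a gate step before the training container starts, e.g. `python verify_dataset.py train.csv`; a non-zero exit aborts the pipeline. The same fail-closed pattern extends to signed datasets and admission-time checks in a Kubernetes MLOps environment.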
