A new technical paper, “Cascade: Composing Software-Hardware Attack Gadgets for Adversarial Threat Amplification in Compound AI Systems,” was published by researchers from the University of Texas at Austin, Intel Labs, Symmetry Systems, Microsoft, and Georgia Tech.
Abstract
“Rapid progress in generative AI has given rise to Compound AI systems – pipelines comprised of multiple large language models (LLM), software to...
The “Cascade” research represents a critical inflection point in our understanding of AI security. While the field has largely fixated on algorithmic vulnerabilities within LLMs themselves – model extraction, data leakage – this work exposes a simpler, and potentially more devastating, parallel threat: weaponizing the foundational infrastructure that supports these systems. The researchers’ approach is precisely what we should expect as Compound AI systems become more deeply embedded in critical operations – co...
