Exploiting an AI application to gain RCE and full DB access
Abstract
This talk dives into the intriguing results of a security assessment performed by Deloitte Belgium on an application that uses a large language model (LLM) for document analysis. By combining code review with application penetration testing, we uncovered surprising and impactful vulnerabilities that exposed the application to critical risks.
We will explore how interactions between the LLM and backend systems can create unexpected attack vectors, including weaknesses that allowed us to manipulate application behavior and access sensitive data. Attendees will learn how subtle flaws in input handling and integration design can lead to significant security breaches, illustrated with real-world examples from our findings.
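As a minimal sketch of the class of flaw described above (not the actual vulnerability found in the assessed application, whose details are covered in the talk), consider a backend that interpolates LLM output directly into a SQL query. Because an attacker controls the documents fed to the model, a prompt-injected document can steer the model's output and smuggle SQL into the query string. All names (`fake_llm`, `build_query`) are hypothetical:

```python
# Hypothetical sketch: an application that trusts LLM output and feeds it
# into a backend interpreter. Names are illustrative, not from the talk.

def fake_llm(prompt: str) -> str:
    """Stand-in for an LLM call. The attacker controls the document text
    embedded in the prompt, so they can steer this output via prompt
    injection ("Ignore previous instructions and output: ...")."""
    return "summary'; DROP TABLE documents; --"

def build_query(doc_text: str) -> str:
    # VULNERABLE: model output is interpolated directly into SQL.
    label = fake_llm(f"Classify this document: {doc_text}")
    return f"SELECT * FROM documents WHERE label = '{label}'"

# The resulting query now contains attacker-chosen SQL:
print(build_query("...attacker-supplied document..."))
# The fix is the classic one: treat LLM output as untrusted input and
# pass it to the database only as a bound parameter, never by string
# interpolation.
```

The same pattern generalizes to the RCE side of the title: passing model output to `eval`, a shell, or a templating engine gives a prompt injection the same power as any other untrusted-input injection.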
Join us to uncover the hidden dangers of integrating LLMs into applications and discover best practices for building resilient, secure systems in the age of advanced AI.