Guest: Steve Wilson, Chief Product Officer, Exabeam [@exabeam] & Project Lead, OWASP Top 10 for Large Language Model Applications [@owasp]
On LinkedIn | https://www.linkedin.com/in/wilsonsd/
On Twitter | https://x.com/virtualsteve
____________________________
Hosts:
Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]
On ITSPmagazine | https://www.itspmagazine.com/sean-martin
Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast
On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli
____________________________
Episode Notes
In this episode of the Chat on the Road On Location series for OWASP AppSec Global in San Francisco, Sean Martin hosts a compelling conversation with Steve Wilson, Project Lead for the OWASP Top 10 for Large Language Model Applications. The discussion, as you might guess, centers on the OWASP Top 10 list for Large Language Models (LLMs) and the security challenges associated with these technologies. Wilson highlights the growing relevance of AppSec, particularly with the surge in interest in AI and LLMs.
The conversation kicks off with an exploration of the LLM project Wilson leads at OWASP and the update to the OWASP Top 10 for LLMs he will present at the event. Wilson emphasizes the significance of prompt injection attacks, one of the key concerns on the OWASP list. He explains how attackers can craft prompts that manipulate LLMs into performing unintended actions, a tactic reminiscent of the SQL injection attacks that have plagued traditional software for years. This serves as a stark reminder of the need for vigilance in the development and deployment of LLMs.
Supply chain risks are another critical issue discussed. Wilson draws parallels to the Log4j incident, stressing that the AI software supply chain is currently a weak link. With the rapid growth of platforms like Hugging Face, the provenance of AI models and training datasets becomes a significant concern. Ensuring the integrity and security of these components is paramount to building robust AI-driven systems.
The notion of excessive agency is also explored, a concept that concerns the permissions and responsibilities assigned to LLMs. Wilson underscores the importance of limiting the scope of what an LLM is allowed to do to prevent misuse or unauthorized actions, a point that echoes traditional security principles like least privilege, recontextualized for the AI age.
Overreliance on LLMs is another topic Martin and Wilson discuss. They explore how people can place undue trust in AI outputs, leading to potentially hazardous outcomes. Ensuring users understand the limitations and potential inaccuracies of LLM-generated content is essential for safe and effective AI use.
Wilson also provides a preview of his upcoming session at the OWASP AppSec Global event, where he plans to share insights from the ongoing work on the 2.0 version of the OWASP Top 10 for LLMs. This next iteration will address how the field has matured and new security considerations that have emerged since the initial list.
Be sure to follow our Coverage Journey and subscribe to our podcasts!
____________________________
This Episode’s Sponsors
Are you interested in sponsoring our event coverage with an ad placement in the podcast?
Learn More 👉 https://itspm.ag/podadplc
____________________________
Follow our OWASP 2024 Global AppSec San Francisco coverage: https://www.itspmagazine.com/owasp-2024-global-appsec-san-francisco-cybersecurity-and-application-security-event-coverage
On YouTube: 📺 https://www.youtube.com/playlist?list=PLnYu0psdcllTcqoGpeR1rdo6p47Ozu1jt
Be sure to share and subscribe!
____________________________
Resources
OWASP Top 10 for Large Language Models: Project Update: https://owasp2024globalappsecsanfra.sched.com/event/1g3YF/owasp-top-10-for-large-language-models-project-update
Safeguarding Against Malicious Use of Large Language Models: A Review of the OWASP Top 10 for LLMs | A Conversation with Jason Haddix | Redefining CyberSecurity with Sean Martin: https://itsprad.io/redefining-cybersecurity-190
OWASP LLM AI Security & Governance Checklist: Practical Steps To Harness the Benefits of Large Language Models While Minimizing Potential Security Risks | A Conversation with Sandy Dunn | Redefining CyberSecurity Podcast with Sean Martin: https://itsprad.io/redefiningcybersecurity-287
Hacking Humans Using LLMs with Fredrik Heiding: Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models | Las Vegas Black Hat 2023 Event Coverage | Redefining CyberSecurity Podcast With Sean Martin and Marco Ciappelli: https://itsprad.io/redefining-cybersecurity-208
Learn more about OWASP 2024 Global AppSec San Francisco: https://sf.globalappsec.org/
____________________________
Catch all of our event coverage: https://www.itspmagazine.com/technology-cybersecurity-society-humanity-conference-and-event-coverage
To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcast
To see and hear more Redefining Society stories on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-society-podcast
Are you interested in sponsoring our event coverage with an ad placement in the podcast?
Learn More 👉 https://itspm.ag/podadplc
Want to tell your Brand Story as part of our event coverage?
Learn More 👉 https://itspm.ag/evtcovbrf