With the increased interest in and use of AI such as GPT-3/4, ChatGPT, GitHub Copilot, and internal modeling come a wide range of use cases for increased efficiency, but also inherent security risks that organizations should consider. In this talk, Invicti's CTO & Head of Security Research Frank Catucci discusses potential use cases and walks through real-life examples of using AI in production environments. Frank delves into the benefits as well as the security implications, touching on a number of security aspects to consider, including supply chain security, SBOMs, licensing, risk mitigation, and risk assessment. Frank also covers some of the attacks that can result from using AI-generated code, such as intellectual property leaking via a prompt injection attack and data poisoning. Lastly, Frank shares the Invicti security team's real-life experience with AI, including early successes and failures.
This segment is sponsored by Invicti. Visit https://securityweekly.com/invicti to learn more about them!
Ferrari refuses ransomware, OpenAI deals with security issues from caching, video killed a crypto ATM, GitHub rotates their RSA SSH key, bypassing CloudTrail, terms and techniques for measuring AI security and safety
Visit https://www.securityweekly.com/asw for all the latest episodes!
Follow us on Twitter: https://www.twitter.com/secweekly
Like us on Facebook: https://www.facebook.com/secweekly
Show Notes: https://securityweekly.com/asw234