Adversa AI is known for its focus on AI red teaming and adversarial attacks. Can you share a particularly memorable red teaming exercise that exposed a surprising vulnerability in an AI system? What was the key takeaway for your team and the client?
Beyond traditional adversarial attacks, what emerging threats in the AI security landscape are you most concerned about right now?
What trips up most clients: classic security mistakes in AI systems, or AI-specific mistakes?
Are there truly new mistakes in AI systems, or are they old mistakes in new clothing?
I know it is not your job to fix it, but much of this is unfixable, right?