You conducted research analyzing 2,000 papers on AI attacks published over the past decade. What are the main insights?
How do you approach identifying the relevant threat models for different AI systems and scenarios?
Which threats are real today, and which will only become real in a few years?
What are the common attack vectors? What do you see in the field of supply-chain attacks on AI, covering both the software supply chain and the data supply chain?
How real are the widely reported cyber-physical attacks on computer vision, and what are possible examples of exploitation? Do they pose a real danger to people?
What are the main differences between protecting AI vs protecting traditional enterprise applications?
Who should be responsible for securing AI? And who for building trustworthy AI?
Given that the machinery of AI is often opaque, how does one go about discovering vulnerabilities? Is there a responsible-disclosure process for AI vulnerabilities, such as those in open-source models and public APIs?
What should companies do first when embarking on an AI security program? And which companies should have such a program?