In this bonus episode, recorded live at our San Francisco office, security-startup founders Dean De Beer (Command Zero), Kevin Tian (Doppel), and Travis McPeak (Resourcely) share their thoughts on generative AI, as well as their experiences building with LLMs and dealing with LLM-based threats.
Here's a sample of what Dean had to say about the myriad considerations when choosing, and operating, a large language model:
"The more advanced your use case is, the more requirements you have, the more data you attach to it, the more complex your prompts — ll this is going to change your inference time.
"I liken this to perceived waiting time for an elevator. There's data scientists at places like Otis that actually work on that problem. You know, no one wants to wait 45 seconds for an elevator, but taking the stairs will take them half an hour if they're going to the top floor of . . . something. Same thing here: If I can generate an outcome in 90 seconds, it's still too long from the user's perspective, even if them building out and figuring out the data and building that report [would have] took them four hours . . . two days."
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.