Super Prompt: The Generative AI Podcast
How do you extract prohibited information from ChatGPT? What are Grandma and DAN exploits? Why do they work? What can Large Language Model (LLM) companies do to protect themselves? Grandma exploits, or hacks, are ways of tricking ChatGPT into giving you information that violates company policy, for example, confidential, dangerous, or otherwise inappropriate information. DAN ("Do Anything Now") prompts work similarly, instructing the model to role-play an unrestricted persona. "Jailbreaking" is slang for removing the artificial limitations on iPhones to install apps not approved by Apple. It turns out there are ways to jailbreak LLMs, too. The tech companies supplying LLMs as a service want to provide a safe, legally compliant environment. How can this be done without hampering the flexibility and usefulness of creative prompting?
For more information, check out https://www.superprompt.fm. There you can contact me and/or sign up for our newsletter.