This week, the Zuck strikes again - Meta unveils Code Llama, a state-of-the-art AI code generator, to challenge OpenAI's dominance. We explore the implications of AI models training themselves, and how that could accelerate capabilities. Then we put ElevenLabs' multilingual speech synthesis to the test, using it to generate a fake phishing call on our mother. Don't miss our scandalous experiments pushing AI to its limits in this jam-packed episode!
If you like the pod, please consider subbing, liking, commenting etc. xox
CHAPTERS:
======
00:00 - Rehearsing the Phishing Call to Our Mother (Cold Open)
00:19 - Meta's Code Llama
08:24 - Using Unnatural Instructions to Train AI Models
15:06 - Why Didn't Meta Release the Unnatural Instructions Code Llama Model? The Sparks of AGI?
16:50 - Evolution of GPT: Are Unnatural Instructions the Next Evolution of Models?
23:04 - DeepMind's Reinforced Self-Training (ReST) for Language Modeling Paper and Thoughts on Future Models
36:09 - Fine-Tuning GPT-3.5 Turbo Announced by OpenAI: Should You Just Fine-Tune Open Source?
44:05 - ElevenLabs Out of Beta and Multilingual v2: Explained by AI Us
48:12 - Chris Tried to Figure Out AI Phishing
53:03 - Rehearsing the Phishing Call to Our Mother & the Implications of This AI Tech
59:43 - How Much We Lost Not Investing in NVIDIA
1:01:29 - AI Bros Give Investment Advice
SOURCES:
======
https://ai.meta.com/blog/code-llama-large-language-model-coding/
https://www.theinformation.com/articles/metas-next-ai-attack-on-openai-free-code-generating-software
https://twitter.com/emollick/status/1694793231727210579?s=46&t=uXHUN4Glah4CaV-g2czc6Q
https://minimaxir.com/2023/08/stable-diffusion-xl-wrong/
https://twitter.com/abacaj/status/1679996952560246786/photo/1
https://openai.com/blog/gpt-3-5-turbo-fine-tuning-and-api-updates
https://arstechnica.com/ai/2023/08/how-chatgpt-turned-generative-ai-into-an-anything-tool/
https://elevenlabs.io/blog/multilingualv2/
https://www.businessinsider.com/nvidia-technology-spending-wave-build-out-google-meta-oracle-gpu-2023-8
PAPERS:
======
https://arxiv.org/pdf/2212.09689.pdf
https://arxiv.org/pdf/2308.08998.pdf