Welcome to The Daily AI Briefing — here are today's headlines! Today we're covering NVIDIA and Stanford's breakthrough in AI cartoon generation, Amazon's new voice and video models, a practical tutorial for creating YouTube thumbnails with AI, and former OpenAI talent joining Mira Murati's Thinking Machines Lab. These developments showcase how AI capabilities continue to expand across video, voice, practical applications, and talent migration within the industry. Let's dive into the details.

First up, NVIDIA and Stanford researchers have unveiled "Test-Time Training," a technique that enables much longer video generation than previously possible. This breakthrough allows for full minute-long animations with remarkable consistency across scenes — something that has been a significant challenge in AI video generation. The system works by using neural networks as memory, allowing models to remember and maintain consistency throughout longer sequences. Demonstrated with Tom and Jerry cartoons, the technology produced impressive multi-scene stories with dynamic motion and coherent character interactions. What makes this particularly interesting is that it modifies existing video models rather than building entirely new ones, adding specialized TTT layers that extend their capabilities well beyond their original design limits. For content creators, filmmakers, and animators, this could eventually unlock the ability to generate longer, more coherent visual stories without having to manually stitch together hundreds of smaller generations — potentially transforming how visual content is created.

Moving on to Amazon's latest AI advancements, the company has launched Nova Sonic, a new voice model for human-like interactions, alongside an upgraded Nova Reel 1.1 video model. Nova Sonic processes voice input and generates natural speech with an impressively low latency of just 1.09 seconds, reportedly outperforming OpenAI's voice models.
The system achieved a 4.2% word error rate across multiple languages and showed 46.7% better accuracy than GPT-4o in noisy, multi-speaker environments — critical for real-world applications. Meanwhile, Nova Reel 1.1 extends video generations to a full 2 minutes in both automated and manual modes, giving users the flexibility to craft content shot-by-shot or from a single prompt. Both models are available through Amazon Bedrock, with Nova Sonic priced approximately 80% lower than comparable OpenAI options. This aggressive move into voice and video, combined with the Nova Act agentic browser tool and Alexa+ AI features, shows Amazon making a serious play in the generative AI space.

For content creators looking for practical AI tools, ChatGPT's native image generation can now create custom YouTube thumbnails with minimal effort. The process is straightforward: upload a reference image of yourself or your main subject to ChatGPT, then write a detailed prompt describing exactly what you want in your thumbnail. For style consistency, you can upload both a reference thumbnail you like and your subject image, then ask the AI to maintain the style while swapping elements. Results can be refined with follow-up prompts or by using the edit feature to highlight areas needing changes. For maximum creative control, uploading a rough sketch showing your desired layout, along with reference images, gives the AI clear direction. You can even use image expander tools like Canva or Adobe's Generative Fill to adjust your thumbnails to perfect YouTube dimensions. This practical application demonstrates how generative AI is becoming increasingly accessible for everyday creative tasks.

In industry news, Thinking Machines Lab, the AI startup founded by former OpenAI CTO Mira Murati, continues to attract top talent from OpenAI. The company just added ex-OpenAI Chief Research Officer Bob McGrew and GPT architect Alec Radford to its advisory board, bringing the number of OpenAI alumni on its roster to nearly half.
An impressive 19 of the 38 listed 'Founding Team' members are OpenAI alumni.