# Welcome to The Daily AI Briefing, here are today's headlines!

In today's rapidly evolving AI landscape, we're tracking major funding news, breakthrough research, and important product updates. OpenAI is making history with a potential $40 billion funding round, Anthropic has revealed fascinating insights into Claude's internal workings, and Qwen has launched an impressive new visual reasoning model. Plus, we have updates on new AI tools, OpenAI's GPT-4o developments, and more industry movements that matter.

## OpenAI Nears Historic $40 Billion Funding Round

OpenAI is reportedly finalizing a massive $40 billion funding round led by SoftBank, which would be the largest private funding round in history and nearly double the ChatGPT maker's valuation to $300 billion. Under the deal structure, SoftBank would invest an initial $7.5 billion, followed by another $22.5 billion later this year, with other investors including Magnetar Capital, Coatue, and Founders Fund joining the round.

Despite reportedly losing up to $5 billion on $3.7 billion of revenue in 2024, OpenAI has ambitious growth projections. The company expects to more than triple its revenue to $12.7 billion in 2025 and to become cash-flow positive by 2029, with over $125 billion in projected revenue. The losses are primarily attributed to AI infrastructure and training costs, exactly what this new funding is meant to address. Part of the investment will also support OpenAI's commitment to Stargate, the AI infrastructure joint venture with SoftBank and Oracle announced in January, which plans to invest up to $500 billion.

## Anthropic Reveals How Claude "Thinks"

In a fascinating breakthrough for AI transparency, Anthropic has released two research papers that reveal how its AI assistant Claude processes information internally. The researchers developed what they call an "AI microscope" that exposes internal "circuits" in the model, showing how Claude transforms input into output during key tasks.

Among the discoveries: Claude uses a shared "language of thought" across languages, with common conceptual processing for English, French, and Chinese. When writing poetry, the model plans several words ahead, identifying rhyming options before constructing lines that lead to those planned words. The team also found a default mechanism that suppresses speculation unless it is overridden by strong confidence in an answer, which helps explain how the model avoids hallucinating.

These insights not only deepen our understanding of Claude's capabilities, such as multilingual reasoning and advance planning, but also offer a window into making AI systems more transparent and interpretable.

## Qwen Releases QVQ-Max Visual Reasoning Model

Alibaba's Qwen team has released QVQ-Max, an advanced visual reasoning model that goes well beyond basic image recognition to analyze and reason about visual information across images and videos. Building on the earlier QVQ-72B-Preview, the new model expands capabilities across mathematical problem-solving, code generation, and creative tasks.

What makes QVQ-Max particularly interesting is its "thinking" mechanism, whose length can be adjusted to improve accuracy, showing scalable gains as thinking time increases. The model demonstrates complex visual capabilities such as analyzing blueprints, solving geometry problems, and giving feedback on user-submitted sketches. This represents a significant step toward more sophisticated visual AI that can understand and reason about the world more like humans do.

Looking ahead, Qwen has shared plans to create a complete visual agent capable of operating devices and playing games, potentially opening new frontiers for AI-human interaction through visual interfaces.
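To make the "visual reasoning" idea concrete, here is a minimal sketch of how a model like this is typically queried, assuming QVQ-Max is exposed through an OpenAI-compatible chat completions endpoint. The base URL, the model id `qvq-max`, the credential variable, and the image URL are illustrative assumptions, not details confirmed by the release.

```python
import os

from openai import OpenAI  # standard OpenAI Python SDK

# Assumption: the model is reachable through an OpenAI-compatible endpoint.
# The base URL and model id below are illustrative, not confirmed details.
client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],  # hypothetical credential variable
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

# Send one image plus a question; the model reasons over the picture
# before answering, e.g. working through a geometry problem step by step.
response = client.chat.completions.create(
    model="qvq-max",  # assumed model id
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/geometry-problem.png"},
                },
                {
                    "type": "text",
                    "text": "Find the missing angle in this figure and explain each step.",
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```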
## Important AI Tool Updates and Industry Movements

The AI tools landscape continues to evolve rapidly. Kilo released Code for VS Code, an AI agent extension that generates code, automates tasks, and provides suggestions. Ideogram launched version 3.0 of its image generation model.