Sweden's 100 most popular podcasts

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers, and tech-savvy business and IT leaders. The show is hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science, and more.

Subscribe

iTunes / Overcast / RSS

Website

twimlai.com

Episodes

V-JEPA, AI Reasoning from a Non-Generative Architecture with Mido Assran - #677

Today we're joined by Mido Assran, a research scientist at Meta's Fundamental AI Research (FAIR). In this conversation, we discuss V-JEPA, a new model being billed as "the next step in Yann LeCun's vision" for true artificial reasoning. V-JEPA, the video version of Meta's Joint Embedding Predictive Architecture, aims to bridge the gap between human and machine intelligence by training models to learn abstract concepts in a more efficient predictive manner than generative models. V-JEPA uses a novel self-supervised training approach that allows it to learn from unlabeled video data without being distracted by pixel-level detail. Mido walks us through the process of developing the architecture and explains why it has the potential to revolutionize AI. The complete show notes for this episode can be found at twimlai.com/go/677.
2024-03-25
Link to episode

Video as a Universal Interface for AI Reasoning with Sherry Yang - #676

Today we're joined by Sherry Yang, senior research scientist at Google DeepMind and a PhD student at UC Berkeley. In this interview, we discuss her new paper, "Video as the New Language for Real-World Decision Making," which explores how generative video models can play a role similar to language models as a way to solve tasks in the real world. Sherry draws the analogy between natural language as a unified representation of information and text prediction as a common task interface, and demonstrates how video as a medium and generative video as a task exhibit similar properties. This formulation enables video generation models to play a variety of real-world roles as planners, agents, compute engines, and environment simulators. Finally, we explore UniSim, an interactive demo of Sherry's work and a preview of her vision for interacting with AI-generated environments. The complete show notes for this episode can be found at twimlai.com/go/676.
2024-03-18
Link to episode

Assessing the Risks of Open AI Models with Sayash Kapoor - #675

Today we're joined by Sayash Kapoor, a Ph.D. student in the Department of Computer Science at Princeton University. Sayash walks us through his paper: "On the Societal Impact of Open Foundation Models." We dig into the controversy around AI safety, the risks and benefits of releasing open model weights, and how we can establish common ground for assessing the threats posed by AI. We discuss the application of the framework presented in the paper to specific risks, such as the biosecurity risk of open LLMs, as well as the growing problem of "Non-Consensual Intimate Imagery" using open diffusion models. The complete show notes for this episode can be found at twimlai.com/go/675.
2024-03-11
Link to episode

OLMo: Everything You Need to Train an Open Source LLM with Akshita Bhagia - #674

Today we're joined by Akshita Bhagia, a senior research engineer at the Allen Institute for AI. Akshita joins us to discuss OLMo, a new open source language model with 7 billion and 1 billion parameter variants, but with a key difference compared to similar models offered by Meta, Mistral, and others: AI2 has also published the dataset and key tools used to train the model. In our chat with Akshita, we dig into the OLMo models and the various projects falling under the OLMo umbrella, including Dolma, an open three-trillion-token corpus for language model pretraining, and Paloma, a benchmark and tooling for evaluating language model performance across a variety of domains. The complete show notes for this episode can be found at twimlai.com/go/674.
2024-03-04
Link to episode

Training Data Locality and Chain-of-Thought Reasoning in LLMs with Ben Prystawski - #673

Today we're joined by Ben Prystawski, a PhD student in the Department of Psychology at Stanford University working at the intersection of cognitive science and machine learning. Our conversation centers on Ben's recent paper, "Why think step by step? Reasoning emerges from the locality of experience," which he recently presented at NeurIPS 2023. In this conversation, we start out exploring basic questions about LLM reasoning, including whether it exists, how we can define it, and how techniques like chain-of-thought reasoning appear to strengthen it. We then dig into the details of Ben's paper, which aims to understand why thinking step-by-step is effective and demonstrates that local structure is the key property of LLM training data that enables it. The complete show notes for this episode can be found at twimlai.com/go/673.
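The paper's core idea can be sketched with a toy chain of variables: if training data only ever shows adjacent pairs, a distant dependency cannot be estimated directly, but chaining local inferences step by step recovers it. The probability tables below are illustrative assumptions, not values from the paper.

```python
# Toy "reasoning from locality": a chain A -> B -> C where training data
# only ever pairs adjacent variables. Direct estimation of p(C|A) is
# impossible from such data, but marginalizing over the intermediate
# variable B ("thinking step by step") recovers it.
p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}  # p(B=b | A=a)
p_c_given_b = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.1, 1: 0.9}}  # p(C=c | B=b)

def p_c_given_a(c, a):
    # Chain rule: p(C=c | A=a) = sum_b p(C=c | B=b) * p(B=b | A=a)
    return sum(p_c_given_b[b][c] * p_b_given_a[a][b] for b in (0, 1))

print(p_c_given_a(1, 1))  # 0.2*0.3 + 0.8*0.9 = 0.78
```

The two local tables are all a learner with "local" experience could ever fit, yet the step-by-step sum answers a question no single training example contains.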
2024-02-26
Link to episode

Reasoning Over Complex Documents with DocLLM with Armineh Nourbakhsh - #672

Today we're joined by Armineh Nourbakhsh of JP Morgan AI Research to discuss the development and capabilities of DocLLM, a layout-aware large language model for multimodal document understanding. Armineh provides a historical overview of the challenges of document AI and an introduction to the DocLLM model. Armineh explains how this model, distinct from both traditional LLMs and document AI models, incorporates both textual semantics and spatial layout in processing enterprise documents like reports and complex contracts. We dig into her team's approach to training DocLLM, their choice of a generative model as opposed to an encoder-based approach, the datasets they used to build the model, their approach to incorporating layout information, and the various ways they evaluated the model's performance. The complete show notes for this episode can be found at twimlai.com/go/672.
2024-02-19
Link to episode

Are Emergent Behaviors in LLMs an Illusion? with Sanmi Koyejo - #671

Today we're joined by Sanmi Koyejo, assistant professor at Stanford University, to continue our NeurIPS 2023 series. In our conversation, Sanmi discusses his two recent award-winning papers. First, we dive into his paper, "Are Emergent Abilities of Large Language Models a Mirage?". We discuss the different ways LLMs are evaluated and the excitement surrounding their "emergent abilities," such as the ability to perform arithmetic. Sanmi describes how evaluating model performance using nonlinear metrics can lead to the illusion that the model is rapidly gaining new capabilities, whereas linear metrics show smooth improvement as expected, casting doubt on the significance of emergence. We continue on to his next paper, "DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models," discussing the methodology it describes for evaluating concerns such as the toxicity, privacy, fairness, and robustness of LLMs. The complete show notes for this episode can be found at twimlai.com/go/671.
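The "mirage" argument can be reproduced in a few lines: hold per-token accuracy to a smooth improvement with scale and watch a nonlinear, all-or-nothing metric produce an apparent emergent jump. The scales and accuracy values below are illustrative assumptions, not data from the paper.

```python
# Smooth per-token improvement vs. an apparently "emergent" jump under a
# nonlinear metric: exact match requires every token of the answer to be
# correct, so it compounds per-token accuracy multiplicatively.
scales = [1, 2, 4, 8, 16, 32]                       # relative model scale
per_token_acc = [0.50, 0.60, 0.70, 0.80, 0.90, 0.97]  # smooth improvement

n_tokens = 10  # e.g. a 10-digit arithmetic answer
exact_match = [p ** n_tokens for p in per_token_acc]  # nonlinear metric

for s, p, em in zip(scales, per_token_acc, exact_match):
    print(f"scale {s:>2}: per-token {p:.2f}  exact-match {em:.4f}")
```

The underlying capability improves by the same small step at every scale, yet exact match stays near zero until the largest models and then shoots up, which is exactly the shape usually cited as emergence.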
2024-02-12
Link to episode

AI Trends 2024: Reinforcement Learning in the Age of LLMs with Kamyar Azizzadenesheli - #670

Today we're joined by Kamyar Azizzadenesheli, a staff researcher at Nvidia, to continue our AI Trends 2024 series. In our conversation, Kamyar updates us on the latest developments in reinforcement learning (RL), and how the RL community is taking advantage of the abstract reasoning abilities of large language models (LLMs). Kamyar shares his insights on how LLMs are pushing RL performance forward in a variety of applications, such as ALOHA, a robot that can learn to fold clothes, and Voyager, an RL agent that uses GPT-4 to outperform prior systems at playing Minecraft. We also explore the progress being made in assessing and addressing the risks of RL-based decision-making in domains such as finance, healthcare, and agriculture. Finally, we discuss the future of deep reinforcement learning, Kamyar's top predictions for the field, and how greater compute capabilities will be critical in achieving general intelligence. The complete show notes for this episode can be found at twimlai.com/go/670.
2024-02-05
Link to episode

Building and Deploying Real-World RAG Applications with Ram Sriharsha - #669

Today we're joined by Ram Sriharsha, VP of engineering at Pinecone. In our conversation, we dive into the topic of vector databases and retrieval augmented generation (RAG). We explore the trade-offs between relying solely on LLMs for retrieval tasks versus combining retrieval in vector databases and LLMs, the advantages and complexities of RAG with vector databases, the key considerations for building and deploying real-world RAG-based applications, and an in-depth look at Pinecone's new serverless offering. Currently in public preview, Pinecone Serverless is a vector database that enables on-demand data loading, flexible scaling, and cost-effective query processing. Ram discusses how the serverless paradigm impacts the vector database's core architecture, key features, and other considerations. Lastly, Ram shares his perspective on the future of vector databases in helping enterprises deliver RAG systems. The complete show notes for this episode can be found at twimlai.com/go/669.
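The basic RAG loop discussed here can be sketched end to end. The `embed` function below is a toy bag-of-words stand-in and the document list is invented for illustration; a production system would swap in a real embedding model and a vector database such as Pinecone.

```python
# Minimal RAG sketch: embed documents, retrieve the nearest one for a
# query by cosine similarity, and prepend it to the prompt.
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" (illustrative stand-in only).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "Pinecone Serverless separates storage from compute",
    "RAG combines retrieval with generation",
    "Vector databases index embeddings for similarity search",
]

def retrieve(query, k=1):
    ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
    return ranked[:k]

context = retrieve("how does retrieval augmented generation work")[0]
prompt = f"Context: {context}\nQuestion: how does retrieval augmented generation work"
print(context)
```

The trade-off the episode discusses is visible even here: the LLM never has to memorize the corpus, but answer quality now depends on how well retrieval surfaces the right context.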
2024-01-29
Link to episode

Nightshade: Data Poisoning to Fight Generative AI with Ben Zhao - #668

Today we're joined by Ben Zhao, a Neubauer professor of computer science at the University of Chicago. In our conversation, we explore his research at the intersection of security and generative AI. We focus on Ben's recent Fawkes, Glaze, and Nightshade projects, which use "poisoning" approaches to provide users with security and protection against AI encroachments. The first tool we discuss, Fawkes, imperceptibly "cloaks" images in such a way that models perceive them as highly distorted, effectively shielding individuals from recognition by facial recognition models. We then dig into Glaze, a tool that employs machine learning algorithms to compute subtle alterations that are indiscernible to human eyes but adept at tricking the models into perceiving a significant shift in art style, giving artists a unique defense against style mimicry. Lastly, we cover Nightshade, a strategic defense tool for artists akin to a "poison pill," which allows artists to apply imperceptible changes to their images that effectively "break" generative AI models trained on them. The complete show notes for this episode can be found at twimlai.com/go/668.
2024-01-22
Link to episode

Learning Transformer Programs with Dan Friedman - #667

Today, we continue our NeurIPS series with Dan Friedman, a PhD student in the Princeton NLP group. In our conversation, we explore his research on mechanistic interpretability for transformer models, specifically his paper, Learning Transformer Programs. The LTP paper proposes modifications to the transformer architecture which allow transformer models to be easily converted into human-readable programs, making them inherently interpretable. In our conversation, we compare the approach proposed by this research with prior approaches to understanding the models and their shortcomings. We also dig into the approach's limitations and constraints around functionality and scale. The complete show notes for this episode can be found at twimlai.com/go/667.
2024-01-15
Link to episode

AI Trends 2024: Machine Learning & Deep Learning with Thomas Dietterich - #666

Today we continue our AI Trends 2024 series with a conversation with Thomas Dietterich, distinguished professor emeritus at Oregon State University. As you might expect, Large Language Models figured prominently in our conversation, and we covered a vast array of papers and use cases exploring current research into topics such as monolithic vs. modular architectures, hallucinations, the application of uncertainty quantification (UQ), and using RAG as a sort of memory module for LLMs. Lastly, don't miss Tom's predictions on what he foresees happening this year as well as his words of encouragement for those new to the field. The complete show notes for this episode can be found at twimlai.com/go/666.
2024-01-08
Link to episode

AI Trends 2024: Computer Vision with Naila Murray - #665

Today we kick off our AI Trends 2024 series with a conversation with Naila Murray, director of AI research at Meta. In our conversation with Naila, we dig into the latest trends and developments in the realm of computer vision. We explore advancements in the areas of controllable generation, visual programming, 3D Gaussian splatting, and multimodal models, specifically vision plus LLMs. We discuss tools and open source projects, including Segment Anything, a tool for versatile zero-shot image segmentation using simple text prompts, clicks, and bounding boxes; ControlNet, which adds conditional control to stable diffusion models; and DINOv2, a visual encoding model enabling object recognition, segmentation, and depth estimation, even in data-scarce scenarios. Finally, Naila shares her view on the most exciting opportunities in the field, as well as her predictions for the coming years. The complete show notes for this episode can be found at twimlai.com/go/665.
2024-01-02
Link to episode

Are Vector DBs the Future Data Platform for AI? with Ed Anuff - #664

Today we're joined by Ed Anuff, chief product officer at DataStax. In our conversation, we discuss Ed's insights on RAG, vector databases, embedding models, and more. We dig into the underpinnings of modern vector databases (like HNSW and DiskANN) that allow them to efficiently handle massive and unstructured data sets, and discuss how they help users serve up relevant results for RAG, AI assistants, and other use cases. We also discuss embedding models and their role in vector comparisons and database retrieval as well as the potential for GPU usage to enhance vector database performance. The complete show notes for this episode can be found at twimlai.com/go/664.
2023-12-28
Link to episode

Quantizing Transformers by Helping Attention Heads Do Nothing with Markus Nagel - #663

Today we're joined by Markus Nagel, research scientist at Qualcomm AI Research, who helps us kick off our coverage of NeurIPS 2023. In our conversation with Markus, we cover his accepted papers at the conference, along with other work presented by Qualcomm AI Research scientists. Markus's first paper, Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing, focuses on tackling the activation quantization issues introduced by the attention mechanism and how to solve them. We also discuss Pruning vs Quantization: Which is Better?, which compares the effectiveness of these two methods in achieving model weight compression. Additional papers discussed cover topics like using scalarization in multitask and multidomain learning to improve training and inference, using diffusion models for a sequence of state models and actions, applying geometric algebra with equivariance to transformers, and applying deductive verification to chain-of-thought reasoning performed by LLMs. The complete show notes for this episode can be found at twimlai.com/go/663.
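The outlier problem the first paper targets is easy to demonstrate: symmetric 8-bit quantization scales its grid to the largest absolute value, so a single attention-driven outlier makes every small activation coarse. A minimal sketch, with illustrative numbers rather than real activations:

```python
# Symmetric int8 quantization: one outlier stretches the scale so that
# all the typical small activations collapse onto the same grid point.
def quantize_int8(xs):
    scale = max(abs(x) for x in xs) / 127.0   # grid step set by the max
    q = [round(x / scale) for x in xs]        # integer codes in [-127, 127]
    return [v * scale for v in q]             # dequantized values

activations = [0.01, -0.02, 0.03, 0.015]      # typical small activations
with_outlier = activations + [60.0]           # one attention outlier

err_small = max(abs(a - b) for a, b in zip(activations, quantize_int8(activations)))
err_big = max(abs(a - b) for a, b in zip(activations, quantize_int8(with_outlier)))
print(err_small, err_big)  # the outlier makes the small values far less precise
```

With the outlier present, every small activation rounds to zero, which is why removing the outliers (rather than just widening the grid) is the paper's angle of attack.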
2023-12-26
Link to episode

Responsible AI in the Generative Era with Michael Kearns - #662

Today we're joined by Michael Kearns, professor in the Department of Computer and Information Science at the University of Pennsylvania and an Amazon scholar. In our conversation with Michael, we discuss the new challenges to responsible AI brought about by the generative AI era. We explore Michael's learnings and insights from the intersection of his real-world experience at AWS and his work in academia. We cover a diverse range of topics under this banner, including service card metrics, privacy, hallucinations, RLHF, and LLM evaluation benchmarks. We also touch on Clean Rooms ML, a secured environment that balances accessibility to private datasets through differential privacy techniques, offering a new approach for secure data handling in machine learning. The complete show notes for this episode can be found at twimlai.com/go/662.
2023-12-22
Link to episode

Edutainment for AI and AWS PartyRock with Mike Miller - #661

Today we're joined by Mike Miller, director of product at AWS responsible for the company's "edutainment" products. In our conversation with Mike, we explore AWS PartyRock, a no-code generative AI app builder that allows users to easily create fun and shareable AI applications by selecting a model, chaining prompts together, and linking different text, image, and chatbot widgets together. Additionally, we discuss some of the previous tools Mike's team has delivered at the intersection of developer education and entertainment, including DeepLens, a computer vision hardware device, DeepRacer, a programmable vehicle that uses reinforcement learning to navigate a track, and lastly, DeepComposer, a generative AI model that transforms musical inputs and creates accompanying compositions. The complete show notes for this episode can be found at twimlai.com/go/661.
2023-12-18
Link to episode

Data, Systems and ML for Visual Understanding with Cody Coleman - #660

Today we're joined by Cody Coleman, co-founder and CEO of Coactive AI. In our conversation with Cody, we discuss how Coactive has leveraged modern data, systems, and machine learning techniques to deliver its multimodal asset platform and visual search tools. Cody shares his expertise in the area of data-centric AI, and we dig into techniques like active learning and core set selection, and how they can drive greater efficiency throughout the machine learning lifecycle. We explore the various ways Coactive uses multimodal embeddings to enable their core visual search experience, and we cover the infrastructure optimizations they've implemented in order to scale their systems. We conclude with Cody's advice for entrepreneurs and engineers building companies around generative AI technologies. The complete show notes for this episode can be found at twimlai.com/go/660.
2023-12-14
Link to episode

Patterns and Middleware for LLM Applications with Kyle Roche - #659

Today we're joined by Kyle Roche, founder and CEO of Griptape, to discuss patterns and middleware for LLM applications. We dive into the emerging patterns for developing LLM applications, such as off-prompt data, which allows data retrieval without compromising the chain of thought within language models, and pipelines, sequences of tasks given to LLMs that can involve a different model for each task or step. We also explore Griptape, an open-source, Python-based middleware stack that aims to securely connect LLM applications to an organization's internal and external data systems. We discuss the abstractions it offers, including drivers, memory management, rule sets, DAG-based workflows, and a prompt stack. Additionally, we touch on common customer concerns such as privacy, retraining, and sovereignty issues, and several use cases that leverage role-based retrieval methods to optimize human augmentation tasks. The complete show notes for this episode can be found at twimlai.com/go/659.
2023-12-12
Link to episode

AI Access and Inclusivity as a Technical Challenge with Prem Natarajan - #658

Today we're joined by Prem Natarajan, chief scientist and head of enterprise AI at Capital One. In our conversation, we discuss AI access and inclusivity as technical challenges and explore some of Prem and his team's multidisciplinary approaches to tackling these complexities. We dive into the issues of bias, dealing with class imbalances, and the integration of various research initiatives to achieve additive results. Prem also shares his team's work on foundation models for financial data curation, highlighting the importance of data quality and the use of federated learning, and emphasizing the impact these factors have on the model performance and reliability in critical applications like fraud detection. Lastly, Prem shares his overall approach to tackling AI research in the context of a banking enterprise, including prioritizing mission-inspired research aiming to deliver tangible benefits to customers and the broader community, investing in diverse talent and the best infrastructure, and forging strategic partnerships with a variety of academic labs. The complete show notes for this episode can be found at twimlai.com/go/658.
2023-12-04
Link to episode

Building LLM-Based Applications with Azure OpenAI with Jay Emery - #657

Today we're joined by Jay Emery, director of technical sales & architecture at Microsoft Azure. In our conversation with Jay, we discuss the challenges faced by organizations when building LLM-based applications, and we explore some of the techniques they are using to overcome them. We dive into the concerns around security, data privacy, cost management, and performance as well as the ability and effectiveness of prompting to achieve the desired results versus fine-tuning, and when each approach should be applied. We cover methods such as prompt tuning and prompt chaining, prompt variance, fine-tuning, and RAG to enhance LLM output along with ways to speed up inference performance such as choosing the right model, parallelization, and provisioned throughput units (PTUs). In addition to that, Jay also shares several intriguing use cases describing how businesses use tools like Azure Machine Learning prompt flow and Azure ML AI Studio to tailor LLMs to their unique needs and processes. The complete show notes for this episode can be found at twimlai.com/go/657.
2023-11-28
Link to episode

Visual Generative AI Ecosystem Challenges with Richard Zhang - #656

Today we're joined by Richard Zhang, senior research scientist at Adobe Research. In our conversation with Richard, we explore the research challenges that arise when viewing visual generative AI from an ecosystem perspective, considering the disparate needs of creators, consumers, and contributors. We start with his work on perceptual metrics and the LPIPS paper, which allow us to better align human perception and computer vision and which remain in use in contemporary generative AI applications such as stable diffusion, GANs, and latent diffusion. We look at his work creating detection tools for fake visual content, highlighting the importance of generalizing these detection methods to new, unseen models. Lastly, we dig into his work on data attribution and concept ablation, which aims to address the challenging open problem of allowing artists and others to manage their contributions to generative AI training data sets. The complete show notes for this episode can be found at twimlai.com/go/656.
2023-11-20
Link to episode

Deploying Edge and Embedded AI Systems with Heather Gorr - #655

Today we're joined by Heather Gorr, principal MATLAB product marketing manager at MathWorks. In our conversation with Heather, we discuss the deployment of AI models to hardware devices and embedded AI systems. We explore factors to consider during data preparation, model development, and ultimately deployment, to ensure a successful project. Factors such as device constraints and latency requirements which dictate the amount and frequency of data flowing onto the device are discussed, as are modeling needs such as explainability, robustness and quantization; the use of simulation throughout the modeling process; the need to apply robust verification and validation methodologies to ensure safety and reliability; and the need to adapt and apply MLOps techniques for speed and consistency. Heather also shares noteworthy anecdotes about embedded AI deployments in industries including automotive and oil & gas. The complete show notes for this episode can be found at twimlai.com/go/655.
2023-11-13
Link to episode

AI Sentience, Agency and Catastrophic Risk with Yoshua Bengio - #654

Today we're joined by Yoshua Bengio, professor at Université de Montréal. In our conversation with Yoshua, we discuss AI safety and the potentially catastrophic risks of its misuse. Yoshua highlights various risks and the dangers of AI being used to manipulate people, spread disinformation, cause harm, and further concentrate power in society. We dive deep into the risks associated with achieving human-level competence in enough areas with AI, and tackle the challenges of defining and understanding concepts like agency and sentience. Additionally, our conversation touches on solutions to AI safety, such as the need for robust safety guardrails, investments in national security protections and countermeasures, bans on systems with uncertain safety, and the development of governance-driven AI systems. The complete show notes for this episode can be found at twimlai.com/go/654.
2023-11-06
Link to episode

Delivering AI Systems in Highly Regulated Environments with Miriam Friedel - #653

Today we're joined by Miriam Friedel, senior director of ML engineering at Capital One. In our conversation with Miriam, we discuss some of the challenges faced when delivering machine learning tools and systems in highly regulated enterprise environments, and some of the practices her teams have adopted to help them operate with greater speed and agility. We also explore how to create a culture of collaboration, the value of standardized tooling and processes, leveraging open-source, and incentivizing model reuse. Miriam also shares her thoughts on building a "unicorn" team, and what this means for the team she's built at Capital One, as well as her take on build vs. buy decisions for MLOps, and the future of MLOps and enterprise AI more broadly. Throughout, Miriam shares examples of these ideas at work in some of the tools their team has built, such as Rubicon, an open source experiment management tool, and Kubeflow pipeline components that enable Capital One data scientists to efficiently leverage and scale models. The complete show notes for this episode can be found at twimlai.com/go/653.
2023-10-30
Link to episode

Mental Models for Advanced ChatGPT Prompting with Riley Goodside - #652

Today we're joined by Riley Goodside, staff prompt engineer at Scale AI. In our conversation with Riley, we explore LLM capabilities and limitations, prompt engineering, and the mental models required to apply advanced prompting techniques. We dive deep into understanding LLM behavior, discussing the mechanism of autoregressive inference, comparing k-shot and zero-shot prompting, and dissecting the impact of RLHF. We also discuss the idea that prompting is a scaffolding structure that leverages the model context, resulting in achieving the desired model behavior and response rather than focusing solely on writing ability. The complete show notes for this episode can be found at twimlai.com/go/652.
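The k-shot vs. zero-shot distinction discussed here comes down to prompt construction: a k-shot prompt prepends k worked examples so the autoregressive model continues the pattern, while a zero-shot prompt relies on an instruction alone. A minimal sketch with an invented translation task:

```python
# Assembling k-shot vs. zero-shot prompts for an autoregressive model.
examples = [("cheese", "fromage"), ("dog", "chien")]  # invented demonstrations

def k_shot_prompt(examples, query):
    # Each demonstration is formatted exactly like the final query,
    # so the model's most likely continuation completes the pattern.
    shots = "\n".join(f"English: {e}\nFrench: {f}" for e, f in examples)
    return f"{shots}\nEnglish: {query}\nFrench:"

def zero_shot_prompt(query):
    return f"Translate English to French.\nEnglish: {query}\nFrench:"

print(k_shot_prompt(examples, "cat"))
```

This is the "scaffolding" view in miniature: the prompt does not teach the model anything new, it just shapes the context so the desired behavior becomes the most probable continuation.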
2023-10-23
Link to episode

Multilingual LLMs and the Values Divide in AI with Sara Hooker - #651

Today we're joined by Sara Hooker, director at Cohere and head of Cohere For AI, Cohere's research lab. In our conversation with Sara, we explore some of the challenges with multilingual models like poor data quality and tokenization, and how they rely on data augmentation and preference training to address these bottlenecks. We also discuss the disadvantages and the motivating factors behind the Mixture of Experts technique, and the importance of common language between ML researchers and hardware architects to address the pain points in frameworks and create a better cohesion between the distinct communities. Sara also highlights the impact and the emotional connection that language models have created in society, the benefits and the current safety concerns of universal models, and the significance of having grounded conversations to characterize and mitigate the risk and development of AI models. Along the way, we also dive deep into Cohere and Cohere for AI, along with their Aya project, an open science project that aims to build a state-of-the-art multilingual generative language model as well as some of their recent research papers. The complete show notes for this episode can be found at twimlai.com/go/651.
2023-10-16
Link to episode

Scaling Multi-Modal Generative AI with Luke Zettlemoyer - #650

Today we're joined by Luke Zettlemoyer, professor at the University of Washington and a research manager at Meta. In our conversation with Luke, we cover multimodal generative AI, the effect of data on models, and the significance of open source and open science. We explore the grounding problem, the need for visual grounding and embodiment in text-based models, the advantages of discretization tokenization in image generation, and his paper Scaling Laws for Generative Mixed-Modal Language Models, which focuses on simultaneously training LLMs on various modalities. Additionally, we cover his papers on Self-Alignment with Instruction Backtranslation, and LIMA: Less Is More for Alignment. The complete show notes for this episode can be found at twimlai.com/go/650.
2023-10-09
Link to episode

Pushing Back on AI Hype with Alex Hanna - #649

Today we're joined by Alex Hanna, director of research at the Distributed AI Research Institute (DAIR). In our conversation with Alex, we discuss the topic of AI hype and the importance of tackling the issues and impacts it has on society. Alex highlights how the hype cycle started, concerning use cases, incentives driving people towards the rapid commercialization of AI tools, and the need for robust evaluation tools and frameworks to assess and mitigate the risks of these technologies. We also talk about DAIR and how they've crafted their research agenda. We discuss current research projects like DAIR Fellow Asmelash Teka Hadgu's research supporting machine translation and speech recognition tools for the low-resource Amharic and Tigrinya languages of Ethiopia and Eritrea, in partnership with his startup Lesan.AI. We also explore the "Do Data Sets Have Politics" paper, which focuses on coding various variables and conducting a qualitative analysis of computer vision data sets to uncover the inherent politics present in data sets and the challenges in data set creation. The complete show notes for this episode can be found at twimlai.com/go/649.
2023-10-02
Link to episode

Personalization for Text-to-Image Generative AI with Nataniel Ruiz - #648

Today we're joined by Nataniel Ruiz, a research scientist at Google. In our conversation with Nataniel, we discuss his recent work around personalization for text-to-image AI models. Specifically, we dig into DreamBooth, an algorithm that enables "subject-driven generation," that is, the creation of personalized generative models using a small set of user-provided images of a subject. The personalized models can then be used to generate the subject in various contexts using a text prompt. Nataniel gives us a deep dive into the fine-tuning approach used in DreamBooth, the potential reasons behind the algorithm's effectiveness, the challenges of fine-tuning diffusion models in this way, such as language drift, and how the prior preservation loss technique avoids this setback, as well as the evaluation challenges and metrics used in DreamBooth. We also touch on his other recent papers, including SuTI, StyleDrop, HyperDreamBooth, and lastly, Platypus. The complete show notes for this episode can be found at twimlai.com/go/648.
2023-09-25
Länk till avsnitt

Ensuring LLM Safety for Production Applications with Shreya Rajpal - #647

Today we're joined by Shreya Rajpal, founder and CEO of Guardrails AI. In our conversation with Shreya, we discuss ensuring the safety and reliability of language models for production applications. We explore the risks and challenges associated with these models, including different types of hallucinations and other LLM failure modes. We also talk about the susceptibility of the popular retrieval augmented generation (RAG) technique to closed-domain hallucination, and how this challenge can be addressed. We cover the need for robust evaluation metrics and tooling for building with large language models. Lastly, we explore Guardrails, an open-source project that provides a catalog of validators that run on top of language models to enforce correctness and reliability efficiently. The complete show notes for this episode can be found at twimlai.com/go/647.
2023-09-18
Länk till avsnitt

What's Next in LLM Reasoning? with Roland Memisevic - #646

Today we're joined by Roland Memisevic, a senior director at Qualcomm AI Research. In our conversation with Roland, we discuss the significance of language in humanlike AI systems and the advantages and limitations of autoregressive models like Transformers in building them. We cover the current and future role of recurrence in LLM reasoning and the significance of improving grounding in AI, including the potential of developing a sense of self in agents. Along the way, we discuss Fitness Ally, a fitness coach trained on a visually grounded large language model, which has served as a platform for Roland's research into neural reasoning, as well as recent research that explores topics like visual grounding for large language models and state-augmented architectures for AI agents. The complete show notes for this episode can be found at twimlai.com/go/646.
2023-09-11
Länk till avsnitt

Is ChatGPT Getting Worse? with James Zou - #645

Today we're joined by James Zou, an assistant professor at Stanford University. In our conversation with James, we explore the differences in ChatGPT's behavior over the last few months. We discuss the issues that can arise from inconsistencies in generative AI models, how he tested ChatGPT's performance on various tasks, drawing comparisons between March 2023 and June 2023 for both the GPT-3.5 and GPT-4 versions, and the possible reasons behind the declining performance of these models. James also shares his thoughts on how surgical AI editing akin to CRISPR could potentially revolutionize LLM and AI systems, and how adding monitoring tools can help in tracking behavioral changes in these models. Finally, we discuss James' recent paper on pathology image analysis using Twitter data, in which he explores the challenges of obtaining large medical datasets and data collection, as well as detailing the model's architecture, training, and evaluation process. The complete show notes for this episode can be found at twimlai.com/go/645.
2023-09-04
Länk till avsnitt

Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn - #644

Today we're joined by Sophia Sanborn, a postdoctoral scholar at the University of California, Santa Barbara. In our conversation with Sophia, we explore the concept of universality between neural representations and deep neural networks, and how these principles of efficiency provide an ability to find consistent features across networks and tasks. We also discuss her recent paper on Bispectral Neural Networks, which focuses on the Fourier transform and its relation to group theory, the use of the bispectrum to achieve invariance in deep neural networks, the extension of geometric deep learning beyond CNNs to other domains, and the similarities in the fundamental structure of artificial and biological neural networks, where applying similar constraints leads to the convergence of their solutions. The complete show notes for this episode can be found at twimlai.com/go/644.
2023-08-28
Länk till avsnitt

Inverse Reinforcement Learning Without RL with Gokul Swamy - #643

Today we're joined by Gokul Swamy, a Ph.D. student at the Robotics Institute at Carnegie Mellon University. In the final conversation of our ICML 2023 series, we sat down with Gokul to discuss his accepted papers at the event, leading off with "Inverse Reinforcement Learning without Reinforcement Learning." In this paper, Gokul explores the challenges and benefits of inverse reinforcement learning, and the potential and advantages it holds for various applications. Next up, we explore the "Complementing a Policy with a Different Observation Space" paper, which applies causal inference techniques to accurately estimate sampling balance and make decisions based on limited observed features. Finally, we touch on "Learning Shared Safety Constraints from Multi-task Demonstrations," which centers on learning safety constraints from demonstrations using the inverse reinforcement learning approach. The complete show notes for this episode can be found at twimlai.com/go/643.
2023-08-21
Länk till avsnitt

Explainable AI for Biology and Medicine with Su-In Lee - #642

Today we're joined by Su-In Lee, a professor at the Paul G. Allen School of Computer Science and Engineering at the University of Washington. In our conversation, Su-In details her talk from the ICML 2023 Workshop on Computational Biology, which focuses on developing explainable AI techniques for the computational biology and clinical medicine fields. Su-In discusses the importance of explainable AI contributing to feature collaboration, the robustness of different explainability approaches, and the need for interdisciplinary collaboration between the computer science, biology, and medical fields. We also explore her recent paper on the use of drug combination therapy, challenges with handling biomedical data, and how her team aims to make meaningful contributions to the healthcare industry by aiding in cause identification and treatments for cancer and Alzheimer's disease. The complete show notes for this episode can be found at twimlai.com/go/642.
2023-08-14
Länk till avsnitt

Transformers On Large-Scale Graphs with Bayan Bruss - #641

Today we're joined by Bayan Bruss, Vice President of Applied ML Research at Capital One. In our conversation with Bayan, we cover a pair of papers his team presented at this year's ICML conference. We begin with the paper Interpretable Subspaces in Image Representations, where Bayan gives us a deep dive into the interpretability framework, embedding dimensions, contrastive approaches, and how their model can accelerate image representation in deep learning. We also explore GOAT: A Global Transformer on Large-scale Graphs, a scalable global graph transformer. We talk through the computation challenges, homophilic and heterophilic principles, model sparsity, and how their research proposes methodologies to get around the computational barrier when scaling to large-scale graph models. The complete show notes for this episode can be found at twimlai.com/go/641.
2023-08-07
Länk till avsnitt

The Enterprise LLM Landscape with Atul Deo - #640

Today we're joined by Atul Deo, General Manager of Amazon Bedrock. In our conversation with Atul, we discuss the process of training large language models in the enterprise, including the pain points of creating and training machine learning models, and the power of pre-trained models. We explore different approaches to how companies can leverage large language models, dealing with hallucination, and the transformative process of retrieval augmented generation (RAG). Finally, Atul gives us an inside look at Bedrock, a fully managed service that simplifies the deployment of generative AI-based apps at scale. The complete show notes for this episode can be found at twimlai.com/go/640.
2023-07-31
Länk till avsnitt

BloombergGPT - an LLM for Finance with David Rosenberg - #639

Today we're joined by David Rosenberg, head of the machine learning strategy team in the Office of the CTO at Bloomberg. In our conversation with David, we discuss the creation of BloombergGPT, a custom-built LLM focused on financial applications. We explore the model's architecture, validation process, benchmarks, and its distinction from other language models. David also discusses the evaluation process, performance comparisons, progress, and the future directions of the model. Finally, we discuss the ethical considerations that come with building these types of models, and how his team has approached dealing with these issues. The complete show notes for this episode can be found at twimlai.com/go/639.
2023-07-24
Länk till avsnitt

Are LLMs Good at Causal Reasoning? with Robert Osazuwa Ness - #638

Today we're joined by Robert Osazuwa Ness, a senior researcher at Microsoft Research, professor at Northeastern University, and founder of Altdeep.ai. In our conversation with Robert, we explore whether large language models, specifically GPT-3, 3.5, and 4, are good at causal reasoning. We discuss the benchmarks used to evaluate these models and their limitations in answering specific causal reasoning questions, while Robert highlights the need for access to weights, training data, and architecture to correctly answer these questions. The episode discusses the challenge of generalization in causal relationships and the importance of incorporating inductive biases, explores the models' ability to generalize beyond the provided benchmarks, and considers the importance of causal factors in decision-making processes. The complete show notes for this episode can be found at twimlai.com/go/638.
2023-07-17
Länk till avsnitt

Privacy vs Fairness in Computer Vision with Alice Xiang - #637

Today we're joined by Alice Xiang, Lead Research Scientist at Sony AI, and Global Head of AI Ethics at Sony Group Corporation. In our conversation with Alice, we discuss the ongoing debate between privacy and fairness in computer vision, diving into the impact of data privacy laws on the AI space while highlighting concerns about unauthorized use and lack of transparency in data usage. We explore the potential harm of inaccurate AI model outputs and the need for legal protection against biased AI products, and Alice suggests various solutions to address these challenges, such as working through third parties for data collection and establishing closer relationships with communities. Finally, we talk through the history of unethical data collection practices in CV, the emergence of generative AI technologies that exacerbate the problem, and the importance of operationalizing ethical data collection, including appropriate consent, representation, diversity, and compensation. We also touch on the need for interdisciplinary collaboration in AI ethics and the growing interest in AI regulation, including the EU AI Act and regulatory activities in the US. The complete show notes for this episode can be found at twimlai.com/go/637.
2023-07-10
Länk till avsnitt

Unifying Vision and Language Models with Mohit Bansal - #636

Today we're joined by Mohit Bansal, Parker Professor and Director of the MURGe-Lab at UNC Chapel Hill. In our conversation with Mohit, we explore the concept of unification in AI models, highlighting the advantages of shared knowledge and efficiency. He addresses the challenges of evaluation in generative AI, including biases and spurious correlations. Mohit introduces groundbreaking models such as UDOP and VL-T5, which achieved state-of-the-art results in various vision and language tasks while using fewer parameters. Finally, we discuss the importance of data efficiency, evaluating bias in models, and the future of multimodal models and explainability. The complete show notes for this episode can be found at twimlai.com/go/636.
2023-07-03
Länk till avsnitt

Data Augmentation and Optimized Architectures for Computer Vision with Fatih Porikli - #635

Today we kick off our coverage of the 2023 CVPR conference joined by Fatih Porikli, a Senior Director of Technology at Qualcomm. In our conversation with Fatih, we cover quite a bit of ground, touching on a total of 12 papers and demos, focusing on topics like data augmentation and optimized architectures for computer vision. We explore advances in optical flow estimation networks, cross-modal and cross-stage knowledge distillation for efficient 3D object detection, and zero-shot learning via language models for fine-grained labeling. We also discuss generative AI advancements and computer vision optimization for running large models on edge devices. Finally, we discuss objective functions, architecture design choices for neural networks, and efficiency and accuracy improvements in AI models via the techniques introduced in the papers.
2023-06-26
Länk till avsnitt

Mojo: A Supercharged Python for AI with Chris Lattner - #634

Today we're joined by Chris Lattner, Co-Founder and CEO of Modular. In our conversation with Chris, we discuss Mojo, a new programming language for AI developers. Mojo is unique in this space: it simplifies things by making the entire stack accessible and understandable to people who are not compiler engineers, and it offers Python programmers the ability to write high-performance code that can run on accelerators, making the field more accessible to more people and researchers. We discuss the relationship between the Modular Engine and Mojo, the challenge of packaging Python, particularly when incorporating C code, and how Mojo aims to solve these problems to make the AI stack more dependable. The complete show notes for this episode can be found at twimlai.com/go/634.
2023-06-19
Länk till avsnitt

Stable Diffusion and LLMs at the Edge with Jilei Hou - #633

Today we're joined by Jilei Hou, a VP of Engineering at Qualcomm Technologies. In our conversation with Jilei, we focus on the emergence of generative AI and how his team has worked toward providing these models for use on edge devices. We explore how distributing models onto devices can help amortize large models' costs while improving reliability and performance, and the challenges of running machine learning workloads on devices, including model size and inference latency. Finally, we explore how these emerging technologies fit into the existing AI Model Efficiency Toolkit (AIMET) framework. The complete show notes for this episode can be found at twimlai.com/go/633.
2023-06-12
Länk till avsnitt

Modeling Human Behavior with Generative Agents with Joon Sung Park - #632

Today we're joined by Joon Sung Park, a PhD student at Stanford University. Joon shares his passion for creating AI systems that can solve human problems and his work on the recent paper Generative Agents: Interactive Simulacra of Human Behavior, which showcases generative agents that exhibit believable human behavior. We discuss using empirical methods to study these systems and the conflicting papers on whether AI models have a worldview and common sense. Joon talks about the importance of context and environment in creating believable agent behavior and shares his team's work on scaling emergent community behaviors. He also dives into the importance of a long-term memory module in agents and the use of knowledge graphs in retrieving associative information. The goal, Joon explains, is to create systems that people can enjoy and that empower them, solving existing problems and challenges in the traditional HCI and AI fields.
2023-06-05
Länk till avsnitt

Towards Improved Transfer Learning with Hugo Larochelle - #631

Today we're joined by Hugo Larochelle, a research scientist at Google DeepMind. In our conversation with Hugo, we discuss his work on transfer learning, understanding the capabilities of deep learning models, and creating the Transactions on Machine Learning Research journal. We explore the use of large language models in NLP, prompting, and zero-shot learning. Hugo also shares insights from his research on neural knowledge mobilization for code completion and discusses the adaptive prompts used in their system. The complete show notes for this episode can be found at twimlai.com/go/631.
2023-05-29
Länk till avsnitt

Language Modeling With State Space Models with Dan Fu - #630

Today we're joined by Dan Fu, a PhD student at Stanford University. In our conversation with Dan, we discuss the limitations of state space models in language modeling and the search for alternative building blocks that can help increase context length without being computationally infeasible. Dan walks us through the H3 architecture and the FlashAttention technique, which can reduce the memory footprint of a model and make it feasible to fine-tune. We also explore his work on improving language models using synthetic languages, the issue of long sequence lengths affecting both training and inference in models, and the hope for finding something sub-quadratic that can perform language processing more effectively than the brute-force approach of attention. The complete show notes for this episode can be found at twimlai.com/go/630.
2023-05-22
Länk till avsnitt

Building Maps and Spatial Awareness in Blind AI Agents with Dhruv Batra - #629

Today we continue our coverage of ICLR 2023 joined by Dhruv Batra, an associate professor at Georgia Tech and research director of the Fundamental AI Research (FAIR) team at Meta. In our conversation, we discuss Dhruv's work on the paper Emergence of Maps in the Memories of Blind Navigation Agents, which won an Outstanding Paper Award at the event. We explore navigation with multilayer LSTMs and the question of whether embodiment is necessary for intelligence. We delve into the Embodiment Hypothesis, the progress being made in language models, and cautions on the responsible use of these models. We also discuss the history of AI and the importance of using the right datasets in training. The conversation explores the different meanings of "maps" across the AI and cognitive science fields, Dhruv's experience in navigating mapless systems, and the early discovery stages of memory representation and neural mechanisms. The complete show notes for this episode can be found at twimlai.com/go/629.
2023-05-15
Länk till avsnitt

AI Agents and Data Integration with GPT and LLaMa with Jerry Liu - #628

Today we're joined by Jerry Liu, co-founder and CEO of LlamaIndex. In our conversation with Jerry, we explore the creation of LlamaIndex, a centralized interface to connect your external data with the latest large language models. We discuss the challenges of adding private data to language models and how LlamaIndex connects the two for better decision-making. We discuss the role of agents in automation, the evolution of the agent abstraction space, and the difficulties of optimizing queries over large amounts of complex data. We also cover a range of topics, from combining summarization and semantic search, to automating reasoning, to improving language model results by exploiting relationships between nodes in data. The complete show notes for this episode can be found at twimlai.com/go/628.
2023-05-08
Länk till avsnitt