Sweden's 100 most popular podcasts

Gradient Dissent: Conversations on AI

Join Lukas Biewald on Gradient Dissent, an AI-focused podcast brought to you by Weights & Biases. Dive into fascinating conversations with industry giants from NVIDIA, Meta, Google, Lyft, OpenAI, and more. Explore the cutting-edge of AI and learn the intricacies of bringing models into production.

Subscribe

iTunes / Overcast / RSS

Website

wandb.com/podcast

Episodes

Accelerating drug discovery with AI: Insights from Isomorphic Labs

In this episode of Gradient Dissent, Isomorphic Labs Chief AI Officer Max Jaderberg and Chief Technology Officer Sergei Yakneen join our host Lukas Biewald to discuss the advancements in biotech and drug discovery being unlocked with machine learning.

With backgrounds in advanced AI research at DeepMind, Max and Sergei offer their unique insights into the challenges and successes of applying AI in a complex field like biotechnology. They share their journey at Isomorphic Labs, a company dedicated to revolutionizing drug discovery with AI. In this episode, they discuss the transformative impact of deep learning on the drug development process and Isomorphic Labs' strategy to innovate from molecular design to clinical trials.

You'll come away with valuable insights into the challenges of applying AI in biotech and the role of AI in streamlining the drug discovery pipeline, and peer into the future of AI-driven solutions in healthcare.

Connect with Sergei Yakneen & Max Jaderberg:

https://www.linkedin.com/in/maxjaderberg/ 

https://www.linkedin.com/in/yakneensergei/ 

https://twitter.com/SergeiIakhnin 

https://twitter.com/maxjaderberg 

Follow Weights & Biases:

https://twitter.com/weights_biases 

https://www.linkedin.com/company/wandb 

2024-04-25
Link to episode

Redefining AI Hardware for Enterprise with SambaNova's Rodrigo Liang

Discover cutting-edge AI hardware development for enterprises in this episode of Gradient Dissent, featuring Rodrigo Liang, CEO of SambaNova Systems.

Rodrigo Liang's journey from Oracle to founding SambaNova is a tale of innovation and determination. In this episode, Rodrigo discusses the importance of specialized hardware in unlocking AI's potential for enterprise businesses and SambaNova's mission to deliver comprehensive AI solutions, from chips to models.

Explore critical insights on navigating the challenges of introducing AI to executives and the evolution of AI applications within large enterprises, and get a glimpse into the future of AI in the business world.

Get our podcasts on these platforms:

Apple Podcasts: http://wandb.me/apple-podcasts

Spotify: http://wandb.me/spotify

Google: http://wandb.me/gd_google

YouTube: http://wandb.me/youtube

Connect with Rodrigo Liang:

https://www.linkedin.com/in/rodrigo-liang/

https://twitter.com/RodrigoLiang 

 

Follow Weights & Biases:

https://twitter.com/weights_biases 

https://www.linkedin.com/company/wandb 

Join the Weights & Biases Discord Server:

https://discord.gg/CkZKRNnaf3

2024-04-11
Link to episode

Navigating the Vector Database Landscape with Pinecone's Edo Liberty

This episode of Gradient Dissent welcomes Edo Liberty, the mind behind Pinecone's revolutionary vector database technology.

A former leader at Amazon AI Labs and Yahoo's New York lab, Edo draws on an extensive background in AI research and development to explain the complexities behind vector databases and their essential role in enhancing AI's capabilities.

Discover the pivotal moments and key decisions that have defined Pinecone's journey, learn about the different embedding strategies that are reshaping AI applications, and understand how Pinecone's success has had a profound impact on the technology landscape.

Connect with Edo Liberty:

https://www.linkedin.com/in/edo-liberty-4380164/ 

https://twitter.com/EdoLiberty 

Follow Weights & Biases:

https://twitter.com/weights_biases 

https://www.linkedin.com/company/wandb 

Join the Weights & Biases Discord Server:

https://discord.gg/CkZKRNnaf3

2024-03-28
Link to episode

Transforming Data into Business Solutions with Salesforce AI CEO, Clara Shih

In this episode of Gradient Dissent, we explore the revolutionary impact of AI across industries with Clara Shih, CEO of Salesforce AI and Founder of Hearsay Systems. 

Dive into Salesforce AI's cutting-edge approach to customer service through AI, the importance of a trust-first strategy, and the future of AI policies and education. Learn how Salesforce empowers businesses and shapes the future with AI innovations like Prompt Builder and Copilot Studio. Whether you're an AI enthusiast, a business leader, or someone curious about the future of technology, this discussion offers valuable insights into navigating the rapidly evolving world of AI.

Subscribe to Weights & Biases on YouTube: https://bit.ly/45BCkYz

Get our podcasts on these platforms:

Apple Podcasts: http://wandb.me/apple-podcasts

Spotify: http://wandb.me/spotify

Google: http://wandb.me/gd_google

Connect with Clara:

https://www.linkedin.com/in/clarashih/

https://x.com/clarashih?s=20  

Follow Weights & Biases:

https://twitter.com/weights_biases 

https://www.linkedin.com/company/wandb 

2024-03-14
Link to episode

Upgrading Your Health: Navigating AI's Future In Healthcare with John Halamka of Mayo Clinic Platform

In the newest episode of Gradient Dissent, we explore the intersecting worlds of AI and Healthcare with John Halamka, President of the Mayo Clinic Platform.

Journey with us down John Halamka's remarkable path from his early tech startup days to leading innovations as the President of the Mayo Clinic Platform, one of the world's most esteemed healthcare institutions. This deep dive into AI's role in modern medicine covers the technology's evolution, its potential to redefine patient care, and the visionary work of Mayo Clinic Platform in harnessing AI responsibly.

Explore the misconceptions surrounding AI in healthcare and discover the ethical and regulatory frameworks guiding its application. Glimpse into the future with Halamka's visionary perspective on AI's potential to democratize and revolutionize healthcare across the globe. Join us for an enlightening discussion on the challenges, triumphs, and the horizon of AI in healthcare through the lens of John Halamka's pioneering experiences.

Get our podcasts on these platforms:

Apple Podcasts: http://wandb.me/apple-podcasts

Spotify: http://wandb.me/spotify

Google: http://wandb.me/gd_google

YouTube: http://wandb.me/youtube

Follow Weights & Biases:

https://twitter.com/weights_biases 

https://www.linkedin.com/company/wandb 

2024-02-29
Link to episode

Shaping the World of Robotics with Chelsea Finn

In the newest episode of Gradient Dissent, Chelsea Finn, Assistant Professor at Stanford's Computer Science Department, discusses the forefront of robotics and machine learning.

Discover her groundbreaking work, where two-armed robots learn to cook shrimp (messes included!), and hear how robotic learning could transform student feedback in education.

We'll dive into the challenges of developing humanoid and quadruped robots, explore the limitations of simulated environments and discuss why real-world experience is key for adaptable machines. Plus, Chelsea will offer a glimpse into the future of household robotics and why it may be a few years before a robot is making your bed.

Whether you're an AI enthusiast, a robotics professional, or simply curious about the potential and future of the technology, this episode offers unique insights into the evolving world of robotics and where it's headed next.

*Subscribe to Weights & Biases*: https://bit.ly/45BCkYz

Get our podcasts on these platforms:

Apple Podcasts: http://wandb.me/apple-podcasts

Spotify: http://wandb.me/spotify

Google: http://wandb.me/gd_google

YouTube: http://wandb.me/youtube

Connect with Chelsea Finn:

https://www.linkedin.com/in/cbfinn/

https://twitter.com/chelseabfinn

Follow Weights & Biases:

https://twitter.com/weights_biases

https://www.linkedin.com/company/wandb

Join the Weights & Biases Discord Server:

https://discord.gg/CkZKRNnaf3

2024-02-15
Link to episode

The Power of AI in Search with You.com's Richard Socher

In the latest episode of Gradient Dissent, Richard Socher, CEO of You.com, shares his insights on the power of AI in search. The episode focuses on how advanced language models like GPT-4 are transforming search engines and changing the way we interact with digital platforms. The discussion covers the practical applications and challenges of integrating AI into search functionality, as well as the ethical considerations and future implications of AI in our digital lives. Join us for an enlightening conversation on how AI and you.com are reshaping how we access and interact with information online.

*Subscribe to Weights & Biases*: https://bit.ly/45BCkYz

Timestamps:

00:00 - Introduction to Gradient Dissent Podcast

00:48 - Richard Socher's Journey: From Linguistic Computer Science to AI

06:42 - The Genesis and Evolution of MetaMind

13:30 - Exploring You.com's Approach to Enhanced Search

18:15 - Demonstrating You.com's AI in Mortgage Calculations

24:10 - The Power of AI in Search: A Deep Dive with You.com

30:25 - Security Measures in Running AI-Generated Code

35:50 - Building a Robust and Secure AI Tech Stack

42:33 - The Role of AI in Automating and Transforming Digital Work

48:50 - Discussing Ethical Considerations and the Societal Impact of AI

55:15 - Envisioning the Future of AI in Daily Life and Work

01:02:00 - Reflecting on the Evolution of AI and Its Future Prospects

01:05:00 - Closing Remarks and Podcast Wrap-Up

Get our podcasts on these platforms:

Apple Podcasts: http://wandb.me/apple-podcasts

Spotify: http://wandb.me/spotify

Google: http://wandb.me/gd_google

YouTube: http://wandb.me/youtube

Connect with Richard Socher:

https://www.linkedin.com/in/richardsocher/ 

https://twitter.com/RichardSocher 

Follow Weights & Biases:

https://twitter.com/weights_biases 

https://www.linkedin.com/company/wandb 

Join the Weights & Biases Discord Server:

https://discord.gg/CkZKRNnaf3

2024-02-01
Link to episode

AI's Future: Investment & Impact with Sarah Guo and Elad Gil

Explore the future of investment and impact in AI with host Lukas Biewald and guests Elad Gil and Sarah Guo of the No Priors podcast.

Sarah is the founder of Conviction VC, an AI-centric $100 million venture fund. Elad, a seasoned entrepreneur and startup investor, holds an impressive portfolio of investments in over 40 companies, each valued at $1 billion or more, and wrote the influential "High Growth Handbook."

Join us for a deep dive into the nuanced world of AI, where we'll explore its broader industry impact, focusing on how startups can seamlessly blend product-centric approaches with a balance of innovation and practical development.

*Subscribe to Weights & Biases*: https://bit.ly/45BCkYz

Timestamps:

0:00 - Introduction 

5:15 - Exploring Fine-Tuning vs RAG in AI

10:30 - Evaluating AI Research for Investment

15:45 - Impact of AI Models on Product Development

20:00 - AI's Role in Evolving Job Markets

25:15 - The Balance Between AI Research and Product Development

30:00 - Code Generation Technologies in Software Engineering

35:00 - AI's Broader Industry Implications

40:00 - Importance of Product-Driven Approaches in AI Startups

45:00 - AI in Various Sectors: Beyond Software Engineering

50:00 - Open Source vs Proprietary AI Models

55:00 - AI's Impact on Traditional Roles and Industries

1:00:00 - Closing Thoughts 

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

Follow Weights & Biases:

YouTube: http://wandb.me/youtube

Twitter: https://twitter.com/weights_biases 

LinkedIn: https://www.linkedin.com/company/wandb 

Join the Weights & Biases Discord Server:

https://discord.gg/CkZKRNnaf3

#OCR #DeepLearning #AI #Modeling #ML

2024-01-18
Link to episode

Revolutionizing AI Data Management with Jerry Liu, CEO of LlamaIndex

In the latest episode of Gradient Dissent, we explore the innovative features and impact of LlamaIndex in AI data management with Jerry Liu, CEO of LlamaIndex. Jerry shares insights on how LlamaIndex integrates diverse data formats with advanced AI technologies, addressing challenges in data retrieval, analysis, and conversational memory. We also delve into the future of AI-driven systems and LlamaIndex's role in this rapidly evolving field. This episode is a must-watch for anyone interested in AI, data science, and the future of technology.

Timestamps:

0:00 - Introduction 

4:46 - Differentiating LlamaIndex in the AI framework ecosystem.

9:00 - Discussing data analysis, search, and retrieval applications.

14:17 - Exploring Retrieval Augmented Generation (RAG) and vector databases.

19:33 - Implementing and optimizing One Bot in Discord.

24:19 - Developing and evaluating datasets for AI systems.

28:00 - Community contributions and the growth of LlamaIndex.

34:34 - Discussing embedding models and the use of vector databases.

39:33 - Addressing AI model hallucinations and fine-tuning.

44:51 - Text extraction applications and agent-based systems in AI.

49:25 - Community contributions to LlamaIndex and managing refactors.

52:00 - Interactions with big tech's corpus and AI context length.

54:59 - Final thoughts on underrated aspects of ML and challenges in AI.

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

Connect with Jerry:

https://twitter.com/jerryjliu0

https://www.linkedin.com/in/jerry-liu-64390071/

Follow Weights & Biases:

YouTube: http://wandb.me/youtube

Twitter: https://twitter.com/weights_biases 

LinkedIn: https://www.linkedin.com/company/wandb 

Join the Weights & Biases Discord Server:

https://discord.gg/CkZKRNnaf3

#OCR #DeepLearning #AI #Modeling #ML

2024-01-04
Link to episode

Bridging AI and Science: The Impact of Machine Learning on Material Innovation with Joe Spisak of Meta

In the latest episode of Gradient Dissent, we hear from Joseph Spisak, Product Director, Generative AI at Meta, about AI's boundless impact and its expansive role in reshaping various sectors.

We delve into the intricacies of models like GPT and Llama2, their influence on user experiences, and AI's groundbreaking contributions to fields like biology, material science, and green hydrogen production through the Open Catalyst Project. The episode also examines AI's practical business applications, from document summarization to intelligent note-taking, addressing the ethical complexities of AI deployment. 

We wrap up with a discussion on the significance of open-source AI development, community collaboration, and AI democratization. 

Tune in for valuable insights into the expansive world of AI, relevant to developers, business leaders, and tech enthusiasts.

We discuss:

0:00 Intro

0:32 Joe is Back at Meta

3:28 What Does Meta Get Out Of Putting Out LLMs?

8:24 Measuring The Quality Of LLMs

10:55 How Do You Pick The Sizes Of Models

16:45 Advice On Choosing Which Model To Start With

24:57 The Secret Sauce In The Training

26:17 What Is Being Worked On Now

33:00 The Safety Mechanisms In Llama 2

37:00 The Datasets Llama 2 Is Trained On

38:00 On Multilingual Capabilities & Tone

43:30 On The Biggest Applications Of Llama 2

47:25 On Why The Best Teams Are Built By Users

54:01 The Culture Differences Of Meta vs Open Source

57:39 The AI Learning Alliance

1:01:34 Where To Learn About Machine Learning

1:05:10 Why AI For Science Is Under-rated

1:11:36 What Are The Biggest Issues With Real-World Applications

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#OCR #DeepLearning #AI #Modeling #ML

2023-12-07
Link to episode

Unlocking the Power of Language Models in Enterprise: A Deep Dive with Chris Van Pelt

In the premiere episode of Gradient Dissent Business, we're joined by Weights & Biases co-founder Chris Van Pelt for a deep dive into the world of large language models like GPT-3.5 and GPT-4. Chris bridges his expertise as both a tech founder and AI expert, offering key strategies for startups seeking to connect with early users, and for enterprises experimenting with AI. He highlights the melding of AI and traditional web development, sharing his insights on product evolution, leadership, and the power of customer conversations, even for the most introverted founders. He shares how personal development and authentic co-founder relationships enrich business dynamics. Join us for a compelling episode brimming with actionable advice for those looking to innovate with language models, all while managing the inherent complexities. Don't miss Chris Van Pelt's invaluable take on the future of AI in this thought-provoking installment of Gradient Dissent Business.

We discuss:

0:00 - Intro

5:59 - Impactful relationships in Chris's life

13:15 - Advice for finding co-founders

16:25 - Chris's fascination with challenging problems

22:30 - Tech stack for AI labs

30:50 - Impactful capabilities of AI models

36:24 - How this AI era is different

47:36 - Advising large enterprises on language model integration

51:18 - Using language models for business intelligence and automation

52:13 - Closing thoughts and appreciation

Thanks for listening to the Gradient Dissent Business podcast, with hosts Lavanya Shukla and Caryn Marooney, brought to you by Weights & Biases. Be sure to click the subscribe button below to keep your finger on the pulse of this fast-moving space and hear from other amazing guests.

#OCR #DeepLearning #AI #Modeling #ML

2023-11-16
Link to episode

Providing Greater Access to LLMs with Brandon Duderstadt, Co-Founder and CEO of Nomic AI

On this episode, we're joined by Brandon Duderstadt, Co-Founder and CEO of Nomic AI. Both of Nomic AI's products, Atlas and GPT4All, aim to improve the explainability and accessibility of AI.

We discuss:

- (0:55) What GPT4All is and its value proposition.

- (6:56) The advantages of using smaller LLMs for specific tasks. 

- (9:42) Brandon's thoughts on the cost of training LLMs. 

- (10:50) Details about the current state of fine-tuning LLMs. 

- (12:20) What quantization is and what it does. 

- (21:16) What Atlas is and what it allows you to do.

- (27:30) Training code models versus language models.

- (32:19) Details around evaluating different models.

- (38:34) The opportunity for smaller companies to build open-source models. 

- (42:00) Prompt chaining versus fine-tuning models.

Resources mentioned:

Brandon Duderstadt - https://www.linkedin.com/in/brandon-duderstadt-a3269112a/

Nomic AI - https://www.linkedin.com/company/nomic-ai/

Nomic AI Website - https://home.nomic.ai/

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#OCR #DeepLearning #AI #Modeling #ML

2023-07-27
Link to episode

Exploring PyTorch and Open-Source Communities with Soumith Chintala, VP/Fellow of Meta, Co-Creator of PyTorch

On this episode, we're joined by Soumith Chintala, VP/Fellow of Meta and Co-Creator of PyTorch. Soumith and his colleagues' open-source framework shaped both the development process and the end-user experience of what would become PyTorch.

We discuss:

- The history of PyTorch's development and TensorFlow's impact on development decisions.

- How a symbolic execution model affects the implementation speed of an ML compiler.

- The strengths of different programming languages in various development stages.

- The importance of customer engagement as a measure of success instead of hard metrics.

- Why community-guided innovation offers an effective development roadmap.

- How PyTorch's open-source nature cultivates an efficient development ecosystem.

- The role of community building in consolidating assets for more creative innovation.

- How to protect community values in an open-source development environment.

- The value of an intrinsic organizational motivation structure.

- The ongoing debate between open-source and closed-source products, especially as it relates to AI and machine learning.

Resources:

- Soumith Chintala

https://www.linkedin.com/in/soumith/

- Meta | LinkedIn

https://www.linkedin.com/company/meta/

- Meta | Website

https://about.meta.com/

- PyTorch

https://pytorch.org/

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#OCR #DeepLearning #AI #Modeling #ML

2023-07-13
Link to episode

Advanced AI Accelerators and Processors with Andrew Feldman of Cerebras Systems

On this episode, we're joined by Andrew Feldman, Founder and CEO of Cerebras Systems. Andrew and the Cerebras team are responsible for building the largest-ever computer chip and the fastest AI-specific processor in the industry.

We discuss:

- The advantages of using large chips for AI work.

- Cerebras Systems' process for building chips optimized for AI.

- Why traditional GPUs aren't the optimal machines for AI work.

- Why efficiently distributing computing resources is a significant challenge for AI work.

- How much faster Cerebras Systems' machines are than other processors on the market.

- Reasons why some ML-specific chip companies fail and what Cerebras does differently.

- Unique challenges for chip makers and hardware companies.

- Cooling and heat-transfer techniques for Cerebras machines.

- How Cerebras approaches building chips that will fit the needs of customers for years to come.

- Why the strategic vision for what data to collect for ML needs more discussion.

Resources:

Andrew Feldman - https://www.linkedin.com/in/andrewdfeldman/

Cerebras Systems - https://www.linkedin.com/company/cerebras-systems/

Cerebras Systems | Website - https://www.cerebras.net/

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#OCR #DeepLearning #AI #Modeling #ML

2023-06-22
Link to episode

Enabling LLM-Powered Applications with Harrison Chase of LangChain

On this episode, we're joined by Harrison Chase, Co-Founder and CEO of LangChain. Harrison and his team at LangChain are on a mission to make the process of creating applications powered by LLMs as easy as possible.

We discuss:

- What LangChain is and examples of how it works. 

- Why LangChain has gained so much attention. 

- When LangChain started and what sparked its growth. 

- Harrison's approach to community-building around LangChain. 

- Real-world use cases for LangChain.

- What parts of LangChain Harrison is proud of and which parts can be improved.

- Details around evaluating effectiveness in the ML space.

- Harrison's opinion on fine-tuning LLMs.

- The importance of detailed prompt engineering.

- Predictions for the future of LLM providers.

Resources:

Harrison Chase - https://www.linkedin.com/in/harrison-chase-961287118/

LangChain | LinkedIn - https://www.linkedin.com/company/langchain/

LangChain | Website - https://docs.langchain.com/docs/

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#OCR #DeepLearning #AI #Modeling #ML

2023-06-01
Link to episode

Deploying Autonomous Mobile Robots with Jean Marc Alkazzi at idealworks

On this episode, we're joined by Jean Marc Alkazzi, Applied AI at idealworks. Jean focuses on applied AI, leveraging autonomous mobile robots (AMRs) to improve efficiency within factories and more.

We discuss:

- Use cases for autonomous mobile robots (AMRs) and how to manage a fleet of them. 

- How AMRs interact with humans working in warehouses.

- The challenges of building and deploying autonomous robots.

- Computer vision vs. other types of localization technology for robots.

- The purpose and types of simulation environments for robotic testing.

- The importance of aligning a robotic fleet's workflow with concrete business objectives.

- What the update process looks like for robots.

- The importance of avoiding your own biases when developing and testing AMRs.

- The challenges associated with troubleshooting ML systems.

Resources: 

Jean Marc Alkazzi - https://www.linkedin.com/in/jeanmarcjeanazzi/

idealworks | LinkedIn - https://www.linkedin.com/company/idealworks-gmbh/

idealworks | Website - https://idealworks.com/

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#OCR #DeepLearning #AI #Modeling #ML

2023-05-18
Link to episode

How EleutherAI Trains and Releases LLMs: Interview with Stella Biderman

On this episode, we're joined by Stella Biderman, Executive Director at EleutherAI and Lead Scientist - Mathematician at Booz Allen Hamilton.

EleutherAI is a grassroots collective that enables open-source AI research and focuses on the development and interpretability of large language models (LLMs).

We discuss:

- How EleutherAI got its start and where it's headed.

- The similarities and differences between various LLMs.

- How to decide which model to use for your desired outcome.

- The benefits and challenges of reinforcement learning from human feedback.

- Details around pre-training and fine-tuning LLMs.

- Which types of GPUs are best when training LLMs.

- What separates EleutherAI from other companies training LLMs.

- Details around mechanistic interpretability.

- Why understanding what and how LLMs memorize is important.

- The importance of giving researchers and the public access to LLMs.

Stella Biderman - https://www.linkedin.com/in/stellabiderman/

EleutherAI - https://www.linkedin.com/company/eleutherai/

Resources:

- https://www.eleuther.ai/

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#OCR #DeepLearning #AI #Modeling #ML

2023-05-04
Link to episode

Scaling LLMs and Accelerating Adoption with Aidan Gomez at Cohere

On this episode, we're joined by Aidan Gomez, Co-Founder and CEO at Cohere. Cohere develops and releases a range of innovative AI-powered tools and solutions for a variety of NLP use cases.

We discuss:

- What "attention" means in the context of ML.

- Aidan's role in the "Attention Is All You Need" paper.

- What state-space models (SSMs) are, and how they could be an alternative to transformers. 

- What it means for an ML architecture to saturate compute.

- Details around data constraints for when LLMs scale.

- Challenges of measuring LLM performance.

- How Cohere is positioned within the LLM development space.

- Insights around scaling down an LLM into a more domain-specific one.

- Concerns around synthetic content and AI changing public discourse.

- The importance of raising money at healthy milestones for AI development.

Aidan Gomez - https://www.linkedin.com/in/aidangomez/

Cohere - https://www.linkedin.com/company/cohere-ai/

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

Resources:

- https://cohere.ai/

- "Attention Is All You Need"

#OCR #DeepLearning #AI #Modeling #ML

2023-04-20
Link to episode

Neural Network Pruning and Training with Jonathan Frankle at MosaicML

Jonathan Frankle, Chief Scientist at MosaicML and Assistant Professor of Computer Science at Harvard University, joins us on this episode. With comprehensive infrastructure and software tools, MosaicML aims to help businesses train complex machine-learning models using their own proprietary data.

We discuss:

- Details of Jonathan's Ph.D. dissertation, which explores his "Lottery Ticket Hypothesis."

- The role of neural network pruning and how it impacts the performance of ML models.

- Why transformers will be the go-to way to train NLP models for the foreseeable future.

- Why the process of speeding up neural net learning is both scientific and artisanal. 

- What MosaicML does, and how it approaches working with clients.

- The challenges for developing AGI.

- Details around ML training policy and ethics.

- Why data brings the magic to customized ML models.

- The many use cases for companies looking to build customized AI models.

Jonathan Frankle - https://www.linkedin.com/in/jfrankle/

Resources:

- https://mosaicml.com/

- The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#OCR #DeepLearning #AI #Modeling #ML

2023-04-04
Link to episode

Shreya Shankar - Operationalizing Machine Learning

About This Episode

Shreya Shankar is a computer scientist, PhD student in databases at UC Berkeley, and co-author of "Operationalizing Machine Learning: An Interview Study", an ethnographic interview study with 18 machine learning engineers across a variety of industries on their experience deploying and maintaining ML pipelines in production.

Shreya explains the high-level findings of "Operationalizing Machine Learning": variables that indicate a successful deployment (velocity, validation, and versioning), common pain points, and a grouping of the MLOps tool stack into four layers. Shreya and Lukas also discuss examples of data challenges in production, Jupyter Notebooks, and reproducibility.

Show notes (transcript and links): http://wandb.me/gd-shreya

---

*Host:* Lukas Biewald

---

*Subscribe and listen to Gradient Dissent today!*

Apple Podcasts: http://wandb.me/apple-podcasts

Google Podcasts: http://wandb.me/google-podcasts

Spotify: http://wandb.me/spotify

2023-03-03
Link to episode

Sarah Catanzaro - Remembering the Lessons of the Last AI Renaissance

Sarah Catanzaro is a General Partner at Amplify Partners, and one of the leading investors in AI and ML. Her investments include RunwayML, OctoML, and Gantry.

Sarah and Lukas discuss lessons learned from the "AI renaissance" of the mid 2010s and compare the general perception of ML back then to now. Sarah also provides insights from her perspective as an investor, from selling into tech-forward companies vs. traditional enterprises, to the current state of MLOps/developer tools, to large language models and hype bubbles.

Show notes (transcript and links): http://wandb.me/gd-sarah-catanzaro

---

? Timestamps:

0:00 Intro

1:10 Lessons learned from previous AI hype cycles

11:46 Maintaining technical knowledge as an investor

19:05 Selling into tech-forward companies vs. traditional enterprises

25:09 Building point solutions vs. end-to-end platforms

36:27 LLMs, new tooling, and commoditization

44:39 Failing fast and how startups can compete with large cloud vendors

52:31 The gap between research and industry, and vice versa

1:00:01 Advice for ML practitioners during hype bubbles

1:03:17 Sarah's thoughts on Rust and bottlenecks in deployment

1:11:23 The importance of aligning technology with people

1:15:58 Outro

---

? Links

? "Operationalizing Machine Learning: An Interview Study" (Shankar et al., 2022), an interview study on deploying and maintaining ML production pipelines: https://arxiv.org/abs/2209.09125

---

Connect with Sarah:

? Sarah on Twitter: https://twitter.com/sarahcat21

? Sarah's Amplify Partners profile: https://www.amplifypartners.com/investment-team/sarah-catanzaro

---

? Host: Lukas Biewald

? Producers: Riley Fields, Angelica Pan

---

Subscribe and listen to Gradient Dissent today!

? Apple Podcasts: http://wandb.me/apple-podcasts

? Google Podcasts: http://wandb.me/google-podcasts

? Spotify: http://wandb.me/spotify

2023-02-02
Länk till avsnitt

Cristóbal Valenzuela – The Next Generation of Content Creation and AI

Cristóbal Valenzuela is co-founder and CEO of Runway ML, a startup that's building the future of AI-powered content creation tools. Runway's research areas include diffusion systems for image generation.

Cris gives a demo of Runway's video editing platform. Then, he shares how his interest in combining technology with creativity led to Runway, and where he thinks the world of computation and content might be headed to next. Cris and Lukas also discuss Runway's tech stack and research.

Show notes (transcript and links): http://wandb.me/gd-cristobal-valenzuela

---

? Timestamps:

0:00 Intro

1:06 How Runway uses ML to improve video editing

6:04 A demo of Runway's video editing capabilities

13:36 How Cris entered the machine learning space

18:55 Cris' thoughts on the future of ML for creative use cases

28:46 Runway?s tech stack

32:38 Creativity, and keeping humans in the loop

36:15 The potential of audio generation and new mental models

40:01 Outro

---

? Runway's AI Film Festival is accepting submissions through January 23!

They are looking for art and artists that are at the forefront of AI filmmaking. Submissions should be between 1-10 minutes long, and a core component of the film should include generative content.

? https://aiff.runwayml.com/

--

? Links

? "High-Resolution Image Synthesis with Latent Diffusion Models" (Rombach et al., 2022), the research paper behind Stable Diffusion: https://research.runwayml.com/publications/high-resolution-image-synthesis-with-latent-diffusion-models

? Lexman Artificial, a 100% AI-generated podcast: https://twitter.com/lexman_ai

---

Connect with Cris and Runway:

? Cris on Twitter: https://twitter.com/c_valenzuelab

? Runway on Twitter: https://twitter.com/runwayml

? Careers at Runway: https://runwayml.com/careers/

---

? Host: Lukas Biewald

? Producers: Riley Fields, Angelica Pan

---

Subscribe and listen to Gradient Dissent today!

? Apple Podcasts: http://wandb.me/apple-podcasts

? Google Podcasts: http://wandb.me/google-podcasts

? Spotify: http://wandb.me/spotify

2023-01-19
Länk till avsnitt

Jeremy Howard – The Simple but Profound Insight Behind Diffusion

Jeremy Howard is a co-founder of fast.ai, the non-profit research group behind the popular massive open online course "Practical Deep Learning for Coders", and the open source deep learning library "fastai".

Jeremy is also a co-founder of #Masks4All, a global volunteer organization founded in March 2020 that advocated for the public adoption of homemade face masks in order to help slow the spread of COVID-19. His Washington Post article "Simple DIY masks could help flatten the curve." went viral in late March/early April 2020, and is associated with the U.S. CDC's change in guidance a few days later to recommend wearing masks in public.

In this episode, Jeremy explains how diffusion works and how individuals with limited compute budgets can engage meaningfully with large, state-of-the-art models. Then, as our first-ever repeat guest on Gradient Dissent, Jeremy revisits a previous conversation with Lukas on Python vs. Julia for machine learning.

Finally, Jeremy shares his perspective on the early days of COVID-19, and what his experience as one of the earliest and most high-profile advocates for widespread mask-wearing was like.

Show notes (transcript and links): http://wandb.me/gd-jeremy-howard-2

---

? Timestamps:

0:00 Intro

1:06 Diffusion and generative models

14:40 Engaging with large models meaningfully

20:30 Jeremy's thoughts on Stable Diffusion and OpenAI

26:38 Prompt engineering and large language models

32:00 Revisiting Julia vs. Python

40:22 Jeremy's science advocacy during early COVID days

1:01:03 Researching how to improve children's education

1:07:43 The importance of executive buy-in

1:11:34 Outro

1:12:02 Bonus: Weights & Biases

---

? Links

? Jeremy's previous Gradient Dissent episode (8/25/2022): http://wandb.me/gd-jeremy-howard

? "Simple DIY masks could help flatten the curve. We should all wear them in public.", Jeremy's viral Washington Post article: https://www.washingtonpost.com/outlook/2020/03/28/masks-all-coronavirus/

? "An evidence review of face masks against COVID-19" (Howard et al., 2021), one of the first peer-reviewed papers on the effectiveness of wearing masks: https://www.pnas.org/doi/10.1073/pnas.2014564118

? Jeremy's Twitter thread summary of "An evidence review of face masks against COVID-19": https://twitter.com/jeremyphoward/status/1348771993949151232

? Read more about Jeremy's mask-wearing advocacy: https://www.smh.com.au/world/north-america/australian-expat-s-push-for-universal-mask-wearing-catches-fire-in-the-us-20200401-p54fu2.html

---

Connect with Jeremy and fast.ai:

? Jeremy on Twitter: https://twitter.com/jeremyphoward

? fast.ai on Twitter: https://twitter.com/FastDotAI

? Jeremy on LinkedIn: https://www.linkedin.com/in/howardjeremy/

---

? Host: Lukas Biewald

? Producers: Riley Fields, Angelica Pan

2023-01-05
Länk till avsnitt

Jerome Pesenti – Large Language Models, PyTorch, and Meta

Jerome Pesenti is the former VP of AI at Meta, a tech conglomerate that includes Facebook, WhatsApp, and Instagram, and one of the most exciting places where AI research is happening today.

Jerome shares his thoughts on Transformers-based large language models, and why he's excited by the progress but skeptical of the term "AGI". Then, he discusses some of the practical applications of ML at Meta (recommender systems and moderation!) and dives into the story behind Meta's development of PyTorch. Jerome and Lukas also chat about Jerome's time at IBM Watson and in drug discovery.

Show notes (transcript and links): http://wandb.me/gd-jerome-pesenti

---

? Timestamps:

0:00 Intro

0:28 Jerome's thoughts on large language models

12:53 AI applications and challenges at Meta

18:41 The story behind developing PyTorch

26:40 Jerome's experience at IBM Watson

28:53 Drug discovery, AI, and changing the game

36:10 The potential of education and AI

40:10 Meta and AR/VR interfaces

43:43 Why NVIDIA is such a powerhouse

47:08 Jerome's advice to people starting their careers

48:50 Going back to coding, the challenges of scaling

52:11 Outro

---

Connect with Jerome:

? Jerome on Twitter: https://twitter.com/an_open_mind

? Jerome on LinkedIn: https://www.linkedin.com/in/jpesenti/

---

? Host: Lukas Biewald

? Producers: Riley Fields, Angelica Pan, Lavanya Shukla

---

Subscribe and listen to our podcast today!

? Apple Podcasts: http://wandb.me/apple-podcasts

? Google Podcasts: http://wandb.me/google-podcasts

? Spotify: http://wandb.me/spotify

2022-12-22
Länk till avsnitt

D. Sculley – Technical Debt, Trade-offs, and Kaggle

D. Sculley is CEO of Kaggle, the beloved and well-known data science and machine learning community.

D. discusses his influential 2015 paper "Machine Learning: The High Interest Credit Card of Technical Debt" and what the current challenges of deploying models in the real world are now, in 2022. Then, D. and Lukas chat about why Kaggle is like a rain forest, and about Kaggle's historic, current, and potential future roles in the broader machine learning community.

Show notes (transcript and links): http://wandb.me/gd-d-sculley

---

? Timestamps:

0:00 Intro

1:02 Machine learning and technical debt

11:18 MLOps, increased stakes, and realistic expectations

19:12 Evaluating models methodically

25:32 Kaggle's role in the ML world

33:34 Kaggle competitions, datasets, and notebooks

38:49 Why Kaggle is like a rain forest

44:25 Possible future directions for Kaggle

46:50 Healthy competitions and self-growth

48:44 Kaggle's relevance in a compute-heavy future

53:49 AutoML vs. human judgment

56:06 After a model goes into production

1:00:00 Outro

---

Connect with D. and Kaggle:

? D. on LinkedIn: https://www.linkedin.com/in/d-sculley-90467310/

? Kaggle on Twitter: https://twitter.com/kaggle

---

Links:

? "Machine Learning: The High Interest Credit Card of Technical Debt" (Sculley et al. 2014): https://research.google/pubs/pub43146/

---

? Host: Lukas Biewald

? Producers: Riley Fields, Angelica Pan, Anish Shah, Lavanya Shukla

---

Subscribe and listen to our podcast today!

? Apple Podcasts: http://wandb.me/apple-podcasts

? Google Podcasts: http://wandb.me/google-podcasts

? Spotify: http://wandb.me/spotify

2022-12-01
Länk till avsnitt

Emad Mostaque – Stable Diffusion, Stability AI, and What's Next

Emad Mostaque is CEO and co-founder of Stability AI, a startup and network of decentralized developer communities building open AI tools. Stability AI is the company behind Stable Diffusion, the well-known, open source, text-to-image generation model.

Emad shares the story and mission behind Stability AI (unlocking humanity's potential with open AI technology), and explains how Stability's role as a community catalyst and compute provider might evolve as the company grows. Then, Emad and Lukas discuss what the future might hold in store: big models vs "optimal" models, better datasets, and more decentralization.

-

? Special note: This week's theme music was composed by Weights & Biases' own Justin Tenuto with help from Harmonai's Dance Diffusion.

-

Show notes (transcript and links): http://wandb.me/gd-emad-mostaque

-


? Host: Lukas Biewald

? Producers: Riley Fields, Angelica Pan, Lavanya Shukla, Anish Shah

-

Subscribe and listen to our podcast today!

? Apple Podcasts: http://wandb.me/apple-podcasts

? Google Podcasts: http://wandb.me/google-podcasts

? Spotify: http://wandb.me/spotify

2022-11-15
Länk till avsnitt

Jehan Wickramasuriya – AI in High-Stress Scenarios

Jehan Wickramasuriya is the Vice President of AI, Platform & Data Services at Motorola Solutions, a global leader in public safety and enterprise security.

In this episode, Jehan discusses how Motorola Solutions uses AI to simplify data streams to help maximize human potential in high-stress situations. He also shares his thoughts on augmenting synthetic data with real data and the challenges posed in partnering with startups.

Show notes (transcript and links): http://wandb.me/gd-jehan-wickramasuriya

-

? Timestamps:

00:00 Intro

00:42 How AI fits into the safety/security industry

09:33 Event matching and object detection

14:47 Running models on the right hardware

17:46 Scaling model evaluation

23:58 Monitoring and evaluation challenges

26:30 Identifying and sorting issues

30:27 Bridging vision and language domains

39:25 Challenges and promises of natural language technology

41:35 Production environment

43:15 Using synthetic data

49:59 Working with startups

53:55 Multi-task learning, meta-learning, and user experience

56:44 Optimization and testing across multiple platforms

59:36 Outro

-

Connect with Jehan and Motorola Solutions:

? Jehan on LinkedIn: https://www.linkedin.com/in/jehanw/

? Jehan on Twitter: https://twitter.com/jehan/

? Motorola Solutions on Twitter: https://twitter.com/MotoSolutions/

? Careers at Motorola Solutions: https://www.motorolasolutions.com/en_us/about/careers.html

-

? Host: Lukas Biewald

? Producers: Riley Fields, Cayla Sharp, Angelica Pan, Lavanya Shukla

-

Subscribe and listen to our podcast today!

? Apple Podcasts: http://wandb.me/apple-podcasts

? Google Podcasts: http://wandb.me/google-podcasts

? Spotify: http://wandb.me/spotify

2022-10-06
Länk till avsnitt

Will Falcon – Making Lightning the Apple of ML

Will Falcon is the CEO and co-founder of Lightning AI, a platform that enables users to quickly build and publish ML models.

In this episode, Will explains how Lightning addresses the challenges of a fragmented AI ecosystem and reveals which framework PyTorch Lightning was originally built upon (hint: not PyTorch!). He also shares lessons he took from his experience serving in the military and offers a recommendation to veterans who want to work in tech.

Show notes (transcript and links): http://wandb.me/gd-will-falcon

---

? Timestamps:

00:00 Intro

01:00 From SEAL training to FAIR

04:17 Stress-testing Lightning

07:55 Choosing PyTorch over TensorFlow and other frameworks

13:16 Components of the Lightning platform

17:01 Launching Lightning from Facebook

19:09 Similarities between leadership and research

22:08 Lessons from the military

26:56 Scaling PyTorch Lightning to Lightning AI

33:21 Hiring the right people

35:21 The future of Lightning

39:53 Reducing algorithm complexity in self-supervised learning

42:19 A fragmented ML landscape

44:35 Outro

---

Connect with Lightning

? Website: https://lightning.ai

? Twitter: https://twitter.com/LightningAI

? LinkedIn: https://www.linkedin.com/company/pytorch-lightning/

? Careers: https://boards.greenhouse.io/lightningai

---

? Host: Lukas Biewald

? Producers: Riley Fields, Anish Shah, Cayla Sharp, Angelica Pan, Lavanya Shukla

---

Subscribe and listen to our podcast today!

? Apple Podcasts: http://wandb.me/apple-podcasts

? Google Podcasts: http://wandb.me/google-podcasts

? Spotify: http://wandb.me/spotify

2022-09-15
Länk till avsnitt

Aaron Colak – ML and NLP in Experience Management

Aaron Colak is the Leader of Core Machine Learning at Qualtrics, an experience management company that takes large language models and applies them to real-world, B2B use cases.

In this episode, Aaron describes mixing classical linguistic analysis with deep learning models and how Qualtrics organized their machine learning organizations and model to leverage the best of these techniques. He also explains how advances in NLP have invited new opportunities in low-resource languages.

Show notes (transcript and links): http://wandb.me/gd-aaron-colak

---

? Timestamps:

00:00 Intro

00:57 Evolving from surveys to experience management

04:56 Detecting sentiment with ML

10:57 Working with large language models and rule-based systems

14:50 Zero-shot learning, NLP, and low-resource languages

20:11 Letting customers control data

25:13 Deep learning and tabular data

28:40 Hyperscalers and performance monitoring

34:54 Combining deep learning with linguistics

40:03 A sense of accomplishment

42:52 Causality and observational data in healthcare

45:09 Challenges of interdisciplinary collaboration

49:27 Outro

---

Connect with Aaron and Qualtrics

? Aaron on LinkedIn: https://www.linkedin.com/in/aaron-r-colak-3522308/

? Qualtrics on Twitter: https://twitter.com/qualtrics/

? Careers at Qualtrics: https://www.qualtrics.com/careers/

---

? Host: Lukas Biewald

? Producers: Riley Fields, Cayla Sharp, Angelica Pan, Lavanya Shukla

---

Subscribe and listen to our podcast today!

? Apple Podcasts: http://wandb.me/apple-podcasts

? Google Podcasts: http://wandb.me/google-podcasts

? Spotify: http://wandb.me/spotify

2022-08-26
Länk till avsnitt

Jordan Fisher – Skipping the Line with Autonomous Checkout

Jordan Fisher is the CEO and co-founder of Standard AI, an autonomous checkout company that's pushing the boundaries of computer vision.

In this episode, Jordan discusses "the Wild West" of the MLOps stack and tells Lukas why Rust beats Python. He also explains why AutoML shouldn't be overlooked and uses a bag of chips to help explain the manifold hypothesis.

Show notes (transcript and links): http://wandb.me/gd-jordan-fisher

---

? Timestamps:

00:00 Intro

00:40 The origins of Standard AI

08:30 Getting Standard into stores

18:00 Supervised learning, the advent of synthetic data, and the manifold hypothesis

24:23 What's important in a MLOps stack

27:32 The merits of AutoML

30:00 Deep learning frameworks

33:02 Python versus Rust

39:32 Raw camera data versus video

42:47 The future of autonomous checkout

48:02 Sharing the StandardSim data set

52:30 Picking the right tools

54:30 Overcoming dynamic data set challenges

57:35 Outro

---

Connect with Jordan and Standard AI

? Jordan on LinkedIn: https://www.linkedin.com/in/jordan-fisher-81145025/

? Standard AI on Twitter: https://twitter.com/StandardAi

? Careers at Standard AI: https://careers.standard.ai/

---

? Host: Lukas Biewald

? Producers: Riley Fields, Cayla Sharp, Angelica Pan, Lavanya Shukla

---

Subscribe and listen to our podcast today!

? Apple Podcasts: http://wandb.me/apple-podcasts

? Google Podcasts: http://wandb.me/google-podcasts

? Spotify: http://wandb.me/spotify

2022-08-04
Länk till avsnitt

Drago Anguelov – Robustness, Safety, and Scalability at Waymo

Drago Anguelov is a Distinguished Scientist and Head of Research at Waymo, an autonomous driving technology company and subsidiary of Alphabet Inc.

We begin by discussing Drago's work on the original Inception architecture, winner of the 2014 ImageNet challenge and introduction of the inception module. Then, we explore milestones and current trends in autonomous driving, from Waymo's release of the Open Dataset to the trade-offs between modular and end-to-end systems.

Drago also shares his thoughts on finding rare examples, and the challenges of creating scalable and robust systems.

Show notes (transcript and links): http://wandb.me/gd-drago-anguelov

---

? Timestamps:

0:00 Intro

0:45 The story behind the Inception architecture

13:51 Trends and milestones in autonomous vehicles

23:52 The challenges of scalability and simulation

30:19 Why LiDar and mapping are useful

35:31 Waymo Via and autonomous trucking

37:31 Robustness and unsupervised domain adaptation

40:44 Why Waymo released the Waymo Open Dataset

49:02 The domain gap between simulation and the real world

56:40 Finding rare examples

1:04:34 The challenges of production requirements

1:08:36 Outro

---

Connect with Drago & Waymo

? Drago on LinkedIn: https://www.linkedin.com/in/dragomiranguelov/

? Waymo on Twitter: https://twitter.com/waymo/

? Careers at Waymo: https://waymo.com/careers/

---

Links:

? Inception v1: https://arxiv.org/abs/1409.4842

? "SPG: Unsupervised Domain Adaptation for 3D Object Detection via Semantic Point Generation", Qiangeng Xu et al. (2021), https://arxiv.org/abs/2108.06709

? "GradTail: Learning Long-Tailed Data Using Gradient-based Sample Weighting", Zhao Chen et al. (2022), https://arxiv.org/abs/2201.05938

---

? Host: Lukas Biewald

? Producers: Cayla Sharp, Angelica Pan, Lavanya Shukla

---

Subscribe and listen to our podcast today!

? Apple Podcasts: http://wandb.me/apple-podcasts

? Google Podcasts: http://wandb.me/google-podcasts

? Spotify: http://wandb.me/spotify

2022-07-14
Länk till avsnitt

James Cham – Investing in the Intersection of Business and Technology

James Cham is a co-founder and partner at Bloomberg Beta, an early-stage venture firm that invests in machine learning and the future of work, the intersection between business and technology.

James explains how his approach to investing in AI has developed over the last decade, which signals of success he looks for in the ever-adapting world of venture startups (tip: look for the "gradient of admiration"), and why it's so important to demystify ML for executives and decision-makers.

Lukas and James also discuss how new technologies create new business models, and what the ethical considerations of a world where machine learning is accepted to be possibly fallible would be like.

Show notes (transcript and links): http://wandb.me/gd-james-cham

---

? Timestamps:

0:00 Intro

0:46 How investment in AI has changed and developed

7:08 Creating the first MI landscape infographics

10:30 The impact of ML on organizations and management

17:40 Demystifying ML for executives

21:40 Why signals of successful startups change over time

27:07 ML and the emergence of new business models

37:58 New technology vs new consumer goods

39:50 What James considers when investing

44:19 Ethical considerations of accepting that ML models are fallible

50:30 Reflecting on past investment decisions

52:56 Thoughts on consciousness and Theseus' paradox

59:08 Why it's important to increase general ML literacy

1:03:09 Outro

1:03:30 Bonus: How James' faith informs his thoughts on ML

---

Connect with James:

? Twitter: https://twitter.com/jamescham

? Bloomberg Beta: https://github.com/Bloomberg-Beta/Manual

---

Links:

? "Street-Level Algorithms: A Theory at the Gaps Between Policy and Decisions" by Ali Alkhatib and Michael Bernstein (2019): https://doi.org/10.1145/3290605.3300760

---

? Host: Lukas Biewald

? Producers: Cayla Sharp, Angelica Pan, Lavanya Shukla

---

Subscribe and listen to our podcast today!

? Apple Podcasts: http://wandb.me/apple-podcasts

? Google Podcasts: http://wandb.me/google-podcasts

? Spotify: http://wandb.me/spotify

2022-07-07
Länk till avsnitt

Boris Dayma – The Story Behind DALL·E mini, the Viral Phenomenon


Check out this report by Boris about DALL-E mini:

https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini-Generate-images-from-any-text-prompt--VmlldzoyMDE4NDAy

https://wandb.ai/_scott/wandb_example/reports/Collaboration-in-ML-made-easy-with-W-B-Teams--VmlldzoxMjcwMDU5

https://twitter.com/weirddalle

Connect with Boris:

? Twitter: https://twitter.com/borisdayma

---

? Host: Lukas Biewald

? Producers: Cayla Sharp, Angelica Pan, Sanyam Bhutani, Lavanya Shukla

---

Subscribe and listen to our podcast today!

? Apple Podcasts: http://wandb.me/apple-podcasts

? Google Podcasts: http://wandb.me/google-podcasts

? Spotify: http://wandb.me/spotify

2022-06-17
Länk till avsnitt

Tristan Handy – The Work Behind the Data Work

Tristan Handy is CEO and founder of dbt Labs. dbt (data build tool) simplifies the data transformation workflow and helps organizations make better decisions.

Lukas and Tristan dive into the history of the modern data stack and the subsequent challenges that dbt was created to address; communities of identity and product-led growth; and thoughts on why SQL has survived and thrived for so long. Tristan also shares his hopes for the future of BI tools and the data stack.

Show notes (transcript and links): http://wandb.me/gd-tristan-handy

---

? Timestamps:

0:00 Intro

0:40 How dbt makes data transformation easier

4:52 dbt and avoiding bad data habits

14:23 Agreeing on organizational ground truths

19:04 Staying current while running a company

22:15 The origin story of dbt

26:08 Why dbt is conceptually simple but hard to execute

34:47 The dbt community and the bottom-up mindset

41:50 The future of data and operations

47:41 dbt and machine learning

49:17 Why SQL is so ubiquitous

55:20 Bridging the gap between the ML and data worlds

1:00:22 Outro

---

Connect with Tristan:

? Twitter: https://twitter.com/jthandy

? The Analytics Engineering Roundup: https://roundup.getdbt.com/

---

? Host: Lukas Biewald

? Producers: Cayla Sharp, Angelica Pan, Sanyam Bhutani, Lavanya Shukla

---

Subscribe and listen to our podcast today!

? Apple Podcasts: http://wandb.me/apple-podcasts

? Google Podcasts: http://wandb.me/google-podcasts

? Spotify: http://wandb.me/spotify

2022-06-09
Länk till avsnitt

Johannes Otterbach – Unlocking ML for Traditional Companies

Johannes Otterbach is VP of Machine Learning Research at Merantix Momentum, an ML consulting studio that helps their clients build AI solutions.

Johannes and Lukas talk about Johannes' background in physics and applications of ML to quantum computing, why Merantix is investing in creating a cloud-agnostic tech stack, and the unique challenges of developing and deploying models for different customers. They also discuss some of Johannes' articles on the impact of NLP models and the future of AI regulations.

Show notes (transcript and links): http://wandb.me/gd-johannes-otterbach

---

? Timestamps:

0:00 Intro

1:04 Quantum computing and ML applications

9:21 Merantix, Ventures, and ML consulting

19:09 Building a cloud-agnostic tech stack

24:40 The open source tooling ecosystem

30:28 Handing off models to customers

31:42 The impact of NLP models on the real world

35:40 Thoughts on AI and regulation

40:10 Statistical physics and optimization problems

42:50 The challenges of getting high-quality data

44:30 Outro

---

Connect with Johannes:

? Twitter: https://twitter.com/jsotterbach

? Personal website: http://jotterbach.github.io/

? Careers at Merantix Momentum: https://merantix-momentum.com/about#jobs

---

? Host: Lukas Biewald

? Producers: Cayla Sharp, Angelica Pan, Sanyam Bhutani, Lavanya Shukla

---

Subscribe and listen to our podcast today!

? Apple Podcasts: http://wandb.me/apple-podcasts

? Google Podcasts: http://wandb.me/google-podcasts

? Spotify: http://wandb.me/spotify

2022-05-12
Länk till avsnitt

Mircea Neagovici – Robotic Process Automation (RPA) and ML

Mircea Neagovici is VP, AI and Research at UiPath, where his team works on task mining and other ways of combining robotic process automation (RPA) with machine learning for their B2B products.

Mircea and Lukas talk about the challenges of allowing customers to fine-tune their models, the trade-offs between traditional ML and more complex deep learning models, and how Mircea transitioned from a more traditional software engineering role to running a machine learning organization.

Show notes (transcript and links): http://wandb.me/gd-mircea-neagovici

---

? Timestamps:

0:00 Intro

1:05 Robotic Process Automation (RPA)

4:20 RPA and machine learning at UiPath

8:20 Fine-tuning & PyTorch vs TensorFlow

14:50 Monitoring models in production

16:33 Task mining

22:37 Trade-offs in ML models

29:45 Transitioning from software engineering to ML

34:02 ML teams vs engineering teams

40:41 Spending more time on data

43:55 The organizational machinery behind ML models

45:57 Outro

---

Connect with Mircea:

? LinkedIn: https://www.linkedin.com/in/mirceaneagovici/

? Careers at UiPath: https://www.uipath.com/company/careers

---

? Host: Lukas Biewald

? Producers: Cayla Sharp, Angelica Pan, Sanyam Bhutani, Lavanya Shukla

2022-04-21
Länk till avsnitt

Jensen Huang – NVIDIA's CEO on the Next Generation of AI and MLOps

Jensen Huang is founder and CEO of NVIDIA, whose GPUs sit at the heart of the majority of machine learning models today.

Jensen shares the story behind NVIDIA's expansion from gaming to deep learning acceleration, leadership lessons that he's learned over the last few decades, and why we need a virtual world that obeys the laws of physics (aka the Omniverse) in order to take AI to the next era. Jensen and Lukas also talk about the singularity, the slow-but-steady approach to building a new market, and the importance of MLOps.

The complete show notes (transcript and links) can be found here: http://wandb.me/gd-jensen-huang

---

? Timestamps:

0:00 Intro

0:50 Why NVIDIA moved into the deep learning space

7:33 Balancing the compute needs of different audiences

10:40 Quantum computing, Huang's Law, and the singularity

15:53 Democratizing scientific computing

20:59 How Jensen stays current with technology trends

25:10 The global chip shortage

27:00 Leadership lessons that Jensen has learned

32:32 Keeping a steady vision for NVIDIA

35:48 Omniverse and the next era of AI

42:00 ML topics that Jensen's excited about

45:05 Why MLOps is vital

48:38 Outro

---

Subscribe and listen to our podcast today!

? Apple Podcasts: http://wandb.me/apple-podcasts

? Google Podcasts: http://wandb.me/google-podcasts

? Spotify: http://wandb.me/spotify

2022-03-03
Länk till avsnitt

Peter & Boris – Fine-tuning OpenAI's GPT-3

Peter Welinder is VP of Product & Partnerships at OpenAI, where he runs product and commercialization efforts of GPT-3, Codex, GitHub Copilot, and more. Boris Dayma is Machine Learning Engineer at Weights & Biases, and works on integrations and large model training.

Peter, Boris, and Lukas dive into the world of GPT-3:

- How people are applying GPT-3 to translation, copywriting, and other commercial tasks

- The performance benefits of fine-tuning GPT-3

- Developing an API on top of GPT-3 that works out of the box, but is also flexible and customizable

They also discuss the new OpenAI and Weights & Biases collaboration, which enables a user to log their GPT-3 fine-tuning projects to W&B with a single line of code.

The complete show notes (transcript and links) can be found here: http://wandb.me/gd-peter-and-boris

---

Connect with Peter & Boris:

? Peter's Twitter: https://twitter.com/npew

? Boris' Twitter: https://twitter.com/borisdayma

---

? Timestamps:

0:00 Intro

1:01 Solving real-world problems with GPT-3

6:57 Applying GPT-3 to translation tasks

14:58 Copywriting and other commercial GPT-3 applications

20:22 The OpenAI API and fine-tuning GPT-3

28:22 Logging GPT-3 fine-tuning projects to W&B

38:25 Engineering challenges behind OpenAI's API

43:15 Outro

---

Subscribe and listen to our podcast today!

? Apple Podcasts: http://wandb.me/apple-podcasts

? Google Podcasts: http://wandb.me/google-podcasts

? Spotify: http://wandb.me/spotify

2022-02-10
Länk till avsnitt

Ion Stoica – Spark, Ray, and Enterprise Open Source

Ion Stoica is co-creator of the distributed computing frameworks Spark and Ray, and co-founder and Executive Chairman of Databricks and Anyscale. He is also a Professor of computer science at UC Berkeley and Principal Investigator of RISELab, a five-year research lab that develops technology for low-latency, intelligent decisions.

Ion and Lukas chat about the challenges of making a simple (but good!) distributed framework, the similarities and differences between developing Spark and Ray, and how Spark and Ray led to the formation of Databricks and Anyscale. Ion also reflects on the early startup days, from deciding to commercialize to picking co-founders, and shares advice on building a successful company.

The complete show notes (transcript and links) can be found here: http://wandb.me/gd-ion-stoica

---

Timestamps:

0:00 Intro

0:56 Ray, Anyscale, and making a distributed framework

11:39 How Spark informed the development of Ray

18:53 The story behind Spark and Databricks

33:00 Why TensorFlow and PyTorch haven't monetized

35:35 Picking co-founders and other startup advice

46:04 The early signs of sky computing

49:24 Breaking problems down and prioritizing

53:17 Outro

---

Subscribe and listen to our podcast today!

- Apple Podcasts: http://wandb.me/apple-podcasts

- Google Podcasts: http://wandb.me/google-podcasts

- Spotify: http://wandb.me/spotify

2022-01-20

Stephan Fabel ? Efficient Supercomputing with NVIDIA's Base Command Platform

Stephan Fabel is Senior Director of Infrastructure Systems & Software at NVIDIA, where he works on Base Command, a software platform to coordinate access to NVIDIA's DGX SuperPOD infrastructure.

Lukas and Stephan talk about why having a supercomputer is one thing but using it effectively is another, why a deeper understanding of hardware on the practitioner level is becoming more advantageous, and which areas of the ML tech stack NVIDIA is looking to expand into.

The complete show notes (transcript and links) can be found here: http://wandb.me/gd-stephan-fabel

---

Timestamps:

0:00 Intro

1:09 NVIDIA Base Command and DGX SuperPOD

10:33 The challenges of multi-node processing at scale

18:35 Why it's hard to use a supercomputer effectively

25:14 The advantages of de-abstracting hardware

29:09 Understanding Base Command's product-market fit

36:59 Data center infrastructure as a value center

42:13 Base Command's role in tech stacks

47:16 Why crowdsourcing is underrated

49:24 The challenges of scaling beyond a POC

51:39 Outro

---

Subscribe and listen to our podcast today!

- Apple Podcasts: http://wandb.me/apple-podcasts

- Google Podcasts: http://wandb.me/google-podcasts

- Spotify: http://wandb.me/spotify

2022-01-06

Chris Padwick ? Smart Machines for More Sustainable Farming

Chris Padwick is Director of Computer Vision Machine Learning at Blue River Technology, a subsidiary of John Deere. Their core product, See & Spray, is a weeding robot that identifies crops and weeds in order to spray only the weeds with herbicide.

Chris and Lukas dive into the challenges of bringing See & Spray to life, from the hard computer vision problem of classifying weeds from crops, to the engineering feat of building and updating embedded systems that can survive on a farming machine in the field. Chris also explains why user feedback is crucial, and shares some of the surprising product insights he's gained from working with farmers.

The complete show notes (transcript and links) can be found here: http://wandb.me/gd-chris-padwick

---

Connect with Chris:

- LinkedIn: https://www.linkedin.com/in/chris-padwick-75b5761/

- Blue River on Twitter: https://twitter.com/BlueRiverTech

---

Timestamps:

0:00 Intro

1:09 How does See & Spray reduce herbicide usage?

9:15 Classifying weeds and crops in real time

17:45 Insights from deployment and user feedback

29:08 Why weed and crop classification is surprisingly hard

37:33 Improving and updating models in the field

40:55 Blue River's ML stack

44:55 Autonomous tractors and upcoming directions

48:05 Why data pipelines are underrated

52:10 The challenges of scaling software & hardware

54:44 Outro

55:55 Bonus: Transporters and the singularity

---

Subscribe and listen to our podcast today!

- Apple Podcasts: http://wandb.me/apple-podcasts

- Google Podcasts: http://wandb.me/google-podcasts

- Spotify: http://wandb.me/spotify

2021-12-23

Kathryn Hume ? Financial Models, ML, and 17th-Century Philosophy

Kathryn Hume is Vice President Digital Investments Technology at the Royal Bank of Canada (RBC). At the time of recording, she was Interim Head of Borealis AI, RBC's research institute for machine learning.

Kathryn and Lukas talk about ML applications in finance, from building a personal finance forecasting model to applying reinforcement learning to trade execution, and take a philosophical detour into the 17th century as they speculate on what Newton and Descartes would have thought about machine learning.

The complete show notes (transcript and links) can be found here: http://wandb.me/gd-kathryn-hume

---

Connect with Kathryn:

- Twitter: https://twitter.com/humekathryn

- Website: https://quamproxime.com/

---

Timestamps:

0:00 Intro

0:54 Building a personal finance forecasting model

10:54 Applying RL to trade execution

18:55 Transparent financial models and fairness

26:20 Semantic parsing and building a text-to-SQL interface

29:20 From comparative literature and math to product

37:33 What would Newton and Descartes think about ML?

44:15 On sentient AI and transporters

47:33 Why causal inference is under-appreciated

49:25 The challenges of integrating models into the business

51:45 Outro

---

Subscribe and listen to our podcast today!

- Apple Podcasts: http://wandb.me/apple-podcasts

- Google Podcasts: http://wandb.me/google-podcasts

- Spotify: http://wandb.me/spotify

2021-12-16

Sean & Greg ? Biology and ML for Drug Discovery

Sean McClain is the founder and CEO, and Gregory Hannum is the VP of AI Research at Absci, a biotech company that's using deep learning to expedite drug discovery and development.

Lukas, Sean, and Greg talk about why Absci started investing so heavily in ML research (it all comes back to the data), what it'll take to build the GPT-3 of DNA, and where the future of pharma is headed. Sean and Greg also share some of the challenges of building cross-functional teams and combining two highly specialized fields like biology and ML.

The complete show notes (transcript and links) can be found here: http://wandb.me/gd-sean-and-greg

---

Connect with Sean and Greg:

- Sean's Twitter: https://twitter.com/seanrmcclain

- Greg's Twitter: https://twitter.com/gregory_hannum

- Absci's Twitter: https://twitter.com/abscibio

---

Timestamps:

0:00 Intro

0:53 How Absci merges biology and AI

11:24 Why Absci started investing in ML

19:00 Creating the GPT-3 of DNA

25:34 Investing in data collection and in ML teams

33:14 Clinical trials and Absci's revenue structure

38:17 Combining knowledge from different domains

45:22 The potential of multitask learning

50:43 Why biological data is tricky to work with

55:00 Outro

---

Subscribe and listen to our podcast today!

- Apple Podcasts: http://wandb.me/apple-podcasts

- Google Podcasts: http://wandb.me/google-podcasts

- Spotify: http://wandb.me/spotify

2021-12-02

Chris, Shawn, and Lukas ? The Weights & Biases Journey

You might know him as the host of Gradient Dissent, but Lukas is also the CEO of Weights & Biases, a developer-first ML tools platform!

In this special episode, the three W&B co-founders, Chris (CVP), Shawn (CTO), and Lukas (CEO), sit down to tell the company's origin stories, reflect on the highs and lows, and give advice to engineers looking to start their own business.

Chris reveals the W&B server architecture (tl;dr - React + GraphQL), Shawn shares his favorite product feature (it's a hidden frontend layer), and Lukas explains why it's so important to work with customers that inspire you.

The complete show notes (transcript and links) can be found here: http://wandb.me/gd-wandb-cofounders

---

Connect with us:

- Chris' Twitter: https://twitter.com/vanpelt

- Shawn's Twitter: https://twitter.com/shawnup

- Lukas' Twitter: https://twitter.com/l2k

- W&B's Twitter: https://twitter.com/weights_biases

---

Timestamps:

0:00 Intro

1:29 The stories behind Weights & Biases

7:45 The W&B tech stack

9:28 Looking back at the beginning

11:42 Hallmark moments

14:49 Favorite product features

16:49 Rewriting the W&B backend

18:21 The importance of customer feedback

21:18 How Chris and Shawn have changed

22:35 How the ML space has changed

28:24 Staying positive when things look bleak

32:19 Lukas' advice to new entrepreneurs

35:29 Hopes for the next five years

38:09 Making a paintbot & model understanding

41:30 Biggest bottlenecks in deployment

44:08 Outro

44:38 Bonus: Under- vs overrated technologies

---

Subscribe and listen to our podcast today!

- Apple Podcasts: http://wandb.me/apple-podcasts

- Google Podcasts: http://wandb.me/google-podcasts

- Spotify: http://wandb.me/spotify

2021-11-05

Pete Warden ? Practical Applications of TinyML

Pete is the Technical Lead of the TensorFlow Micro team, which works on deep learning for mobile and embedded devices.

Lukas and Pete talk about hacking a Raspberry Pi to run AlexNet, the power and size constraints of embedded devices, and techniques to reduce model size. Pete also explains real world applications of TensorFlow Lite Micro and shares what it's been like to work on TensorFlow from the beginning.

The complete show notes (transcript and links) can be found here: http://wandb.me/gd-pete-warden

---

Connect with Pete:

- Twitter: https://twitter.com/petewarden

- Website: https://petewarden.com/

---

Timestamps:

0:00 Intro

1:23 Hacking a Raspberry Pi to run neural nets

13:50 Model and hardware architectures

18:56 Training a magic wand

21:47 Raspberry Pi vs Arduino

27:51 Reducing model size

33:29 Training on the edge

39:47 What it's like to work on TensorFlow

47:45 Improving datasets and model deployment

53:05 Outro

---

Subscribe and listen to our podcast today!

- Apple Podcasts: http://wandb.me/apple-podcasts

- Google Podcasts: http://wandb.me/google-podcasts

- Spotify: http://wandb.me/spotify

2021-10-21

Pieter Abbeel ? Robotics, Startups, and Robotics Startups

Pieter is the Chief Scientist and Co-founder at Covariant, where his team is building universal AI for robotic manipulation. Pieter also hosts The Robot Brains Podcast, in which he explores how far humanity has come in its mission to create conscious computers, mindful machines, and rational robots.

Lukas and Pieter explore the state of affairs of robotics in 2021, the challenges of achieving consistency and reliability, and what it'll take to make robotics more ubiquitous. Pieter also shares some perspective on entrepreneurship, from how he knew it was time to commercialize Gradescope to what he looks for in co-founders to why he started Covariant.

Show notes: http://wandb.me/gd-pieter-abbeel

---

Connect with Pieter:

- Twitter: https://twitter.com/pabbeel

- Website: https://people.eecs.berkeley.edu/~pabbeel/

- The Robot Brains Podcast: https://www.therobotbrains.ai/

---

Timestamps:

0:00 Intro

1:15 The challenges of robotics

8:10 Progress in robotics

13:34 Imitation learning and reinforcement learning

21:37 Simulated data, real data, and reliability

27:53 The increasing capabilities of robotics

36:23 Entrepreneurship and co-founding Gradescope

44:35 The story behind Covariant

47:50 Pieter's communication tips

52:13 What Pieter's currently excited about

55:08 Focusing on good UI and high reliability

57:01 Outro

2021-10-07

Chris Albon ? ML Models and Infrastructure at Wikimedia

In this episode we're joined by Chris Albon, Director of Machine Learning at the Wikimedia Foundation.

Lukas and Chris talk about Wikimedia's approach to content moderation, what it's like to work in a place so transparent that even internal chats are public, how Wikimedia uses machine learning (spoiler: they build a lot of models to help editors), and why they're switching to Kubeflow and Docker. Chris also shares how his focus on outcomes has shaped his career and his approach to technical interviews.

Show notes: http://wandb.me/gd-chris-albon

---

Connect with Chris:

- Twitter: https://twitter.com/chrisalbon

- Website: https://chrisalbon.com/

---

Timestamps:

0:00 Intro

1:08 How Wikimedia approaches moderation

9:55 Working in the open and embracing humility

16:08 Going down Wikipedia rabbit holes

20:03 How Wikimedia uses machine learning

27:38 Wikimedia's ML infrastructure

42:56 How Chris got into machine learning

46:43 Machine Learning Flashcards and technical interviews

52:10 Low-power models and MLOps

55:58 Outro

2021-09-23

Emily M. Bender ? Language Models and Linguistics

In this episode, Emily and Lukas dive into the problems with bigger and bigger language models, the difference between form and meaning, the limits of benchmarks, and why it's important to name the languages we study.

Show notes (links to papers and transcript): http://wandb.me/gd-emily-m-bender

---

Emily M. Bender is a Professor of Linguistics and Faculty Director of the Master's Program in Computational Linguistics at the University of Washington. Her research areas include multilingual grammar engineering, variation (within and across languages), the relationship between linguistics and computational linguistics, and societal issues in NLP.

---

Timestamps:

0:00 Sneak peek, intro

1:03 Stochastic Parrots

9:57 The societal impact of big language models

16:49 How language models can be harmful

26:00 The important difference between linguistic form and meaning

34:40 The octopus thought experiment

42:11 Language acquisition and the future of language models

49:47 Why benchmarks are limited

54:38 Ways of complementing benchmarks

1:01:20 The #BenderRule

1:03:50 Language diversity and linguistics

1:12:49 Outro

2021-09-09

Jeff Hammerbacher ? From data science to biomedicine

Jeff talks about building Facebook's early data team, founding Cloudera, and transitioning into biomedicine with Hammer Lab and Related Sciences.

(Read more: http://wandb.me/gd-jeff-hammerbacher)

---

Jeff Hammerbacher is a scientist, software developer, entrepreneur, and investor. Jeff's current work focuses on drug discovery at Related Sciences, a biotech venture creation firm that he co-founded in 2020.

Prior to his work at Related Sciences, Jeff was the Principal Investigator of Hammer Lab, a founder and the Chief Scientist of Cloudera, an Entrepreneur-in-Residence at Accel, and the manager of the Data team at Facebook.

---

Follow Gradient Dissent on Twitter: https://twitter.com/weights_biases

---

0:00 Sneak peek, intro

1:13 The start of Facebook's data science team

6:53 Facebook's early tech stack

14:20 Early growth strategies at Facebook

17:37 The origin story of Cloudera

24:51 Cloudera's success, in retrospect

31:05 Jeff's transition into biomedicine

38:38 Immune checkpoint blockade in cancer therapy

48:55 Data and techniques for biomedicine

53:00 Why Jeff created Related Sciences

56:32 Outro

2021-08-26

Josh Bloom ? The Link Between Astronomy and ML

Josh explains how astronomy and machine learning have informed each other, their current limitations, and where their intersection goes from here.

(Read more: http://wandb.me/gd-josh-bloom)

---

Josh is a Professor of Astronomy and Chair of the Astronomy Department at UC Berkeley. His research interests include the intersection of machine learning and physics, time-domain transient events, artificial intelligence, and optical/infrared instrumentation.

---

Follow Gradient Dissent on Twitter: https://twitter.com/weights_biases

---

0:00 Intro, sneak peek

1:15 How astronomy has informed ML

4:20 The big questions in astronomy today

10:15 On dark matter and dark energy

16:37 Finding life on other planets

19:55 Driving advancements in astronomy

27:05 Putting telescopes in space

31:05 Why Josh started using ML in his research

33:54 Crowdsourcing in astronomy

36:20 How ML has (and hasn't) informed astronomy

47:22 The next generation of cross-functional grad students

50:50 How Josh started coding

56:11 Incentives and maintaining research codebases

1:00:01 ML4Science's tech stack

1:02:11 Uncertainty quantification in a sensor-based world

1:04:28 Why it's not good to always get an answer

1:07:47 Outro

2021-08-20
How do you listen to podcasts?

A small service by I'm With Friends. Also available in English.
Updated with help from iTunes.