How AI Happens is a podcast featuring experts and practitioners explaining their work at the cutting edge of Artificial Intelligence. Tune in to hear AI Researchers, Data Scientists, ML Engineers, and the leaders of today’s most exciting AI companies explain the newest and most challenging facets of their field. Powered by Sama.
The podcast How AI Happens is created by Sama. The podcast and its artwork are embedded on this page using the public podcast feed (RSS).
We explore the current trends of AI-based solutions in retail, what has driven their adoption in the industry, and how AI-based customer service technology has improved over time. We also discuss the correct mix of technology and humans, the importance of establishing boundaries for AI, and why it won't replace humans but will augment their workflows. Hear examples of AI retail success stories, where companies got AI wrong, and the reasons behind the wins and failures. Gain insights into the value of copilots, business strategies to avoid investing in ineffective AI solutions, and much more. Tune in now!
Key Points From This Episode:
Quotes:
“I think [the evolution] in terms of accessibility to AI solutions for people who don't have the massive IT departments and massive data analytics departments is really remarkable.” — Mika Yamamoto [0:04:25]
“Whether it's generative AI for creative or content or whatever, it's not going to replace humans. It's going to augment our workflows.” — Lisa Avvocato [0:10:46]
“Retail is actually one of the fastest adopting industries out there [of] AI.” — Mika Yamamoto [0:14:17]
“Having conversations with peers, I think, is absolutely invaluable to figure out what's hype and what's reality [regarding AI].” — Mika Yamamoto [0:30:19]
Links Mentioned in Today’s Episode:
We hear about Nitzan’s AI expertise, motivation for joining eBay, and approach to implementing AI into eBay's business model. Gain insights into the impacts of centralizing and federating AI, leveraging generative AI to create personalized content, and why patience is essential to AI development. We also unpack eBay's approach to LLM development, tailoring AI tools for eBay sellers, the pitfalls of generic marketing content, and the future of AI in retail. Join us to discover how AI is revolutionizing e-commerce and disrupting the retail sector with Nitzan Mekel-Bobrov!
Key Points From This Episode:
Quotes:
“It’s tricky to balance the short-term wins with the long-term transformation.” — Nitzan Mekel-Bobrov [0:06:50]
“An experiment is only a failure if you haven’t learned anything yourself and – generated institutional knowledge from it.” — Nitzan Mekel-Bobrov [0:09:36]
“What's nice about [eBay's] business model — is that our incentive is to enable each seller to maintain their own uniqueness.” — Nitzan Mekel-Bobrov [0:27:33]
“The companies that will thrive in this AI transformation are the ones that can figure out how to marry parts of their current culture and what all of their talent brings with what the AI delivers.” — Nitzan Mekel-Bobrov [0:33:58]
Links Mentioned in Today’s Episode:
Satya unpacks how Unilever utilizes its database to inform its models and how to determine the right amount of data needed to solve complex problems. Dr. Wattamwar explains why contextual problem-solving is vital, the notion of time constraints in data science, the system point of view of modeling, and how Unilever incorporates AI into its models. Gain insights into how AI can increase operational efficiency, exciting trends in the AI space, how AI makes experimentation accessible, and more! Tune in to learn about the power of data science and AI with Dr. Satyajit Wattamwar.
Key Points From This Episode:
Quotes:
“Around – 30 or 40 years ago, people started realizing the importance of data-driven modeling because you can never capture physics perfectly in an equation.” — Dr. Satyajit Wattamwar [0:03:10]
“Having large volumes of data which are less related with each other is a different thing than a large volume of data for one problem.” — Dr. Satyajit Wattamwar [0:09:12]
“More data [does] not always lead to good quality models. Unless it is for the same use-case.” — Dr. Satyajit Wattamwar [0:11:56]
“If somebody is looking [to] grow in their career ladder, then it's not about one's own interest.” — Dr. Satyajit Wattamwar [0:24:07]
Links Mentioned in Today’s Episode:
Jing explains how Vanguard uses machine learning and reinforcement learning to deliver personalized "nudges," helping investors make smarter financial decisions. Jing dives into the importance of aligning AI efforts with Vanguard’s mission and discusses generative AI’s potential for boosting employee productivity while improving customer experiences. She also reveals how generative AI is poised to play a key role in transforming the company's future, all while maintaining strict data privacy standards.
Key Points From This Episode:
Quotes:
“We make sure all our AI work is aligned with [Vanguard’s] four pillars to deliver business impact.” — Jing Wang [0:08:56]
“We found those simple nudges have tremendous power in terms of guiding the investors to adopt the right things. And this year, we started to use a machine learning model to actually personalize those nudges.” — Jing Wang [0:19:39]
“Ultimately, we see that generative AI could help us to build more differentiated products. – We want to have AI be able to train language models [to have] much more of a Vanguard mindset.” — Jing Wang [0:29:22]
Links Mentioned in Today’s Episode:
Key Points From This Episode:
Quotes:
“I’ve spent the last 30 years in data. So, if there’s a database out there, whether it’s relational or object or XML or JSON, I’ve done something unspeakable to it at some point.” — @ramvzz [0:01:46]
“As people are getting more experienced with how they could apply GenAI to solve their problems, then they’re realizing that they do need to organize their data and that data is really important.” — @ramvzz [0:18:58]
“Following the technology and where it can go, there’s a lot of fun to be had with that.” — @ramvzz [0:23:29]
“Now that we can see how software development itself is evolving, I think that 12-year-old me would’ve built so many more cooler things than I did with all the tech that’s out here now.” — @ramvzz [0:29:14]
Links Mentioned in Today’s Episode:
Pascal & Yannick delve into the kind of human involvement SAM-2 needs before discussing the use cases it enables. Hear all about the importance of having realistic expectations of AI, what the cost of SAM-2 looks like, and the importance of humans in LLMs.
Key Points From This Episode:
Quotes:
“We’re kind of shifting towards more of a validation period than just annotating from scratch.” — Yannick Donnelly [0:22:01]
“Models have their place but they need to be evaluated.” — Yannick Donnelly [0:25:16]
“You’re never just using a model for the sake of using a model. You’re trying to solve something and you’re trying to improve a business metric.” — Pascal Jauffret [0:32:59]
“We really shouldn’t underestimate the human aspect of using models.” — Pascal Jauffret [0:40:08]
Links Mentioned in Today’s Episode:
Today we are joined by Siddhika Nevrekar, an experienced product leader passionate about solving complex problems in ML by bringing people and products together in an environment of trust. We unpack the state of free computing, the challenges of training AI models for the edge, what Siddhika hopes to achieve in her role at Qualcomm, and her methods for solving common industry problems that developers face.
Key Points From This Episode:
Quotes:
“Ultimately, we are constrained with the size of the device. It’s all physics. How much can you compress a small little chip to do what hundreds and thousands of chips can do which you can stack up in a cloud? Can you actually replicate that experience on the device?” — @siddhika_
“By the time I left Apple, we had 1000-plus [AI] models running on devices and 10,000 applications that were powered by AI on the device, exclusively on the device. Which means the model is entirely on the device and is not going into the cloud. To me, that was the realization that now the moment has arrived where something magical is going to start happening with AI and ML.” — @siddhika_
Links Mentioned in Today’s Episode:
Today we are joined by Developer Advocate at Block, Rizel Scarlett, who is here to explain how to bridge the gap between the technical and non-technical aspects of a business. We also learn about AI hallucinations and how Rizel and Block approach this particular pain point, the burdens of responsibility of AI users, why it’s important to make AI tools accessible to all, and the ins and outs of G{Code} House – a learning community for Indigenous women and women of color in tech. To end, Rizel explains what needs to be done to break down barriers to entry for the G{Code} population in tech, and she describes the ideal relationship between a developer advocate and the technical arm of a business.
Key Points From This Episode:
Quotes:
“Every company is embedding AI into their product someway somehow, so it’s being more embraced.” — @blackgirlbytes [0:11:37]
“I always respect someone that’s like, ‘I don’t know, but this is the closest I can get to it.’” — @blackgirlbytes [0:15:25]
“With AI tools, when you’re more specific, the results are more refined.” — @blackgirlbytes [0:16:29]
Links Mentioned in Today’s Episode:
Key Points From This Episode:
Quotes:
“Our observation was [that] there needs to be some sort of way to prepare and curate data sets inside of a cloud data warehouse. And there was nothing out there that could do that on [Amazon] Redshift, so we set out to build it.” — Drew Banin [0:02:18]
“One of the things we're thinking a ton about today is how AI and the semantic layer intersect.” — Drew Banin [0:08:49]
“I don't fundamentally think that LLMs are reasoning in the way that human beings reason.” — Drew Banin [0:15:36]
“My belief is that prompt engineering will – become less important – over time for most use cases. I just think that there are enough people that are not well versed in this skill that the people building LLMs will work really hard to solve that problem.” — Drew Banin [0:23:06]
Links Mentioned in Today’s Episode:
Understanding the Limitations of Mathematical Reasoning in Large Language Models
Drew Banin on LinkedIn
dbt Labs
In this episode, you’ll hear about Meeri's incredible career, insights from the recent AI Pact conference she attended, her company's involvement, and how we can articulate the reality of holding companies accountable to AI governance practices. We discuss how to know if you have an AI problem, what makes third-party generative AI more risky, and so much more! Meeri even shares how she thinks the EU AI Act will impact AI companies and what companies can do to take stock of their risk factors and ensure that they are building responsibly. You don’t want to miss this one, so be sure to tune in now!
Key Points From This Episode:
Quotes:
“It’s best to work with companies who know that they already have a problem.” — @meerihaataja [0:09:58]
“Third-party risks are way bigger in the context of [generative AI].” — @meerihaataja [0:14:22]
“Use and use-context-related risks are the major source of risks.” — @meerihaataja [0:17:56]
“Risk is fine if it’s on an acceptable level. That’s what governance seeks to do.” — @meerihaataja [0:21:17]
Links Mentioned in Today’s Episode:
In this episode, Dr. Zoldi offers insight into the transformative potential of blockchain for ensuring transparency in AI development, the critical need for explainability over mere predictive power, and how FICO maintains trust in its AI systems through rigorous model development standards. We also delve into the essential integration of data science and software engineering teams, emphasizing that collaboration from the outset is key to operationalizing AI effectively.
Key Points From This Episode:
Quotes:
“I have to stay ahead of where the industry is moving and plot out the directions for FICO in terms of where AI and machine learning is going – [Being an inventor is critical for] being effective as a chief analytics officer.” — @ScottZoldi [0:01:53]
“[AI and machine learning] is software like any other type of software. It's just software that learns by itself and, therefore, we need [stricter] levels of control.” — @ScottZoldi [0:23:59]
“Data scientists and AI scientists need to have partners in software engineering. That's probably the number one reason why [companies fail during the operationalization process].” — @ScottZoldi [0:29:02]
Links Mentioned in Today’s Episode:
Jay breaks down the critical role of software optimizations and how they drive performance gains in AI, highlighting the importance of reducing inefficiencies in hardware. He also discusses the long-term vision for Lemurian Labs and the broader future of AI, pointing to the potential breakthroughs that could redefine industries and accelerate innovation, plus a whole lot more.
Key Points From This Episode:
Quotes:
“Every single problem I've tried to pick up has been one that – most people have considered as being almost impossible. There’s something appealing about that.” — Jay Dawani [0:02:58]
“No matter how good of an idea you put out into the world, most people don't have the motivation to go and solve it. You have to have an insane amount of belief and optimism that this problem is solvable, regardless of how much time it's going to take.” — Jay Dawani [0:07:14]
“If the world's just betting on one company, then the amount of compute you can have available is pretty limited. But if there's a lot of different kinds of compute that are slightly optimized with different resources, making them accessible allows us to get there faster.” — Jay Dawani [0:19:36]
“Basically what we're trying to do [at Lemurian Labs] is make it easy for programmers to get [the best] performance out of any hardware.” — Jay Dawani [0:20:57]
Links Mentioned in Today’s Episode:
Melissa explains the importance of giving developers the choice of working with open source or proprietary options, experimenting with flexible application models, and choosing the size of your model according to the use case you have in mind. Discussing the democratization of technology, we explore common challenges in the context of AI including the potential of generative AI versus the challenge of its implementation, where true innovation lies, and what Melissa is most excited about seeing in the future.
Key Points From This Episode:
Quotes:
“One of the things that is true about software in general is that the role that open source plays within the ecosystem has dramatically shifted and accelerated technology development at large.” — @melisevers [0:03:02]
“It’s important for all citizens of the open source community, corporate or not, to understand and own their responsibilities with regard to the hard work of driving the technology forward.” — @melisevers [0:05:18]
“We believe that innovation is best served when folks have the tools at their disposal on which to innovate.” — @melisevers [0:09:38]
“I think the focus for open source broadly should be on the elements that are going to be commodified.” — @melisevers [0:25:04]
Links Mentioned in Today’s Episode:
VP of AI and ML at Synopsys, Thomas Andersen joins us to discuss designing AI chips. Tuning in, you’ll hear all about our guest’s illustrious career, how he became interested in technology, what it was like growing up in East Germany, the tech scene there, and so much more! We delve into his company, Synopsys, and the chips they build before discussing his role in building algorithms.
Key Points From This Episode:
Quotes:
“It’s not really the technology that makes life great, it’s how you use it, and what you make of it.” — Thomas Andersen [0:07:31]
“There [are], of course, a lot of opportunities to use AI in chip design.” — Thomas Andersen [0:25:39]
“Be bold, try as many new things [as you can, and] make sure you use the right approach for the right tasks.” — Thomas Andersen [0:40:09]
Links Mentioned in Today’s Episode:
Developing AI and generative AI initiatives demands significant investment, and without delivering on customer satisfaction, these costs can be tough to justify. Today, SVP of Engineering and General Manager of Xactly India, Kandarp Desai joins us to discuss Xactly's AI initiatives and why customer satisfaction remains their top priority.
Key Points From This Episode:
Quotes:
“[Generative AI] is only useful if it drives higher customer satisfaction. Otherwise, it doesn't matter.” — Kandarp Desai [0:11:36]
“Justifying the ROI of anything is hard – If you can tie any new invention back to its ROI in customer satisfaction, that can drive an easy sell across an organization.” — Kandarp Desai [0:15:35]
“The whole AI trend is overhyped in the short term and underhyped long term. [It’s experienced an] oversell recently, and people are still trying to figure it out.” — Kandarp Desai [0:20:48]
Links Mentioned in Today’s Episode:
Srujana is Vice President and Group Director at Walmart’s Machine Learning Center of Excellence and is an experienced and respected AI, machine learning, and data science professional. She has a strong background in developing AI and machine learning models, with expertise in natural language processing, deep learning, and data-driven decision-making. Srujana has worked in various capacities in the tech industry, contributing to advancing AI technologies and their applications in solving complex problems. In our conversation, we unpack the trends shaping AI governance, the importance of consumer data protection, and the role of human-centered AI. Explore why upskilling the workforce is vital, the potential impact AI could have on white-collar jobs, and which roles AI cannot replace. We discuss the interplay between bias and transparency, the role of governments in creating AI development guardrails, and how the regulatory framework has evolved. Join us to learn about the essential considerations of deploying algorithms at scale, striking a balance between latency and accuracy, the pros and cons of generative AI, and more.
Key Points From This Episode:
Quotes:
“By deploying [biased] algorithms we may be going ahead and causing some unintended consequences.” — @Srujanadev [0:03:11]
“I think it is extremely important to have the right regulations and guardrails in place.” — @Srujanadev [0:11:32]
“Just using generative AI for the sake of it is not necessarily a great idea.” — @Srujanadev [0:25:27]
“I think there are a lot of applications in terms of how generative AI can be used but not everybody is seeing the return on investment.” — @Srujanadev [0:27:12]
Links Mentioned in Today’s Episode:
Srujana Kaddevarmuth on LinkedIn
Our guest goes on to share the different kinds of research they use for machine learning development before explaining why he is more conservative when it comes to driving generative AI use cases. He even shares some examples of generative use cases he feels are worthwhile. We hear about how these changes will benefit all UPS customers and how they avoid sharing private and non-compliant information with chatbots. Finally, Sunzay shares some advice for anyone wanting to become a leader in the tech world.
Key Points From This Episode:
Quotes:
“There’s a lot of complexities in the kind of global operations we are running on a day-to-day basis [at UPS].” — Sunzay Passari [0:04:35]
“There is no magic wand – so it becomes very important for us to better our resources at the right time in the right initiative.” — Sunzay Passari [0:09:15]
“Keep learning on a daily basis, keep experimenting and learning, and don’t be afraid of the failures.” — Sunzay Passari [0:22:48]
Links Mentioned in Today’s Episode:
Martin shares what reinforcement learning does differently in executing complex tasks, overcoming feedback loops in reinforcement learning, the pitfalls of typical agent-based learning methods, and how being a robotic soccer champion exposed the value of deep learning. We unpack the advantages of deep learning over modeling agent approaches, how finding a solution can inspire a solution in an unrelated field, and why he is currently focusing on data efficiency. Gain insights into the trade-offs between exploration and exploitation, how Google DeepMind is leveraging large language models for data efficiency, the potential risk of using large language models, and much more.
Key Points From This Episode:
Quotes:
“You really want to go all the way down to learn the direct connections to actions only via learning [for training AI].” — Martin Riedmiller [0:07:55]
“I think engineers often work with analogies or things that they have learned from different [projects].” — Martin Riedmiller [0:11:16]
“[With reinforcement learning], you are spending the precious real robots time only on things that you don’t know and not on the things you probably already know.” — Martin Riedmiller [0:17:04]
“We have not achieved AGI (Artificial General Intelligence) until we have removed the human completely out of the loop.” — Martin Riedmiller [0:21:42]
Links Mentioned in Today’s Episode:
Jia shares the kinds of AI courses she teaches at Stanford, how students are receiving machine learning education, and the impact of AI agents, as well as understanding technical boundaries, being realistic about the limitations of AI agents, and the importance of interdisciplinary collaboration. We also delve into how Jia prioritizes latency at LiveX before finding out how machine learning has changed the way people interact with agents; both human and AI.
Key Points From This Episode:
Quotes:
“[The field of AI] is advancing so fast every day.” — Jia Li [0:03:05]
“It is very important to have more sharing and collaboration within the [AI field].” — Jia Li [0:12:40]
“Having an efficient algorithm [and] having efficient hardware and software optimization is really valuable.” — Jia Li [0:14:42]
Links Mentioned in Today’s Episode:
Key Points From This Episode:
Quotes:
“Sometimes, people are very bad at asking for what they want. If you do any stint in, particularly, the more hardcore sales jobs out there, it's one of the things you're going to have to learn how to do to survive. You have to be uncomfortable and learn how to ask for things.” — @Reidoutloud_ [0:05:07]
“In order to really start to drive the accuracy of [our AI models], we needed to understand, what were users trying to do with this?” — @Reidoutloud_ [0:15:34]
“The people who [are] being enabled the most with AI in the current stage are the technical tinkerers. I think a lot of these tools are too technical for average knowledge workers.” — @Reidoutloud_ [0:28:32]
“Quick advice for anyone listening to this, do not start a company when you have your first kid! Horrible idea.” — @Reidoutloud_ [0:29:28]
Links Mentioned in Today’s Episode:
In this episode of How AI Happens, Justin explains how his project, Wondr Search, injects creativity into AI in a way that doesn’t alienate creators. You’ll learn how this new form of AI uses evolutionary algorithms (EAs) and differential evolution (DE) to generate music without learning from or imitating existing creative work. We also touch on the success of the six songs created by Wondr Search, why AI will never fully replace artists, and so much more. For a fascinating conversation at the intersection of art and AI, be sure to tune in today!
Key Points From This Episode:
Quotes:
“[Wondr Search] is definitely not an effort to stand up against generative AI that uses traditional ML methods. I use those a lot and there’s going to be a lot of good that comes from those – but I also think there’s going to be a market for more human-centric generative methods.” — Justin Kilb [0:06:12]
“The definition of intelligence continues to change as [humans and artificial systems] progress.” — Justin Kilb [0:24:29]
“As we make progress, people can access [AI] everywhere as long as they have an internet connection. That's exciting because you see a lot of people doing a lot of great things.” — Justin Kilb [0:26:06]
Links Mentioned in Today’s Episode:
Jacob shares how Gong uses AI, how it empowers its customers to build their own models, and how this ease of access for users holds the promise of a brighter future. We also learn more about the inner workings of Gong and how it trains its own models, why it’s not too interested in tracking soft skills right now, what we need to be doing more of to build more trust in chatbots, and our guest’s summation of why technology is advancing like a runaway train.
Key Points From This Episode:
Quotes:
“We don’t expect our customers to suddenly become data scientists and learn about modeling and everything, so we give them a very intuitive, relatively simple environment in which they can define their own models.” — @eckely [0:07:03]
“[Data] is not a huge obstacle to adopting smart trackers.” — @eckely [0:12:13]
“Our current vibe is there’s a limit to this technology. We are still unevolved apes.” — @eckely [0:16:27]
Links Mentioned in Today’s Episode:
Bobak further opines on the pros and cons of Perplexity and GPT 4.0, why the technology uses both models, and how the two differ. Finally, our guest tells us why Brilliant Labs is open-source and reminds us why public participation is so important.
Key Points From This Episode:
Quotes:
“To have a second pair of eyes that can connect everything we see with all the information on the web and everything we’ve seen previously – is an incredible thing.” — @btavangar [0:13:12]
“For live web search, Perplexity – is the most precise [and] it gives the most meaningful answers from the live web.” — @btavangar [0:26:40]
“The [AI] space is changing so fast. It’s exciting [and] it’s good for all of us but we don’t believe you should ever be locked to one model or another.” — @btavangar [0:28:45]
Links Mentioned in Today’s Episode:
Andrew shares how generative AI is used by academic institutions, why employers and educators need to curb their fear of AI, what we need to consider for using AI responsibly, and the ins and outs of Andrew’s podcast, Insight x Design.
Key Points From This Episode:
Quotes:
“Once I learned about lakehouses and Apache Iceberg and how you can just do all of your work on top of the data lake itself, it really made my life a lot easier with doing real-time analytics.” — @insightsxdesign [0:04:24]
“Data analysts have always been expected to be technical, but now, given the rise of the amount of data that we’re dealing with and the limitations of data engineering teams and their capacity, data analysts are expected to do a lot more data engineering.” — @insightsxdesign [0:07:49]
“Keeping it simple and short is ideal when dealing with AI.” — @insightsxdesign [0:12:58]
“The purpose of higher education isn’t to get a piece of paper, it’s to learn something and to gain new skills.” — @insightsxdesign [0:17:35]
Links Mentioned in Today’s Episode:
Tom shares further thoughts on venture capital financing for AI tech and whether or not data centers pose a threat to the relevance of the Cloud, as well as his predictions for the future of GPUs and much more.
Key Points From This Episode:
Quotes:
“Innovation is happening at such a deep technological level and that is at the core of machine learning models.” — @tomastungusz [0:03:37]
“Right now, we’re looking at where [is] there rote work or human toil that can be repeated with AI? That’s one big question where there’s not a really big incumbent.” — @tomastungusz [0:05:51]
“If you are the leader of a team or a department or a business unit or a company, you can not be in a position where you are caught off guard by AI. You need to be on the forefront.” — @tomastungusz [0:08:30]
“The dominant dynamic within consumer products is the least friction in a user experience always wins.” — @tomastungusz [0:14:05]
Links Mentioned in Today’s Episode:
Kordel is the CTO and Founder of Theta Diagnostics, and today he joins us to discuss the work he is doing to develop a sense of smell in AI. We discuss the current and future use cases they’ve been working on, the advancements they’ve made, and how to answer the question “What is smell?” in the context of AI. Kordel also provides a breakdown of their software program Alchemy, their approach to collecting and interpreting data on scents, and how he plans to help machines recognize the context for different smells. To learn all about the fascinating work that Kordel is doing in AI and the science of smell, be sure to tune in!
Key Points From This Episode:
Quotes:
“I became interested in machine smell because I didn't see a lot of work being done on that.” — @kordelkfrance [0:08:25]
“There's a lot of people that argue we can't actually achieve human-level intelligence until we've incorporated all five senses into an artificial being.” — @kordelkfrance [0:08:36]
“To me, a smell is a collection of compounds that represent something that we can recognize. A pattern that we can recognize.” — @kordelkfrance [0:17:28]
“Right now we have about three dozen to four dozen compounds that we can with confidence detect.” — @kordelkfrance [0:19:04]
“[Our autonomous gas system] is really this interesting system that's hooked up to a bunch of machine learning, that helps calibrate and detect and determine what a smell looks like for a specific use case and breaking that down into its constituent compounds.” — @kordelkfrance [0:23:20]
“The success of our device is not just the sensing technology, but also the ability of Alchemy [our software program] to go in and make sense of all of these noise patterns and just make sense of the signals themselves.” — @kordelkfrance [0:25:41]
Links Mentioned in Today’s Episode:
After describing the work done at StoneX and her role at the organization, Elettra explains what drew her to neural networks, defines data science and how she overcame the challenges of learning something new on the job, breaks down what a data scientist needs to succeed, and shares her thoughts on why many still don’t fully understand the industry. Our guest also tells us how she identifies an inadequate data set, the recent innovations that are under construction at StoneX, how to ensure that your AI and ML models are compliant, and the importance of understanding AI as a mere tool to help you solve a problem.
Key Points From This Episode:
Quotes:
“The best thing that you can have as a data scientist to be set up for success is to have a decent data warehouse.” — Elettra Damaggio [0:09:17]
“I am very much an introverted person. With age, I learned how to talk to people, but that wasn’t [always] the case.” — Elettra Damaggio [0:12:38]
“In reality, the hard part is to get to the data set – and the way you get to that data set is by being curious about the business you’re working with.” — Elettra Damaggio [0:13:58]
“[First], you need to have an idea of what is doable, what is not doable, [and] more importantly, what might solve the problem that [the client may] have, and then you can have a conversation with them.” — Elettra Damaggio [0:19:58]
“AI and ML is not the goal; it’s the tool. The goal is solving the problem.” — Elettra Damaggio [0:28:28]
Links Mentioned in Today’s Episode:
Mike Miller is the Director of Product Management at AWS, and he joins us today to tell us about the inspirational AI-powered products and services that are making waves at Amazon, particularly those with generative prompt engineering capabilities. We discuss how Mike and his team choose which products to bring to market, the ins and outs of PartyRock, including the challenges of developing it, AWS’s strategy for generative AI, and how the company aims to serve everyone, even those with very little technical knowledge. Mike also explains how customers are using his products and what he’s learned from their behaviors, and we discuss what may lie ahead in the future of generative prompt engineering.
Key Points From This Episode:
Quotes:
“We were working on AI and ML [at Amazon] and discovered that developers learned best when they found relevant, interesting, [and] hands-on projects that they could work on. So, we built DeepLens as a way to provide a fun opportunity to get hands-on with some of these new technologies.” — Mike Miller [0:02:20]
“When we look at [AI/ML] and generative AI, these things are transformative technologies that really require almost a new set of intuition for developers who want to build on these things.” — Mike Miller [0:05:19]
“In the long run, innovations are going to come from everywhere; from all walks of life, from all skill levels, [and] from different backgrounds. The more of those people that we can provide the tools and the intuition and the power to create innovations, the better off we all are.” — Mike Miller [0:13:58]
“Given a paintbrush and a blank canvas, most people don’t wind up with The Sistine Chapel. [But] I think it’s important to give people an idea of what is possible.” — Mike Miller [0:25:34]
Links Mentioned in Today’s Episode:
Key Points From This Episode:
Quotes:
“In many ways, Carrier is going to be a necessary condition in order for AI to exist.” — Seth Walker [0:04:08]
“What’s hard about generating value with AI is doing it in a way that is actually actionable toward a specific business problem.” — Seth Walker [0:09:49]
“One of the things that we’ve found through experimentation with generative AI models is that they’re very sensitive to your content. I mean, there’s a reason that prompt engineering has become such an important skill to have.” — Seth Walker [0:25:56]
Links Mentioned in Today’s Episode:
Philip recently had the opportunity to speak with 371 customers from 15 different countries to hear their thoughts, fears, and hopes for AI. Tuning in you’ll hear Philip share his biggest takeaways from these conversations, his opinion on the current state of AI, and his hopes and predictions for the future. Our conversation explores key topics, like government and company attitudes toward AI, why adversarial datasets will need to be audited, and much more. To hear the full scope of our conversation with Philip – and to find out how 2024 resembles 1997 – be sure to tune in today!
Key Points From This Episode:
Quotes:
“What's been so incredible to me is how forward-thinking – a lot of governments are on this topic [of AI] and their understanding of – the need to be able to make sure that both their citizens as well as their businesses make the best use of artificial intelligence.” — Philip Moyer [0:02:52]
“Nobody's ahead and nobody's behind. Every single company that I'm speaking to has about one to five use cases live. And they have hundreds that are on the docket.” — Philip Moyer [0:15:36]
“All of us are facing the exact same challenges right now of doing [generative AI] at scale.” — Philip Moyer [0:17:03]
“You should just make an assumption that you're going to be somewhere on the order of about 10 to 15% more productive with AI.” — Philip Moyer [0:25:22]
“[With AI] I get excited around proficiency and job satisfaction because I really do think – we have an opportunity to make work fun again.” — Philip Moyer [0:27:10]
Links Mentioned in Today’s Episode:
Joelle further discusses the relationship between her work, AI, and the end users of her products as well as her summation of information modalities, world models versus word models, and the role of responsibility in the current high-stakes of technology development.
Key Points From This Episode:
Quotes:
“Perhaps, the most important thing in research is asking the right question.” — @jpineau1 [0:05:10]
“My role isn't to set the problems for [the research team], it's to set the conditions for them to be successful.” — @jpineau1 [0:07:29]
“If we're going to push for state-of-the-art on the scientific and engineering aspects, we must push for state-of-the-art in terms of social responsibility.” — @jpineau1 [0:20:26]
Links Mentioned in Today’s Episode:
Key Points From This Episode:
Quotes:
“Amii is all about capacity building, so we’re not a traditional agent in that sense. We are trying to educate and inform industry on how to do this work, with Amii at first, but then without Amii at the end.” — Mara Cairo [0:06:20]
“We need to ask the right questions. That’s one of the first things we need to do, is to explore where the problems are.” — Mara Cairo [0:07:46]
“We certainly are comfortable turning certain business problems away if we don’t feel it’s an ethical match or if we truly feel it isn’t a problem that will benefit much from machine learning.” — Mara Cairo [0:11:52]
Links Mentioned in Today’s Episode:
Jerome discusses Meta's Segment Anything Model, Ego-Exo4D, the nature of self-supervised learning, and what it would mean to have a non-language-based approach to machine teaching.
For more, including quotes from Meta Researchers, check out the Sama Blog
Bryan discusses what constitutes industrial AI, its applications, and how it differs from standard AI processes. We explore the innovative process of deep reinforcement learning (DRL), replicating human expertise with machines, and the types of AI approaches available. Gain insights into the current trends and the future of generative AI, the existing gaps and opportunities, why DRL is a game-changer and much more! Join us as we unpack the nuances of industrial AI, its vast potential, and how it is shaping the industries of tomorrow. Tune in now!
Key Points From This Episode:
Quotes:
“We typically look at industrial [AI] as you are either making something or you are moving something.” — Bryan DeBois [0:04:36]
“One of the key distinctions with deep reinforcement learning is that it learns by doing and not by data.” — Bryan DeBois [0:10:22]
“Autonomous AI is more of a technique than a technology.” — Bryan DeBois [0:16:00]
“We have to have [AI] systems that we can count on, that work within constraints, and give right answers every time.” — Bryan DeBois [0:29:04]
Links Mentioned in Today’s Episode:
Joining us today are our panelists, Duncan Curtis, SVP of AI products and technology at Sama, and Jason Corso, a professor of robotics, electrical engineering, and computer science at the University of Michigan. Jason is also the chief science officer at Voxel51, an AI software company specializing in developer tools for machine learning. We use today’s conversation to discuss the findings of the latest Machine Learning (ML) Pulse report, published each year by our friends at Sama. This year’s report focused on the role of generative AI by surveying thousands of practitioners in this space. Its findings include feedback on how respondents are measuring their model’s effectiveness, how confident they feel that their models will survive production, and whether they believe generative AI is worth the hype. Tuning in you’ll hear our panelists’ thoughts on key questions in the report and its findings, along with their suggested solutions for some of the biggest challenges faced by professionals in the AI space today. We also get into a bunch of fascinating topics like the opportunities presented by synthetic data, the latent space in language processing approaches, the iterative nature of model development, and much more. Be sure to tune in for all the latest insights on the ML Pulse Report!
Key Points From This Episode:
Quotes:
“It's really hard to know how well your model is going to do.” — Jason Corso [0:27:10]
“With debugging and detecting errors in your data, I would definitely say look at some of the tooling that can enable you to move more quickly and understand your data better.” — Duncan Curtis [0:33:55]
“Work with experts – there's no replacement for good experience when it comes to actually boxing in a problem, especially in AI.” — Jason Corso [0:35:37]
“It's not just about how your model performs. It's how your model performs when it's interacting with the end user.” — Duncan Curtis [0:41:11]
“Remember, what we do in this field, and in all fields really, is by humans, for humans, and with humans. And I think if you miss that idea [then] you will not achieve – either your own potential, the group you're working with, or the tool.” — Jason Corso [0:48:20]
Links Mentioned in Today’s Episode:
ML Pulse Report: How AI Happens Live Webinar
Our guest today is Ian Ferreira, who served as Chief Product Officer for Artificial Intelligence at Core Scientific until the company was purchased by his current employer, Advanced Micro Devices (AMD), where he is now Senior Director of AI Software. In our conversation, we talk about when in his career he shifted his focus to AI, his thoughts on ChatGPT and the more noble applications of AI beyond advertising, and the scarier aspects of Large Language Models (LLMs). We explore the possibility of replacing our standard conceptions of search, how he conceptualizes his role at AMD, and his insights on the “Arms Race for GPUs”. Be sure not to miss out on this episode as Ian shares valuable insights from his perspective at AMD.
Key Points From This Episode:
Quotes:
“It’s just remarkable, the potential of AI – and now I’m fully in it and I think it’s a game-changer.” — @Ianfe [0:03:41]
“There are significantly more noble applications than advertising for AI and ChatGPT was great in that it put a face on AI for a lot of people who couldn’t really get their heads wrapped around [AI].” — @Ianfe [0:04:25]
“An LLM allows you to have a natural conversation with the search agent, so to speak.” — @Ianfe [0:09:21]
“All our stuff is open-sourced. AMD has a strong ethos, both in open-source and in partnerships. We don’t compete with our customers, and so being open allows you to go and look at all our code and make sure that whatever you are going to deploy is something you’ve looked at.” — @Ianfe [0:12:15]
Links Mentioned in Today’s Episode:
Generative AI is becoming more common in our lives as the technology grows and evolves. There are now AI coding companions that help developers execute their tasks more efficiently, and Amazon CodeWhisperer (ACW) is among the best in the game. We are joined today by the General Manager of Amazon CodeWhisperer and Director of Software Development at Amazon Web Services (AWS), Doug Seven. We discuss how Doug and his team are able to remain agile in a huge organization like Amazon before getting a crash course on the two-pizza-team philosophy and everything you need to know about ACW and how it works. Then, we dive into the characteristics that make up a generative AI model, why Amazon felt it necessary to create its own AI companion, why AI is not here to take our jobs, how Doug and his team ensure that ACW is safe and responsible, and how generative AI will become common in most households much sooner than we may think.
Key Points From This Episode:
In today’s episode, we are joined by Dalia Shanshal, Senior Data Scientist at Bell, Canada's largest communications company that offers advanced broadband wireless, Internet, TV, media, and business communications services. With over five years of experience working on hands-on projects, Dalia has a diverse background in data science and AI. We start our conversation by talking about the recent GeekFest Conference, what it is about, and key takeaways from the event. We then delve into her professional career journey and how a fascinating article inspired her to become a data scientist. During our conversation, Dalia reflects on the evolving nature of data science, discussing the skills and qualities that are now more crucial than ever for excelling in the field. We also explore why creativity is essential for problem-solving, the value of starting simple, and how to stand out as a data scientist before she explains her unique root cause analysis framework.
Key Points From This Episode:
Tweetables:
“What I do is try [to] leverage AI and machine learning to speed up and fast-track investigative processes.” — Dalia Shanshal [0:06:52]
“Data scientists today are key in business decisions. We always need business decisions based on facts and data, so the ability to mine that data is super important.” — Dalia Shanshal [0:08:35]
“The most important skill set [of a data scientist] is to be able to [develop] creative approaches to problem-solving. That is why we are called scientists.” — Dalia Shanshal [0:11:24]
“I think it is very important for data scientists to keep up to date with the science. Whenever I am [faced] with a problem, I start by researching what is out there.” — Dalia Shanshal [0:22:18]
“One of the things that is really important to me is making sure that whatever [data scientists] are doing has an impact.” — Dalia Shanshal [0:33:50]
Links Mentioned in Today’s Episode:
Canadian Conference on Artificial Intelligence (CANAI)
‘Towards an Automated Framework of Root Cause Analysis in the Canadian Telecom Industry’
EXAMPLE: AgriSynth Synthetic Data – Weeds as Seen by AI
Data is the backbone of agricultural innovation when it comes to increasing yields, reducing pests, and improving overall efficiency, but generating high-quality real-world data is an expensive and time-consuming process. Today, we are joined by Colin Herbert, the CEO and Founder of AgriSynth, to find out how the advent of synthetic data will ultimately transform the industry for the better. AgriSynth is revolutionizing how AI can be trained for agricultural solutions using synthetic imagery. He also gives us an overview of his non-linear career journey (from engineering to medical school to agriculture, then through clinical trials and back to agriculture with a detour in Deep Learning), shares the fascinating origin story of AgriSynth, and more.
Key Points From This Episode:
Quotes:
“The complexity of biological images and agricultural images is way beyond driverless cars and most other applications [of AI].” — Colin Herbert [0:06:45]
“It’s parameter rich to represent the rules of growth of a plant.” — Colin Herbert [0:09:21]
“We know exactly where the edge cases are – we know the distribution of every parameter in that dataset, so we can design the dataset exactly how we want it and generate imagery accordingly. We could never collect such imagery in the real world.” — Colin Herbert [0:10:33]
“Ultimately, the way we look at an image is not the way AI looks at an image.” — Colin Herbert [0:21:11]
“It may not be a real-world image that we’re looking at, but it will be data from the real world. There is a crucial difference.” — Colin Herbert [0:32:01]
Links Mentioned in Today’s Episode:
Jennifer is the founder of Data Relish, a boutique consultancy firm dedicated to providing strategic guidance and executing data technology solutions that generate tangible business benefits for organizations of diverse scales across the globe. In our conversation, we unpack why a data platform is not the same as a database, working as a freelancer in the industry, common problems companies face, the cultural aspect of her work, and starting with the end in mind. We also delve into her approach to helping companies in crisis, why ‘small’ data is just as important as ‘big’ data, building companies for the future, the idea of a ‘data dictionary’, good and bad examples of data culture, and the importance of identifying an executive sponsor.
Key Points From This Episode:
Quotes:
“Something that is important in AI is having an executive sponsor, someone who can really unblock any obstacles for you.” — @jenstirrup [0:08:50]
“Probably the biggest [challenge companies face] is access to the right data and having a really good data platform.” — @jenstirrup [0:10:50]
“If the crisis is not being handled by an executive sponsor, then there is nothing I can do.” — @jenstirrup [0:20:55]
“I want people to understand the value that [data] can have because when your data is good it can change lives.” — @jenstirrup [0:32:50]
Links Mentioned in Today’s Episode:
Joining us today to provide insight on how to put together a credible AI solutions team is Mike Demissie, Managing Director of the AI Hub at BNY Mellon. We talk with Mike about what to consider when putting together and managing such a diverse team and how BNY Mellon is implementing powerful AI and ML capabilities to solve the problems that matter most to their clients and employees. To learn how BNY Mellon is continually innovating for the benefit of their customers and their employees, along with Mike’s thoughts on the future of generative AI, be sure to tune in!
Key Points From This Episode:
Quotes:
“Building AI solutions is very much a team sport. So you need experts across many disciplines.” —Mike Demissie [0:06:40]
“The engineers need to really find a way in terms of ‘okay, look, how are we going to stitch together the various applications to run it in the most optimal way?’” —Mike Demissie [0:09:23]
“It is not only opportunity identification, but also developing the solution and deploying it and making sure there's a sustainable model to take care of afterwards, after production — so you can go after the next new challenge.” —Mike Demissie [0:09:33]
“There's endless use of opportunities. And every time we deploy each of these solutions [it] actually sparks ideas and new opportunities in that line of business.” —Mike Demissie [0:11:58]
“Not only is it important to raise the level of awareness and education for everyone involved, but you can also tap into the domain expertise of folks, regardless of where they sit in the organization.” —Mike Demissie [0:15:36]
“Demystifying, and really just making this abstract capability real for people is an important part of the practice as well.” —Mike Demissie [0:16:10]
“Remember, [this] still is day one. As much as all the talk that is out there, we're still figuring out the best way to navigate and the best way to apply this capability. So continue to explore that, too.” —Mike Demissie [0:24:21]
Links Mentioned in Today’s Episode:
Mercedes-Benz is a juggernaut in the automobile industry and in recent times, it has been deliberate in advancing the use of AI throughout the organization. Today, we welcome to the show the Executive Manager for AI at Mercedes-Benz, Alex Dogariu. Alex explains his role at the company, he tells us how realistic chatbots need to be, how he and his team measure the accuracy of their AI programs, and why people should be given more access to AI and time to play around with it. Tune in for a breakdown of Alex's principles for the responsible use of AI.
Key Points From This Episode:
Tweetables:
“[Chatbots] are useful helpers, they’re not replacing humans.” — Alex Dogariu [09:38]
“This [AI] technology is so new that we really just have to give people access to it and let them play with it.” — Alex Dogariu [15:50]
“I want to make people aware that AI has not only benefits but also downsides, and we should account for those. And also, that we use AI in a responsible way and manner.” — Alex Dogariu [25:12]
“It’s always a balancing act. It’s the same with certification of AI models — you don’t want to stifle innovation with legislation and laws and compliance rules but, to a certain extent, it’s necessary, it makes sense.” — Alex Dogariu [26:14]
“To all the AI enthusiasts out there, keep going, and let’s make it a better world with this new technology.” — Alex Dogariu [27:00]
Links Mentioned in Today’s Episode:
‘Principles for responsible use of AI | Alex Dogariu | TEDxWHU’
Tarun dives into the game-changing components of Watsonx, before delivering some noteworthy advice for those who are eager to forge a career in AI and machine learning.
Key Points From This Episode:
Tweetables:
“One of the first things I tell clients is, ‘If you don’t know what problems we are solving, then we’re on the wrong path.’” — @tc20640n [05:14]
“A lot of our customers have adopted AI — but if the workflow is, let’s say 10 steps, they have applied AI to only one or two steps. They don’t get to realize the full value of that innovation.” — @tc20640n [05:24]
“Every client that I talk to, they’re all looking to build their own unique story; their own unique point of view with their own unique data and their own unique customer pain points. So, I look at Watsonx as a vehicle to help customers build their own unique AI story.” — @tc20640n [14:16]
“The most important thing you need is curiosity. [And] be strong-hearted, because this [industry] is not for the weak-hearted.” — @tc20640n [27:41]
Links Mentioned in Today’s Episode:
Creating AI workflows can be a challenging process. And while purchasing these types of technologies may be straightforward, implementing them across multiple teams is often anything but. That’s where a company like Veritone can offer unparalleled support. With over 400 AI engines on their platform, they’ve created a unique operating system that helps companies orchestrate AI workflows with ease and efficacy. Chris discusses the differences between legacy and generative AI, how LLMs have transformed chatbots, and what you can do to identify potential AI use cases within an organization. AI innovations are taking place at a remarkable pace and companies are feeling the pressure to innovate or be left behind, so tune in to learn more about AI applications in business and how you can revolutionize your workflow!
Key Points From This Episode:
Quotes:
“Anybody who's writing text can leverage generative AI models to make their output better.” — @chris_doe [0:05:32]
“With large language models, they've basically given these chatbots a whole new life.” — @chris_doe [0:12:38]
“I can foresee a scenario where most enterprise applications will have an LLM-powered chatbot in their UI.” — @chris_doe [0:13:31]
“It's easy to buy technology, it's hard to get it adopted across multiple teams that are all moving in different directions and speeds.” — @chris_doe [0:21:16]
“People can start new companies and innovate very quickly these days. And the same has to be true for large companies. They can't just sit on their existing product set. They always have to be innovating.” — @chris_doe [0:23:05]
“We just have to identify the most problematic part of that workflow and then solve it.” — @chris_doe [0:26:20]
Links Mentioned in Today’s Episode:
AI is an incredible tool that has allowed us to evolve into more efficient human beings, but the lack of ethical and responsible design in AI can lead to a level of detachment from real people and authenticity. A wonderful technology strategist at Microsoft, Valeria Sadovykh, joins us today on How AI Happens. Valeria discusses why she is concerned about AI tools that assist users in decision-making, the responsibility she feels these companies hold, and the importance of innovation. We delve into common challenges these companies face in people, processes, and technology before exploring the effects of the democratization of AI. Finally, our guest shares her passion for emotional AI and tells us why that keeps her in the space. To hear it all, tune in now!
Key Points From This Episode:
Tweetables:
“We have no opportunity to learn something new outside of our predetermined environment.” — @ValeriaSadovykh [0:07:07]
“[Ethics] as a concept is very difficult to understand because what is ethical for me might not necessarily be ethical for you and vice versa.” — @ValeriaSadovykh [0:11:38]
“Ethics – should not come – [in] place of innovation.” — @ValeriaSadovykh [0:20:13]
“Not following up, not investing, not trying, [and] not failing is also preventing you from success.” — @ValeriaSadovykh [0:29:52]
Links Mentioned in Today’s Episode:
Key Points From This Episode:
Tweetables:
“When that hype cycle happens, where it is overhyped and falls out of favor, then generally that is – what is called a winter.” — @AnnapPatterson [0:03:28]
“No matter how hyped you think AI is now, I think we are underestimating its change.” — @AnnapPatterson [0:04:06]
“When there is a lot of hype and then not as many breakthroughs or not as many applications that people think are transformational, then it starts to go through a winter.” — @AnnapPatterson [0:04:47]
Links Mentioned in Today’s Episode:
‘Eight critical approaches to LLMs’
Wayfair uses AI and machine learning (ML) technology to interpret what its customers want, connect them with products nearby, and ensure that the products they see online look and feel the same as the ones that ultimately arrive in their homes. With a background in engineering and a passion for all things STEM, Wayfair’s Director of Machine Learning, Tulia Plumettaz, is an innate problem-solver. In this episode, she offers some insight into Wayfair’s ML-driven decision-making processes, how they implement AI and ML for preventative problem-solving and predictive maintenance, and how they use data enrichment and customization to help customers navigate the inspirational (and sometimes overwhelming) world of home decor. We also discuss the culture of experimentation at Wayfair and Tulia’s advice for those looking to build a career in machine learning.
Key Points From This Episode:
Tweetables:
“[Operations research is] a very broad field at the intersection between mathematics, computer science, and economics that [applies these toolkits] to solve real-life applications.” — Tulia Plumettaz [0:03:42]
“All the decision making, from which channel should I bring you in [with] to how do I bring you back if you’re taking your sweet time to make a decision to what we show you when you [visit our site], it’s all [machine learning]-driven.” — Tulia Plumettaz [0:09:58]
“We want to be in a place [where], as early as possible, before problems are even exposed to our customers, we’re able to detect them.” — Tulia Plumettaz [0:18:26]
“We have the challenge of making you buy something that you would traditionally feel, sit [on], and touch virtually, from the comfort of your sofa. How do we do that? [Through the] enrichment of information.” — Tulia Plumettaz [0:29:05]
“We knew that making it easier to navigate this very inspirational space was going to require customization.” — Tulia Plumettaz [0:29:39]
“At its core, it’s an exploit-and-explore process with a lot of hypothesis testing. Testing is at the core of [Wayfair] being able to say: this new version is better than [the previous] version.” — Tulia Plumettaz [0:31:53]
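For readers who want to see the explore/exploit idea in miniature, here is a hedged sketch; it is not Wayfair's system, and the variant names, click rates, and thresholds are invented for illustration:

```python
import random
from math import sqrt

# Hypothetical click-through rates for two page variants (unknown to the system).
TRUE_RATES = {"current": 0.10, "candidate": 0.12}

counts = {v: 0 for v in TRUE_RATES}  # times each variant was shown
clicks = {v: 0 for v in TRUE_RATES}  # clicks each variant received

def choose_variant(epsilon=0.1):
    """Explore with probability epsilon; otherwise exploit the best observed rate."""
    if random.random() < epsilon or not all(counts.values()):
        return random.choice(list(TRUE_RATES))
    return max(counts, key=lambda v: clicks[v] / counts[v])

for _ in range(20_000):
    v = choose_variant()
    counts[v] += 1
    clicks[v] += random.random() < TRUE_RATES[v]  # simulated user click

# Two-proportion z-test: is the candidate version really better than the current one?
p1 = clicks["current"] / counts["current"]
p2 = clicks["candidate"] / counts["candidate"]
pooled = (clicks["current"] + clicks["candidate"]) / (counts["current"] + counts["candidate"])
se = sqrt(pooled * (1 - pooled) * (1 / counts["current"] + 1 / counts["candidate"]))
print(f"current={p1:.3f} candidate={p2:.3f} z={(p2 - p1) / se:.2f}")  # |z| > 1.96 is significant at 5%
```

The exploit step keeps most traffic on the better-performing variant while the hypothesis test decides whether the observed difference is real rather than noise.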
Links Mentioned in Today’s Episode:
Bob highlights the importance of building interdepartmental relationships and growing a talented team of problem solvers, as well as the key role of continuous education. He also offers some insight into the technical and not-so-technical skills of a “data science champion,” tips for building adaptable data infrastructures, and the best career advice he has ever received, plus so much more. For an insider’s look at the data science operation at FreeWheel and valuable advice from an analytics leader with more than two decades of experience, be sure to tune in today!
Key Points From This Episode:
Tweetables:
“As a data science team, it’s not enough to be able to solve quantitative problems. You have to establish connections to the company in a way that uncovers those problems to begin with.” — @Bob_Bress [0:06:42]
“The more we can do to educate folks – on the type of work that the [data science] team does, the better the position we are in to tackle more interesting problems and innovate around new ideas and concepts.” — @Bob_Bress [0:09:49]
“There are so many interactions and dependencies across any project of sufficient complexity that it’s only through [collaboration] across teams that you’re going to be able to hone in on the right answer.” — @Bob_Bress [0:17:34]
“There is always more you can do to enhance the work you’re doing, other questions you can ask, other ways you can go beyond just checking a box.” — @Bob_Bress [0:23:31]
Links Mentioned in Today’s Episode:
Low-code platforms provide a powerful and efficient way to develop applications and drive digital transformation, and they are becoming popular tools for organizations. In today’s episode, we are joined by Piero Molino, CEO and Co-Founder of Predibase, a company revolutionizing the field of machine learning by pioneering a low-code declarative approach. Predibase empowers engineers and data scientists to construct, enhance, and deploy cutting-edge models, from linear regressions to large language models, with just a few lines of code. Piero is intrigued by the convergence of diverse cultural interests and is fascinated by the intricate ties between knowledge, language, and learning. He seeks unconventional solutions to problems and embraces a multidisciplinary approach that lets him acquire new and varied knowledge while gaining fresh experiences. In our conversation, we talk about his professional journey, how he developed Ludwig, and how Ludwig eventually grew into Predibase.
Key Points From This Episode:
Tweetables:
“One thing that I am proud of is the fact that the architecture is very extensible and really easy to plug and play new data types or new models.” — @w4nderlus7 [0:14:02]
“We are doing a bunch of things at Predibase that build on top of Ludwig and make it available and easy to use for organizations in the cloud.” — @w4nderlus7 [0:19:23]
“I believe that in the teams that actually put machine learning into production, there should be a combination of different skill sets.” — @w4nderlus7 [0:23:04]
“What made it possible for me to do the things that I have done is constant curiosity.” — @w4nderlus7 [0:26:06]
Links Mentioned in Today’s Episode:
dRisk uses a unique approach to increasing AV safety: collecting real-life scenarios and data from accidents, insurance reports, and more to train autonomous vehicles on extreme edge cases. With their advanced simulation tool, they can accurately recreate and test these scenarios, allowing AV developers to improve the performance and safety of their vehicles. Join us as Chess and Rav delve into the exciting world of AVs and the challenges they face in creating safer and more efficient transportation systems.
Key Points From This Episode:
Tweetables:
“At the time, no autonomous vehicles could ever actually drive on the UK's roads. And that's where Chess and the team at dRisk have done such [a] great piece of work.” — Rav Babbra [0:07:25]
“If you've got an unprotected cross-traffic turn, that's where a lot of things traditionally go wrong with AVs.” — Chess Stetson [0:08:45]
“We can, in an automated way, map out metrics for what might or might not constitute a good test and cut out things that would be something like a hallucination.” — Chess Stetson [0:13:59]
“The thing that makes AI different than humans is that if you have a good driver's test for an AI, it's also a good training environment for an AI. That's different [from] humans because humans have common sense.” — Chess Stetson [0:15:10]
“If you can really rigorously test [AI] on its ability to have common sense, you can also train it to have a certain amount of common sense.” — Chess Stetson [0:15:51]
“The difference between an AI and a human is that if you had a good test, it's equivalent to a good training environment.” — Chess Stetson [0:16:29]
“I personally think it's not unrealistic to imagine AV is getting so good that there's never a death on the road at all.” — Chess Stetson [0:18:50]
“One of the reasons that we're in the UK is precisely because the UK is going to have no tolerance for autonomous vehicle collisions.” — Chess Stetson [0:20:08]
“Now, there's never a cow in the highway here in the UK, but of course, things do fall off lorries. So if we can train against a cow sitting on the highway, then the next time a grand piano falls off the back of a truck, we've got some training data at least that helps it avoid that.” — Rav Babbra [0:35:12]
“If you target the worst case scenario, everything underneath, you've been able to capture and deal with.” — Rav Babbra [0:36:08]
Links Mentioned in Today’s Episode:
In this episode, we learn about the common challenges companies face when it comes to developing and deploying their AV and how Stantec uses military and aviation best practices to remove human error and ensure safety and reliability in AV operations. Corey explains the importance of collecting edge cases and shares his take on why the autonomous mobility industry is so meaningful.
Key Points From This Episode:
Tweetables:
“For me, [commercialization] is a safe and reliable service that actually can perform the job that it's supposed to.” — @coreyclothier [0:07:04]
“Most of the autonomous vehicles that I've been working with, even since the beginning, most of them are pretty safe.” — @coreyclothier [0:08:01]
“When you start to talk to people from around the world, they absolutely have different attitudes related to autonomy and robotics.” — @coreyclothier [0:09:20]
“What's exciting, though, about dRISK [is that] it gives us a quantifiable risk measure, something that we can look at as a baseline and then something we can see as we make improvements and do mitigation strategies.” — @coreyclothier [0:17:18]
“The common challenges really are being able to handle all the edge cases in the operating environment that they're going to deploy.” — @coreyclothier [0:20:41]
Links Mentioned in Today’s Episode:
Vishnu provides valuable advice for data scientists who want to help create high-quality data that can be used effectively to impact business outcomes. Tune in to gain insights from Vishnu's extensive experience in engineering leadership and data technologies.
Key Points From This Episode:
Tweetables:
“One of the things that we always care about [at Credit Karma] is making sure that when you are recommending any financial products in front of the users, we provide them with a sense of certainty.” — Vishnu Ram [0:05:59]
“One of the big things that we had to do, pretty much right off the bat, was make sure that our data scientists were able to get access to the data at scale — and be able to build the models in time so that the model maps to the future and performs well for the future.” — Vishnu Ram [0:08:00]
“Whenever we want to introduce new platforms or frameworks, both the teams that own that framework as well as the teams that are going to use that framework or platform would work together to build it up from scratch.” — Vishnu Ram [0:15:11]
“If your consumers have done their own research, it’s a no-brainer to start including them because they’re going to help you see around the corner and make sure you're making the right decisions at the right time.” — Vishnu Ram [0:16:43]
Links Mentioned in Today’s Episode:
TFX: A TensorFlow-Based Production-Scale Machine Learning Platform [19:15]
Algolia is an AI-powered search and discovery platform that helps businesses deliver fast, personalized search experiences. In our conversation, Sean shares what ignited his passion for AI and how Algolia is using AI to deliver lightning-fast custom search results to each user. He explains how Algolia's AI algorithms learn from user behavior and talks about the challenges and opportunities of implementing AI in search and discovery processes. We discuss improving the user experience through AI, why technologies like ChatGPT are disrupting the market, and how Algolia is providing innovative solutions. Learn about “hashing,” the difference between keyword and vector searches, the company’s approach to ranking, and much more.
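To make the keyword-versus-vector distinction concrete, here is a toy sketch; it is not Algolia's API, and the documents and three-dimensional "embeddings" are made up, standing in for what a real embedding model would produce:

```python
import math

docs = {
    "d1": "red leather sofa",
    "d2": "crimson couch with leather finish",
    "d3": "oak dining table",
}

# Keyword search: exact token overlap only, so "sofa" never matches "couch".
def keyword_rank(query):
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(docs[d].split())))

# Vector search: compare dense embeddings, so near-synonyms can still score high.
emb = {
    "d1": [0.9, 0.1, 0.0],
    "d2": [0.8, 0.2, 0.1],
    "d3": [0.0, 0.1, 0.9],
}
query_emb = [0.85, 0.15, 0.05]  # pretend embedding of "red sofa"

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

print(keyword_rank("red sofa"))  # d1 wins; d2's "crimson couch" is invisible to keywords
print(sorted(docs, key=lambda d: -cosine(emb[d], query_emb)))  # d1 and d2 both rank high
```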
Key Points From This Episode:
Tweetables:
“Well, the great thing is that every 10 years the entire technology industry changes, so there is never a shortage of new technology to learn and new things to build.” — Sean Mullaney [0:05:08]
“It is not just the way that you ask the search engine the question, it is also the way the search engine responds regarding search optimization.” — Sean Mullaney [0:08:04]
Links Mentioned in Today’s Episode:
Today’s guest is a Developer Advocate and Machine Learning Growth Engineer at Roboflow who has the pleasure of providing Roboflow users with all the information they need to use computer vision products optimally. In this episode, Piotr shares an overview of his educational and career trajectory to date; from starting out as a civil engineering graduate to founding an open source project that was way ahead of its time to breaking the million reader milestone on Medium. We also discuss Meta’s Segment Anything Model, the value of packaged models over non-packaged ones, and how computer vision models are becoming more accessible.
Key Points From This Episode:
Tweetables:
“Not only [do] I showcase [computer vision] models but I also show people how to use them to solve some frequent problems.” — Piotr Skalski [0:10:14]
“I am always a fan of models that are packaged.” — Piotr Skalski [0:15:58]
“We are drifting towards a direction where users of those models will not necessarily have to be very good at computer vision to use them and create complicated things.” — Piotr Skalski [0:32:15]
Links Mentioned in Today’s Episode:
In our conversation, we learn about her professional journey and how this led to her working at DataRobot, what she realized was missing from the DataRobot platform, and what she did to fill the gap. We discuss the importance of bias in AI models, approaches to mitigate models against bias, and why incorporating ethics into AI development is essential. We also delve into the different perspectives of ethical AI, the elements of trust, what ethical “guard rails” are, and the governance side of AI.
Key Points From This Episode:
Tweetables:
“When we talk about ‘guard rails’ sometimes you can think of the best practice type of ‘guard rails’ in data science but we should also expand it to the governance and ethics side of it.” — @HaniyehMah [0:11:03]
“Ethics should be included as part of [trust] to truly be able to think about trusting a system.” — @HaniyehMah [0:13:15]
“[I think of] ethics as a sub-category but in a broader term of trust within a system.” — @HaniyehMah [0:14:32]
“So depending on the [user] persona, we would need to think about what kind of [system] features we would have.” — @HaniyehMah [0:17:25]
Links Mentioned in Today’s Episode:
Haniyeh Mahmoudian on LinkedIn
Kristen is also the founder of Data Moves Me, a company that offers courses, live training, and career development. She hosts The Cool Data Projects Show, where she interviews AI, machine learning (ML), and deep learning (DL) experts about their projects.
Key Points From This Episode:
Tweetables:
“I’m finding people who are working on really cool things and focusing on the methodology and approach. I want to know: how did you collect your data? What algorithm are you using? What algorithms did you consider? What were the challenges that you faced?” — @DataMovesHer [0:05:55]
“A lot of times, it comes back to [the fact that] more data is always better!” — @DataMovesHer [0:15:40]
“I like [to do computer vision] projects that allow me to solve a problem that is actually going on in my life. When I do one, suddenly, it becomes a lot easier to see other ways that I can make other parts of my life easier.” — @DataMovesHer [0:18:59]
“The best thing you can do is to get involved in the community. It doesn’t matter whether that community is on Reddit, Slack, or LinkedIn.” — @DataMovesHer [0:23:32]
Links Mentioned in Today’s Episode:
In this episode, we learn the benefits of blue-collar AI education and the role of company culture in employee empowerment. Dr. Borne shares the history of data collection and analysis in astronomy and the evolution of cookies on the internet and explains the concept of Web3 and the future of data ownership. Dr. Borne is of the opinion that AI serves to amplify and assist people in their jobs rather than replace them and in our conversation, we discover how everyone can benefit if adequately informed.
Key Points From This Episode:
Tweetables:
“[AI] amplifies and assists you in your work. It helps automate certain aspects of your work but it’s not really taking your work away. It’s just making it more efficient, or more effective.” — @KirkDBorne [0:11:18]
“There’s a difference between efficiency and effectiveness … Efficiency is the speed at which you get something done and effective means the amount that you can get done.” — @KirkDBorne [0:11:29]
“There are different ways that automation and digital transformation are changing a lot of jobs. Not just the high-end professional jobs, so to speak, but the blue-collar gentlemen.” — @KirkDBorne [0:18:06]
“What we’re trying to achieve with this blue-collar AI is for people to feel confident with it and to see where it can bring benefits to their business.” — @KirkDBorne [0:24:08]
“I have yet to see an auto-complete come over your phone and take over the world.” — @KirkDBorne [0:26:56]
Links Mentioned in Today’s Episode:
Goodbye Passwords, Hello Biometrics with George Williams
Is it really safer to have a system know your biometrics rather than your password? If so, who do you trust with this data? George Williams, a Silicon Valley tech veteran who most recently served as Head of AI at SmileIdentity, is passionate about machine learning, mathematics, and data science. In this episode, George shares his opinions on the dawn of AI, how long he believes AI has been around, and references the ancient Greeks to show the relationship between the current fifth big wave of AI and the genesis of it all. Focusing on the work done by SmileIdentity, you will understand the growth of AI in Africa, what biometrics is and how it works, and the mathematical vulnerabilities in machine learning. Biometrics is substantially more complex than password authentication, and George explains why he believes this is the way of the future.
Key Points From This Episode:
Tweetables:
“Robotics and artificial intelligence are very much intertwined.” — @georgewilliams [0:02:14]
“In my daily routine, I leverage biometrics as much as possible and I prefer this over passwords when I can do so.” — @georgewilliams [0:08:13]
“All of your data is already out there in one form or another.” — @georgewilliams [0:10:38]
“We don’t all need to be software developers or ML engineers, but we all have to understand the technology that is powering [the world] and we have to ask the right questions.” — @georgewilliams [0:11:53]
“[Some of the biometric] technology is imperfect in ways that make me uncomfortable and this technology is being deployed at massive scale in parts of the world and that should be a concern for all of us.” — @georgewilliams [0:20:33]
“In machine learning, once you train a model and deploy it you are not done. That is the start of the life cycle of activity that you have to maintain and sustain in order to have really good AI biometrics.” — @georgewilliams [0:22:06]
Links Mentioned in Today’s Episode:
Our discussion today dives into the climate change-related applications of AI and machine learning, and how organizations are working towards mobilizing them to address the climate problem. Priya shares her thoughts on advanced technology and creating a dystopian version of humanity, what made her decide on her Ph.D. topic, and what she learned interviewing power grid experts around the world.
Key Points From This Episode:
Tweetables:
“When we are working on climate change related problems, even ones that are “technical problems” every problem is basically a socio-political technical problem, and really understanding that context when we move that forward can be really important.” — @priyald17 [0:10:02]
“Machine learning in power grids and really in a lot of other climate relevance sectors can contribute along several themes or in several ways.” — @priyald17 [0:12:18]
“What prompted us to found this organization, Climate Change AI, [is] to really help mobilize the AI machine learning community towards climate action by bringing them together with climate researchers, entrepreneurs, industry, policy, all of these players who are working to address the climate problems and sort of to do that together.” — @priyald17 [0:17:21]
“So the whole idea of Climate Change AI is rather than just focusing on what can we as individuals who are already in this area do to do research projects or deployment projects in this area, how can we sort of mobilize the broader talent pool and really help them to connect with entities that are really wanting to use their skills for climate action.” — @priyald17 [0:19:17]
Links Mentioned in Today’s Episode:
Putting the Smarts in the Smart Grid
Genetec has been a software provider for the physical security industry for over 25 years, earning its spot as the world’s number one software provider in video management. We are pleased to be joined today by Florian Matusek, Genetec’s Director of Video Analytics and the host of Video Analytics 101 on YouTube. Florian explains how his company is driving innovation in the market and what his specific role is before diving into the importance of maintaining both security and privacy, this new wave of special analytics, and why real-time improvements are more difficult than back-end adjustments. Our guest then lists all the exciting things he is witnessing in the world of video analytics and what he hopes to see in re-identification and gait analysis in the future. We discuss synthetic data and whether it will ever be commoditized and close with an exploration of the probable future of grocery stores without any employees.
Key Points From This Episode:
Tweetables:
“Nowadays, it's about automation. It's about operational efficiency. It's about integrating video and access control, and license plate recognition, IoT sensors, all into one platform, and providing the user a single pane of glass.” — Florian Matusek [0:05:11]
“We will always build products that benefit our users, which is the security operators, the ones purchasing it. But at the same time, we see it as our responsibility to also do everything possible to protect the privacy of the citizens that our customers are recording.” — Florian Matusek [0:09:03]
“What gets me excited are solutions that are really targeted for a specific purpose and made perfect for this purpose.” — Florian Matusek [0:11:24]
“You need both synthetic data and real data in order to make the real applications work really well.” — Florian Matusek [0:21:42]
“It's really funny how customers come up with creative ways to solve their specific problems.” — Florian Matusek [0:26:36]
Links Mentioned in Today’s Episode:
Navrina shares why trust and transparency are crucial in the AI space and why she believes having a Chief Ethics Officer should become an industry standard. Our conversation ends with a discussion about compliance and what AI tech organizations can do to ensure reliable, trustworthy, and transparent products. To get 30 minutes of uninterrupted knowledge from The National AI Advisory Committee member, Mozilla board of directors member, and World Economic Forum young global leader Navrina Singh, tune in now!
Key Points From This Episode:
Tweetables:
“I always saw technology as the tool that would help me change the world. Especially growing up in an environment where women don’t have the luxury that some other people have, you tend to lean on things that can make your ideas happen, and technology was that for me.” —@navrinasingh [0:01:17]
“As technologists, it’s our responsibility to make sure that the technologies we are putting out in the world that are becoming the fabric of our society, we take responsibility for it.” —@navrinasingh [0:04:04]
“By its very nature, trust is all about saying something and then consistently delivering on what you said. That’s how you build trust.” —@navrinasingh [0:08:58]
“I founded Credo AI for a reason, to bring more honest accountability in artificial intelligence.” —@navrinasingh [0:10:45]
“We are going to see more trust officers and trust functions emerge within organizations, but I am not really sure if a chief ethics officer is going to emerge as a core persona, at least not in the next two to three years. Is it needed? Absolutely, it’s needed.” —@navrinasingh [0:17:32]
Links Mentioned in Today’s Episode:
Arize and its founding engineer, Tsion Behailu, are leaders in the machine learning observability space. After spending a few years working as a computer scientist at Google, Tsion’s curiosity drew her to the startup world where, since the beginning of the pandemic, she has been building cutting-edge technology. Rather than doing it all manually (as many companies still do to this day), Arize AI technology helps machine learning teams detect issues, understand why they happen, and improve overall model performance. During this episode, Tsion explains why this method is so advantageous, what she loves about working in the machine learning field, the issue of bias in machine learning models (and what Arize AI is doing to help mitigate that), and more!
Key Points From This Episode:
Tweetables:
“We focus on machine learning observability. We're helping ML teams detect issues, troubleshoot why they happen, and just improve overall model performance.” — Tsion Behailu [0:06:26]
“Models can be biased, just because they're built on biased data. Even data scientists, ML engineers who build these models have no standardized ways to know if they're perpetuating bias. So more and more of our decisions get automated, and we let software make them. We really do allow software to perpetuate real world bias issues.” — Tsion Behailu [0:12:36]
“The bias tracing tool that we have is to help data scientists and machine learning teams just monitor and take action on model fairness metrics.” — Tsion Behailu [0:13:55]
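As a rough illustration of the kind of fairness metric such tooling monitors, here is a minimal demographic-parity check; it is not Arize's bias-tracing API, and the groups, decisions, and 0.8 rule-of-thumb threshold are hypothetical:

```python
from collections import defaultdict

# Hypothetical model decisions: (group, approved) pairs.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}

# Demographic parity ratio: lowest approval rate divided by the highest.
# A common rule of thumb flags ratios below 0.8 for human review.
parity = min(rates.values()) / max(rates.values())
print(rates, f"parity ratio = {parity:.2f}")
# {'group_a': 0.75, 'group_b': 0.25} parity ratio = 0.33  -> flagged
```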
Links Mentioned in Today’s Episode:
Arize AI
How to Know When It's Time to Leave your Big Tech SWE Job — Tsion Behailu
Ian discusses what unique problems aerial automated vehicles face, how segregations in the air affect flying, how the vehicles land, and how they know where to land. Animal Dynamics' goal is to phase humans out of their technology entirely, and Ian explains the human involvement in the process before telling us where he sees this technology fitting in with disaster response in the future.
Key Points From This Episode:
Tweetables:
“Drawing inspiration from the natural world to help address problems is very much the ethos of what Animal Dynamics is all about.” — Ian Foster [0:02:06]
“Data for autonomous aircraft is definitely a big challenge, as you might imagine.” — Ian Foster [0:16:17]
“We're not aiming to just jump straight to full autonomy from day one. We operate safely within a controlled environment. As we prove out more aspects of the system performance, we can grow that envelope and then prove out the next level.” — Ian Foster [0:19:01]
“Ultimately, the desire is that the systems basically look after themselves and that humans are only involved in telling the thing where to go, and then the rest is delivered autonomously.” — Ian Foster [0:23:45]
“The important thing for us is to get out there and start making a difference to people. So we need to find a pragmatic and safe way of doing that.” — Ian Foster [0:23:57]
Links Mentioned in Today’s Episode:
Curren is a curious, driven, and creative leader with vast experience in data science and AI. Her original background was in neuroscience and cognitive neuroscience, but she entered the industry when she realized how much she enjoyed programming, maths, and statistics. Additionally, her biology background gave her an advantage, making her a perfect fit for managing the neuroscience portfolio for Johnson & Johnson. In our conversation with Curren, we learn about her professional background, how her biology background is an advantage, and what she enjoys most about data science, as well as the important work she does at Johnson & Johnson. We then talk about AI in the pharmaceutical industry, how it is used, what it is used for, the benefits of AI both to the company and patients, and her approach to tackling data science problems. She also tells us what it was like moving into a leadership role and shares some advice for people wanting to take the plunge into leadership.
Key Points From This Episode:
Tweetables:
“Finding new ways to use data to drive diagnosis is a big focus for us.” — @CurrenKatz [0:11:56]
“In data science, it can be challenging to define success. But choosing the right problem to solve can make that a lot easier.” — @CurrenKatz [0:15:27]
“I want the best data scientists in the world and to have those people on my team or the best managers in the world. I just need to give them the space to be successful.” — @CurrenKatz [0:23:55]
Links Mentioned in Today’s Episode:
Dr. Kruft unpacks how she went from earning a Ph.D. focused on quantum chemistry, to working in AI and machine learning. She shares how she first discovered her love of data science, and how her Ph.D. equipped her with the skills she needed to transition into this new and exciting field. We also discuss the data science approach to problem-solving, deep learning emulators, and the impact that machine learning could have on the natural sciences.
Key Points From This Episode:
Tweetables:
“Although I wasn't really working on machine learning, or data science during my Ph.D., there's a lot of transferable skills that I picked up along the way while I was working on quantum chemistry.” — Bonnie Kruft [0:03:00]
“We believe that deep learning could have a really transformational impact on the natural sciences.” — Bonnie Kruft [0:13:02]
“The idea is that deep learning emulators will be used for the things that are going to make the most impact on the world. Solving healthcare challenges, combating disease, combating climate change, and sustainability. Things like that.” — Bonnie Kruft [0:21:29]
Links Mentioned in Today’s Episode:
In our conversation, we discuss Brandon's approach to problem-solving, the use of synthetic data, challenges facing the use of AI in drug development, why the diversity of both data and scientists is important, the three qualities required for innovation, and much more.
Key Points From This Episode:
Tweetables:
“Instead of improving the legacy, is there a way to really innovate and break things? And that’s the way we think about it here at Valo.” — @allg00d [0:08:46]
“Here at Valo, if data scientists have good ideas, we let them run with them, you know? We let them commission experiments. That’s not generally the way that a traditional organization would work.” — @allg00d [0:11:31]
“While you might be able to get synthetic data that represents the bulk, you are not going to get the resolution within those patients, within those subgroups, within the patient set.” — @allg00d [0:15:15]
“We suffer right now from a lack of diversity of data, but then, on the other side, we also suffer as a field from lack of diversity in our scientists.” — @allg00d [0:19:42]
Links Mentioned in Today’s Episode:
In this episode, Heather shares her background in both farming and commerce, and explains how her in-field experience and insights aid both her and the AI team in the development cycle. We learn about the advantages of drone-based precision spraying, the function of the herbicides that Precision AI’s drones spray onto crops, and the various challenges of creating AI models that can recognize plant variations.
Key Points From This Episode:
Tweetables:
“Up until now, everybody just went, ‘How do we get more efficient [with] fewer passes?’ But nobody questioned, ‘Are we doing the passes with the right equipment?’” — Heather Clair [0:07:07]
“[precision.ai is] moving from land-based high-clearance sprayers to drone-based precision spraying.” — Heather Clair [0:07:24]
“I never thought when I was a little farm kid that I would be playing with drones, but it is one of my favorite things to do.” — Heather Clair [0:07:45]
“Trying to create these AI models that can work on any stage of plant can be a challenge.” — Heather Clair [0:21:15]
“It's incredible how working with my AI team has opened up my eyes to being able to look at these plants from a very logical standpoint.” — Heather Clair [0:25:34]
Links Mentioned in Today’s Episode:
Xiaoyang Yang, Head of Data, AI, Security, and IT at Second Dinner Studios, explains how Second Dinner navigates the issue of excess data with intention, and reveals the metrics that go deeper than the surface to measure the quality of competition, balance, and fairness within gaming. Xiaoyang also describes the difference between AI and gaming AI and shows us how each can be used to enhance the other. Listen to today’s episode for a careful look at how AI can be used to improve player experience and how gaming can act as a testing ground to improve AI in everyday life.
Key Points From This Episode:
Tweetables:
“We try to really listen to what our players are saying. One way to do that is through data. We use data as a tool.” — Xiaoyang Yang [0:02:28]
“When you see the scale, you begin to really understand that different players have different desires. Sometimes, different players see the same feature or the same experience in a very different type of way.” — Xiaoyang Yang [0:04:46]
“We see a lot of opportunities to use technology, data, [and] AI to make MARVEL SNAP approachable to a wide audience of players and, hopefully, some players who have never tried the genre of collectible card games.” — Xiaoyang Yang [0:11:25]
“We want to make sure that there are different sets of cards you can use to have fun and still be competitive in the game. That's not an easy task.” — Xiaoyang Yang [0:19:25]
Links Mentioned in Today’s Episode:
Tune in to hear more about Becks’ role as a lead full stack AI engineer at Rogo, how they determine what should and should not be added into the product tier for deep learning, the types of questions you should be asking along the investigation-to-product roadmap for AI and machine learning products, and so much more!
Key Points From This Episode:
Tweetables:
“People think that [AI] can do more than what it can and it has only been the last few years where we realized that actually, there’s a lot of work to put it in production successfully, there’s a lot of catastrophic ways it can fail, there are a lot of considerations that need to be put in.” — Becks Simpson [0:11:39]
“Make sure that if you ever want to put any kind of machine learning or AI or something into a product, have people who can look at a road map for doing that and who can evaluate whether it even makes sense from an ROI business standpoint, and then work with the teams.” — Becks Simpson [0:12:55]
“I think for the people who are in academia, a lot of them are doing it to push the needle, and to push the state of the art, and to build things that we didn’t have before and to see if they can answer questions that we couldn’t answer before. Having said that, there’s not always a link back to a practical use case.” — Becks Simpson [0:20:25]
“Academia will always produce really interesting things and then it’s industry that will look at whether or not they can be used for practical problems.” — Becks Simpson [0:21:59]
Links Mentioned in Today’s Episode:
Dr. Seymour aims to take cutting-edge technology and apply it to the special effects industry, such as with the new AI platform, PLATO. He is also a lecturer at the University of Sydney and works as a consultant within the special effects industry. He is an internationally respected researcher and expert in Digital Humans and virtual production, and his experience in both visual effects and pure maths makes him perfect for AI-based visual effects. In our conversation, we find out more about Dr. Seymour’s professional career journey and what he enjoys the most about working as both a researcher and practitioner. We then get into all the details about AI in special effects as we learn about Digital Humans, the new PLATO platform, why AI dubbing is better, and the biggest challenges facing the application of AI in special effects.
Key Points From This Episode:
Tweetables:
“In the film, half the actors are the original actors [who] come back to just re-voice themselves, half aren’t. In the film hopefully, when you watch it, it’s indistinguishable that it wasn’t actually filmed in English.” — @mikeseymour [0:10:15]
“In our process, it doesn’t apply because if you were saying in four words what I’d said in three, it would just match. We don’t have to match the timing, we don’t have to match the lip movement or jaw movement, it all gets fixed.” — @mikeseymour [0:15:15]
“My attitude is, it’s all very well for us to get this working in the lab, but it has to work in the real world.” — @mikeseymour [0:19:56]
Links Mentioned in Today’s Episode:
Ethics in AI is considered vital to the healthy development of all AI technologies, but this is easier said than done. In this episode of How AI Happens, we speak to Maria Luciana Axente to help us unpack this essential topic. Maria is a seasoned AI policy expert, public speaker, and executive, with a respected track record of working with companies whose foundation is in technology. She combines her love for technology with her passion for creating positive change to help companies build and deploy responsible AI. Maria works at PwC, where her work focuses on the operationalization of AI and data across the firm. She also plays a vital role in advising government, regulators, policymakers, civil society, and research institutions on ethically aligned AI public policy. In our conversation, we talk about the importance of building responsible and ethical AI while leveraging technology to build a better society. We learn why companies need to create a culture of ethics for building AI, what type of values encompass responsible technology, the role of diversity and inclusion, the challenges that companies face, and whose responsibility it is. We also learn about some basic steps your organization can take and hear about helpful resources available to guide companies and developers through the process.
Key Points From This Episode:
Tweetables:
“How we have proceeded so far, via Silicon Valley, 'move fast and break things.' It has to stop because we are in a time when if we continue in the same way, we're going to generate more negative impacts than positive impacts.” — @maria_axente [0:10:19]
“You need to build a culture that goes above and beyond technology itself.” — @maria_axente [0:12:05]
“Values are contextual driven. So, each organization will have their own set of values. When I say organization, I mean both those who build AI and those who use AI.” — @maria_axente [0:16:39]
“You have to be able to create a culture of a dialogue where every opinion is being listened to, and not just being listened to, but is being considered.” — @maria_axente [0:29:34]
“AI doesn't have a technical problem. AI has a human problem.” — @maria_axente [0:32:34]
Links Mentioned in Today’s Episode:
Maria Luciana Axente on LinkedIn
The gap between those creating AI systems and those using the systems is growing. After 27 years on the other side of technology, Mieke decided that it was time to do something about the issues that she was seeing in the AI space. Today she is an Adjunct Professor for Sustainable Ethical and Trustworthy AI at Vlerick Business School, and during this episode, Mieke shares her thoughts on how we can go about building responsible AI systems so that the world can experience the full range of benefits of AI.
Key Points From This Episode:
Tweetables:
“The compute power had changed, and the volumes of data had changed, but the [AI] principles hadn't changed that much. Only some really important points never made the translation.” — @miekedk [0:02:03]
“[AI systems] don't automatically adapt themselves. You need to have your processes in place in order to make sure that the systems adapt to the changing context.” — @miekedk [0:04:06]
“AI systems are starting to be included into operational processes in companies, but only from the profit side, not understanding that they might have a negative impact on people especially when they start to make automated decisions.” — @miekedk [0:04:52]
“Let's move out of our silos and sit together in a multidisciplinary debate to discuss the systems we're going to create.” — @miekedk [0:07:52]
Links Mentioned in Today’s Episode:
Today, on How AI Happens, we are joined by the Chief Digital Officer at Allied Digital, Utpal Chakraborty, to talk all things AI at Allied Digital. You’ll hear about Utpal’s AI background, how he defines Allied Digital’s mission, and what Smart Cities are and how the company captures data to achieve them, as well as why AI learning is the right approach for Smart Cities. We also discuss what success looks like to Utpal and the importance of designing something seamless for the end-user. To find out why customer success is Allied Digital’s success, tune in today!
Key Points From This Episode:
Tweetables:
“I looked at how we can move this [Smart City] tool ahead and that’s where the AI machine learning came into the picture.” — @utpal_bob [0:11:11]
“[Allied Digital] wants to bring that wow factor into each and every service product and solution that we provide to our customers and, in turn, that they provide to the industry.” — @utpal_bob [0:16:27]
Links Mentioned in Today’s Episode:
In today’s conversation, we learn about Jason and Kevin’s career backgrounds, the potential that the deep technology sector has, what ideas excite them the most, the challenges when investing in AI-based companies, what kind of technology is easily understood by the consumer, what makes a technological innovation successful, and much more.
Key Points From This Episode:
Tweetables:
“I think for me personally, the cycle-time was very long. You work on projects for a very long time. As an investor, I get to see new ideas and new concepts every day. From an intellectual curiosity standpoint, there couldn’t be a better job.” — Kevin Dunlap [0:05:17]
“So that lights me up. When I hear somebody talk about a problem that they are looking to solve and how their technology can do it uniquely with some type of competitive or differentiated advantage we think is sustainable.” — Jason Schoettler [0:08:14]
“The things that really excite us are not where we can do better than humans but, first, where are there not humans work[ing] right now where we need humans doing work.” — Jason Schoettler [0:20:44]
“Anytime that someone is doing a job that is dangerous, that is able to be solved with technology, I think we owe it to ourselves to do that.” — Kevin Dunlap [0:22:39]
Links Mentioned in Today’s Episode:
Whether you’re building AI for self-driving cars or for scheduling meetings, it’s all about prediction! In this episode, we’re going to explore the complexity of teaching the human power of prediction to machines.
Key Points From This Episode:
Tweetables:
“The whole umbrella of AI is really just one big prediction engine.” — @DennisMortensen [0:03:38]
“Language is not a solved science.” — @DennisMortensen [0:06:32]
“The expectation of a machine response is different to that of a human response to the same question.” — @DennisMortensen [0:11:36]
Links Mentioned in Today’s Episode:
Leading AI companies are adopting simulation, synthetic data and other aspects of the metaverse at an incredibly fast rate, and the opportunities for AI/machine learning practitioners are endless. Tune in today for a fascinating conversation about how the real world and the virtual world can be blended in what Danny refers to as “the real metaverse.”
Key Points From This Episode:
Tweetables:
“When you play a game, I don’t need to know your name, your age. I don’t need to know where you live, or how much you earn. All that really matters is that my system needs to learn the way you play and what you are interested in in your gameplay, to make excellent recommendations for other games. That’s what drives the gaming ecosystem.” — @danny_lange [0:03:16]
“Deep learning embedding is something that is really driving a lot of progress right now in the machine learning AI space.” — @danny_lange [0:06:04]
“The world is built on uncertainty and we are looking at simulation in an uncertain world, rather than in a Newtonian, deterministic world.” — @danny_lange [0:23:23]
Links Mentioned in Today’s Episode:
In today’s episode, Archy De Berker, Head of Data and Machine Learning at CarbonChain, explains how he and his team calculate carbon footprints, some of the challenges that they face in this line of work, the most valuable use of machine learning in their business (and for climate change solutions as a whole), and some important lessons that he has learned throughout his diverse career so far!
Key Points From This Episode:
Tweetables:
“We build automated carbon footprinting for the world’s most polluting industries. We’re really trying to help people who are buying things from carbon-intense industries figure out where they can get lower carbon versions of the same kind of products.” — @ArchydeB [0:02:14]
“A key challenge for carbon footprinting is that you need to be able to understand somebody’s business in order to tell them what the carbon footprint of their activities is.” — @ArchydeB [0:13:01]
“Probably the most valuable place for machine learning in our business is taking all this heterogeneous customer data from all these different systems and being able to map it onto a very rigid format that we can then retrieve information from our databases for.” — @ArchydeB [0:13:24]
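A hedged sketch of the "heterogeneous data onto a rigid format" step Archy describes, with an invented canonical schema and alias tables; a real system would typically learn these mappings with ML rather than hand-written lookups:

```python
# Hypothetical canonical schema: every record must end up with these fields.
CANONICAL_FIELDS = ("commodity", "quantity_tonnes", "origin")

# Alias tables mapping each customer's free-form column names and units
# onto the rigid internal format (illustrative values only).
FIELD_ALIASES = {
    "material": "commodity", "product": "commodity",
    "qty": "quantity_tonnes", "weight_kg": "quantity_tonnes",
    "source_country": "origin", "from": "origin",
}
UNIT_SCALE = {"weight_kg": 0.001}  # kilograms -> tonnes

def normalize(record):
    """Map one customer record onto the canonical schema; report missing fields."""
    out = {}
    for key, value in record.items():
        field = FIELD_ALIASES.get(key.lower())
        if field is None:
            continue  # unknown column: a real system would route this to review
        if field == "quantity_tonnes":
            value = float(value) * UNIT_SCALE.get(key.lower(), 1.0)
        out[field] = value
    missing = [f for f in CANONICAL_FIELDS if f not in out]
    return out, missing

print(normalize({"Product": "aluminium", "weight_kg": "2500", "FROM": "AU"}))
# ({'commodity': 'aluminium', 'quantity_tonnes': 2.5, 'origin': 'AU'}, [])
```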
Links Mentioned in Today’s Episode:
MATR Ventures Partner, Hessie Jones, is dedicated to solving issues around AI ethics as well as diversity & representation in the space. In our conversation with her, she breaks down how she came to believe something was wrong with the way companies harvest & use data, and the steps she has taken towards solving the privacy problem. We discuss the danger of intentionally convoluted terms and conditions and the problem with synthetic data. Tune in to hear about the future of biometrics and data privacy and the emerging technologies using data to increase accountability.
Key Points From This Episode:
Tweetables:
“Venture capital is not immune to the diversity problems that we see today.” — Hessie Jones [0:05:04]
“We should separate who you are as an individual from who you are as a business customer.” — Hessie Jones [0:08:49]
“The problem I see with synthetic data is the rise of deep fakes.” — Hessie Jones [0:21:24]
“The future is really about data that’s not shared, or if it’s shared, it’s shared in a way that increases accountability.” — Hessie Jones [0:26:43]
Links Mentioned in Today’s Episode:
During Vinesh Sukumar’s colorful career, he has worked at NASA, Apple, Intel, and a variety of other companies, before finding his way to Qualcomm, where he is currently the Head of AI/ML Product Management. In today’s conversation, Vinesh shares his experience of developing the camera for the very first iPhone and one of the biggest lessons he learned from working with Steve Jobs. We then discuss what his current role entails and the biggest challenge that he has with it, Qualcomm’s approach to sustainability from a hardware, systems, and software standpoint, and his thoughts on why edge computing is so important.
Key Points From This Episode:
Tweetables:
“Camera became one of the most important features for a consumer to buy a phone. Then visual analytics, AI, deep learning, ML really started seeping into images, and then into videos, and now the most important consumer influencing factor to buy a phone is the camera.” — Vinesh Sukumar [0:07:01]
“Reaction time is much better when you have intelligence on the device, rather than giving it to the cloud to make the decision for you.” — Vinesh Sukumar [0:20:48]
Links Mentioned in Today’s Episode:
Joining us on this episode of How AI Happens is four-time author, entrepreneur, future tech strategist, and The Digital Speaker himself, Dr. Mark van Rijmenam. Mark explains the extraordinary opportunities and challenges facing business leaders, consumers, regulators, policymakers, and other metaverse stakeholders trying to navigate the future of the internet; the important role that AI will play in the metaverse; why he believes we need to enable what he calls ‘anonymous accountability’; and how you can actively participate in building ethical AI.
Key Points From This Episode:
Tweetables:
“The social and the material [systems are] very good but, for the organizations of tomorrow, we need to add a third actor, which is the artificial.” — @VanRijmenam [0:03:05]
“Once we reach AGI, that will be a fundamental shift because, once we have AGI—which is as intelligent as a human being, but at an exponential speed—everything will change.” — @VanRijmenam [0:08:34]
“How can we create a metaverse that doesn’t continue on the path of the internet of today? We have this blank canvas where we can construct this immersive internet in ways where we do own our data, [digital assets, identity, and reputation] using a self-sovereign approach.” — @VanRijmenam [0:15:09]
“Technology is neutral. My objective is to help people move to the positive side of technology.” — @VanRijmenam [0:29:24]
Links Mentioned in Today’s Episode:
Dr. Mark van Rijmenam on LinkedIn
Neil Sahota is an AI Advisor to the UN, co-founder of the UN’s AI for Good initiative, IBM Master Inventor, and author of Own the AI Revolution. In today’s episode, Neil shares some of the valuable lessons he learned during his first experience working in the AI world, which involved training the Watson computer system. We then dive into a number of different topics, ranging from Neil’s thoughts on synthetic data, to the language-learning capacity of AI versus a human child, to an overview of the AI for Good initiative and what Neil believes a “cyborg future” could entail!
Key Points From This Episode:
Tweetables:
“We, as human beings, have to make really rapid judgement calls, especially in sports, but there’s still thousands of data points in play and the best of us can only see seven to 12 in real time.” — @neil_sahota [0:01:21]
“Synthetic data can be a good bridge if we’re in a very closed ecosystem.” — @neil_sahota [0:11:47]
“For an AI system, if it gets exposed to about 100 billion words it becomes proficient and fluent in a language. If you think about a human child, it only needs about 30 billion words. So, it’s not the volume that matters, there’s certain words or phrases that trigger the cognitive learning for language. The problem is that we just don’t understand what that is.” — @neil_sahota [0:14:22]
“Things that are more hard science, or things that have the least amount of variability, are the best things for AI systems.” — @neil_sahota [0:16:26]
“Local problems have global solutions.” — @neil_sahota [0:20:06]
Links Mentioned in Today’s Episode:
Today’s guest has committed many years of his life to trying to understand Artificial Superintelligence and the security concerns associated with it. Dr. Roman Yampolskiy is a computer scientist (with a Ph.D. in behavioral biometrics), and an Associate Professor at the University of Louisville. He is also the author of the book Artificial Superintelligence: A Futuristic Approach. Today he joins us to discuss AI safety engineering. You’ll hear about some of the safety problems he has discovered in his 10 years of research, his thoughts on accountability and ownership when AI fails, and whether he believes it’s possible to enact any real safety measures in light of the decentralization and commoditization of processing power. You’ll discover some of the near-term risks of not prioritizing safety engineering in AI, how to make sure you’re developing it in a safe capacity, and what organizations are deploying it in a way that Dr. Yampolskiy believes to be above board.
Key Points From This Episode:
Tweetables:
“Long term, we want to make sure that we don’t create something which is more capable than us and completely out of control.” — @romanyam [0:04:27]
“This is the tradeoff we’re facing: Either [AI] is going to be very capable, independent, and creative, or we can control it.” — @romanyam [0:12:11]
“Maybe there are problems that we really need Superintelligence [to solve]. In that case, we have to give it more creative freedom but with that comes the danger of it making decisions that we will not like.” — @romanyam [0:12:31]
“The more capable the system is, the more it is deployed, the more damage it can cause.” — @romanyam [0:14:55]
“It seems like it’s the most important problem, it’s the meta-solution to all the other problems. If you can make friendly well-controlled superintelligence, everything else is trivial. It will solve it for you.” — @romanyam [0:15:26]
Links Mentioned in Today’s Episode:
Joining us today on How AI Happens is Sebastian Raschka, Lead AI Educator at GRID.ai and Assistant Professor of Statistics at the University of Wisconsin-Madison. Sebastian fills us in on the coursework he’s creating in his role at GRID.ai, and we find out what is driving the crossover of machine learning between academia and the private sector. We speculate on the pros and cons of the commodification of deep learning models and which machine learning framework is better: PyTorch or TensorFlow.
Key Points From This Episode:
Tweetables:
“In academia, the focus is more on understanding how deep learning works… On the other hand, in the industry, there are [many] use cases of machine learning.” — @rasbt [0:10:10]
“Often it is hard to formulate answers as a human to complex questions.” — @rasbt [0:12:53]
“In my experience, deep learning can be very powerful but you need a lot of data to make it work well.” — @rasbt [0:14:06]
“In [Machine Learning with PyTorch and Scikit-Learn], I tried to provide a resource that is a hybrid between more theoretical books and more applied books.” — @rasbt [0:23:21]
“Why I like PyTorch is that it gives me the readability [and] flexibility to customize things.” — @rasbt [0:25:55]
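As a small illustration of the readability Sebastian praises, here is a minimal PyTorch model; the layer sizes are arbitrary, and the point is only that subclassing nn.Module keeps every step explicit and easy to customize:

```python
import torch
from torch import nn

class TinyClassifier(nn.Module):
    """A small custom model: every layer and the forward pass are spelled out."""

    def __init__(self, in_features=4, hidden=16, classes=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, classes),
        )

    def forward(self, x):
        # Custom behavior drops in naturally here (logging, masking, new layers).
        return self.body(x)

model = TinyClassifier()
x = torch.randn(8, 4)   # batch of 8 examples, 4 features each
logits = model(x)
print(logits.shape)     # torch.Size([8, 3])
```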
Links Mentioned in Today’s Episode:
Machine Learning with PyTorch and Scikit-Learn
Antonio Grasso joins us to explain how he empowers some of the biggest companies in the world to use AI in a meaningful way and explains the two ways his company goes about this. You’ll hear about what Antonio believes is coming down the pipeline in terms of the Internet of Things, especially when it comes to edge computing, and why network traffic has become a huge concern. We discuss where edge computing begins and ends with regards to the difference between the device and its computational resources. In light of the fact that one can infer at the edge but not train at the edge, Antonio shares his views on why he disagrees that the ultimate goal should be to train at the edge. He also provides a helpful resource for AI practitioners to calculate an AI readiness index.
Key Points From This Episode:
Tweetables:
“‘Wow, this is really unbelievable! We can also create not [only] code software with direct explicit instruction, we can also [create] code software that learns from experience!’ That really [caught] me and I fell in love with this kind of technology.” — @antgrasso [0:03:13]
“I started on social media to share my knowledge, my experience, because I think you must share what you see because everyone can benefit [from] it too.” — @antgrasso [0:03:39]
“We need to shift to better understand what is the meaning of edge computing but we must divide the device itself from the computational resources that we put [there] to harness the power of computational power in proximity.” — @antgrasso [0:16:15]
“I can not imagine training at the edge. — We can do it, yes, but my question is why?” — @antgrasso [0:20:50]
Links Mentioned in Today’s Episode:
Digital Business Innovation Srl
Today's guest is Aleksandra Przegalinska, PhD, Vice-Rector at Kozminski University, research associate, and Polish futurist. From studying pure philosophy, Aleksandra moved into AI when she started researching natural language processing in the virtual space. We kickstart our discussion with her account of how she ended up where she is now, and how she transferred her skills from philosophy to AI. We hear how the concept of a second life was common in Asia centuries ago, why we are seeing a return to anonymization online, and why Aleksandra feels NLP should be called ‘natural language understanding’. We also discover what the real-world applications of NLP are, and why text processing is under-utilized. Moving onto more philosophical questions around AI and labor, Aleksandra explains how AI should be used to help people and why what is sometimes simple for a human can be immensely complex for AI. We wrap up with Aleksandra’s thoughts on transformers and why their applications are more important than their capabilities, as well as why she is so excited about the idea of xenobots.
Key Points From This Episode:
Tweetables:
“My major discovery [during my PhD] was that people are capable of building robust identities online and can live two lives. They can have their first life and then they can have their second life online, which can be very different from the one they pursue on-site, in the real world.” — @Przegaa [0:06:42]
“We can all observe that there is a great boom in NLP. I’m not even sure we should call it NLP anymore. Maybe NLP is an improper phrase. Maybe it’s NLU: natural language understanding.” — @Przegaa [0:14:51]
“Transformers seem to be a really big game-changer in the AI space.” — @Przegaa [0:16:40]
“I think that using text as a resource for data analytics for businesses in the future is something that we will see happen in the coming two or three years.” — @Przegaa [0:19:46]
“AI should not replace you, AI should help you at your work and make your work more effective but also more satisfying for you.” — @Przegaa [0:25:31]
Links Mentioned in Today’s Episode:
Aleksandra Przegalinska on LinkedIn
Aleksandra Przegalinska on Twitter
CyberLink's facial recognition technology routinely registers best-in-class accuracy. But how do developers deal with masks, glasses, headphones, or changes in faces over time? How can they prevent spoofing in order to protect identities? And where does computer vision & object detection stop and FRT truly begin? CyberLink Senior Vice President of Global Marketing and US General Manager Richard Carriere and Head of Sales Engineering Craig Campbell join us to discuss the endless use cases for facial recognition technology, how CyberLink is improving the tech's accuracy & security, and the ethical considerations of deploying FRT at scale.
CyberLink's Ultimate Guide to Facial Recognition
FaceMe Security SDK Demo
Get in touch with CyberLink: [email protected]
Oxbotica is a vehicle software company at the forefront of autonomous technology, and today we have a fascinating chat with Ben Upcroft, the Vice President of Technology. Ben explains Oxbotica's mission of enabling industries to make the most of autonomy, and how their technological progress affects real-world situations. We also get into some of the challenges that Oxbotica and the autonomy space in general are currently facing, before drilling down into the important concepts of user trust, future implementations, and creating an adaptable core functionality. The last part of today's episode is spent exploring the exciting possibilities of simulated environments for data collection, and the broadening of vehicle experience. Ben talks about the importance of seeking out edge cases to improve their data, and we get into how Oxbotica applies this data across locations.
Key Points From This Episode:
Tweetables:
“Oxbotica is about deploying and enabling industries to use and leverage autonomy for performance, for efficiency, and safety gains.” — @ben_upcroft
“The autonomy that we bring revolutionizes how we move around the globe, through logistics transport, on wheeled vehicles.” — @ben_upcroft
“The idea behind the system is that it is modular, enables a core functionality, and I am able to add little extras that customize for a particular domain.” — @ben_upcroft
Links Mentioned in Today’s Episode:
Key Points From This Episode:
Tweetables:
“Most of the successful model architectures are now open source. You can get them anywhere on the web easily, but the one thing that a company is guarding with its life is its data.” — Jerome Pasquero [0:05:36]
“If you consider that we now know that a model can be highly sensitive to the quality of the data that are used to train it, there is this natural shift to try to feed models with the best data possible and data quality becomes of paramount importance.” — Jerome Pasquero [0:05:47]
“The point of this whole system is that, once you have these three components in place, you can drive your filtering strategy.” — Jerome Pasquero [0:14:06]
“You can always get more data later. What you want to avoid is getting yourself into a situation where the data that you are annotating is useless.” — Jerome Pasquero [0:17:30]
“A model is like a living thing. You need to take care of it otherwise it is going to degrade, not because it’s degrading internally, but because the data that it is used to seeing has changed.” — Jerome Pasquero [0:25:49]
Links Mentioned in Today’s Episode:
Dr. Daimler is an authority on Artificial Intelligence with over 20 years of experience in the field as an entrepreneur, executive, investor, technologist, and policy advisor. He is also the founder of data integration firm Conexus, and we kick our conversation off with the work he is doing to integrate large heterogeneous data infrastructures. This leads us into an exploration of the concept of compositionality, a structural feature that enables systems to scale, which Dr. Daimler argues is the future of IT infrastructure. We discuss how the way we apply AI to data is constantly changing, with data sources growing quadratically, and how this necessitates that AI specialists understand newer forms of math, such as category theory. Towards the end of our discussion, we move on to the subject of adopting AI in technologies that lives depend on, and Dr. Daimler gives his recommendation for how to engender trust amongst the larger population.
Key Points From This Episode:
Tweetables:
“You can create data that doesn’t add more fidelity to the knowledge you’re looking to gain for better business decisions and that is one of the limitations that I saw expressed in the government and other large organizations.” — @ead [0:01:32]
“That’s the world, is compositionality. That is where we are going and the math that supports that, type theory, categorical theory, categorical logic, that’s going to sweep away everything underneath IT infrastructure.” — @ead [0:10:23]
“At the trillions of data, a trillion data sources, each growing quadratically, what we need is category theory.” — @ead [0:13:51]
“People die and the way to solve that problem when you are talking about these life and death contexts for commercial airplane manufacturers or in energy exploration where the consequences of failure can be disastrous is to bring together the sensibilities of probabilistic AI and deterministic AI.” — @ead [0:24:07]
“Circuit breakers, oversight, and data lineage, those are three ways that I would institute a regulatory regime around AI and algorithms that will engender trust amongst the larger population.” — @ead [0:35:12]
Links Mentioned in Today’s Episode:
Irrespective of the application or the technology, a common problem among AI professionals always seems to be data. Is there enough of it? What do we prioritize? Is it clean? How do we annotate it? Today’s guest, however, believes that AI is not data-limited but compute-limited. Joining us to share some very interesting insights on the subject is Slater Victoroff, Founder and Chief Technology Officer at Indico, an unstructured data platform that enables users to build innovative, mission-critical enterprise workflows that maximize opportunity, reduce risk, and accelerate revenue. Slater explains how he came to co-found Indico Data despite having previously declared that he believed deep learning was dead. He explains what changed to unlock deep learning, how he was influenced by the AlexNet paper, and how Indico goes about solving the problem of unstructured data.
Key Points From This Episode:
Tweetables:
“Deep learning is particularly useful for these sorts of unstructured use-cases, image, text, audio. And it’s an incredibly powerful tool that allows us to attack these use cases in a way that we fundamentally weren’t able to otherwise.” — @sl8rv [0:02:44]
“By and large, AI today is not data-limited, it is compute limited. It is the only field in software that you can say that.” — @sl8rv [0:19:27]
“That’s really this next frontier though: This is where transfer learning is going next, this idea ‘Can I take visual information and language information? Can I understand that together in a comprehensive way, and then give you one interface to learn on top of that consolidated understanding of the world?’” — @sl8rv [0:26:05]
“We have gone from asking the question ‘Is transfer learning possible?’ to asking the question ‘What does it take to be the best in the world at transfer learning?’” — @sl8rv [0:27:03]
Links Mentioned in Today’s Episode:
"Visualizing and Understanding Convolutional Networks"
The innovations that drive space exploration not only aid us in discovering other worlds, but they also benefit us right here on Earth. Today’s guest is Shelli Brunswick, who joins us to talk about the role of AI in space exploration and how the ‘space ecosystem’ can create jobs and career opportunities on Earth. Shelli is the COO at the Space Foundation and was selected as the 2020 Diversity and Inclusion Officer and Role Model of the Year by WomenTech Network and a Woman of Influence by the Colorado Springs Business Journal. We kick our discussion off by hearing how Shelli got to her current role and what it entails. She talks about how connected the space industry has become to many others, and how this amounts to a ‘space ecosystem’: a rich field for opportunity, innovation, and commerce. We talk about the many innovations that have stemmed from space exploration, the role they play on this planet, and the possibilities this holds as the space ecosystem continues to grow. She gets into the programs at the Space Foundation to encourage entrepreneurship and the ways that innovators can direct their efforts to participate in the space ecosystem. We also explore the many ways that AI plays a role in the space ecosystem and how the AI being utilized across industries on Earth will find later applications in space. Tune in today to learn more!
Key Points From This Episode:
Tweetables:
“What we really need to do is wrap it back to how that space technology, that space innovation, that investing in space, benefits us right here on planet earth and creates jobs and career opportunities.” — @shellibrunswick [0:05:52]
“The sky is not the limit [for the role that] AI can play in this.” — @shellibrunswick [0:12:12]
“It is the Wild West. It is exciting and, if you want to be an entrepreneur, buckle in because there is an opportunity for you!” — @shellibrunswick [0:20:36]
“You can sit in the Space Symposium sessions and hear what are those governments investing in, what are those companies investing in, and how can you as an entrepreneur create a product or service that’s related to AI that helps them fill that capability gap?” — @shellibrunswick [0:22:00]
Links Mentioned in Today’s Episode:
A future filled with autonomous vehicles promises to be a driving utopia: maximum-efficiency navigation that decreases traffic and congestion, safety features that drastically reduce collisions with other cars, bikes, and pedestrians, and an electric-first approach that lowers greenhouse gas emissions.
But as today’s guest asserts on the back of her extensive research, the implications of a huge increase in autonomous vehicles on our streets aren’t rosy by default. Sarah Barnes works on the micro-mobility team at Lyft and has published a variety of works documenting the expected implications of more autonomous vehicles in major metropolitan areas: implications that are good, bad, and ugly. Sarah argues that without a serious focus on the three transport revolutions of making transport shared, electric, AND autonomous, congestion and pollution could be here to stay. Sarah walks us through what the various implications are, and how local governments and AI practitioners can partner on policy and technology to create a future that works for everyone.
Arria is a Natural Language Generation company that replicates the human process of expertly analyzing and communicating data insights. We caught up with their CTO, Neil Burnett, to learn more about how Arria's technology goes beyond the standard rules-based NLP approach, as well as how the technology develops and grows once it's placed in the hands of the consumer. Neil explains the huge opportunity within NLG, and how solving for seamless language-based communication between humans and machines will result in increased trust and widespread adoption of AI/ML technologies.
Traditional LiDAR systems require moving parts to operate, making them less cost-effective, less robust, and less safe. Cibby Pulikkaseril is the Founder and CTO of Baraja, a company that has reinvented LiDAR for self-driving vehicles by using a color-changing laser routed by a prism. After his Ph.D. in lasers and fiber-optic communications, Cibby got a job at a telecom equipment company, and that is when he discovered that a laser used in DWDM networks could be used to reinvent LiDAR. By joining this conversation, you’ll hear exactly how Baraja’s LiDAR technology works and what this means for the future of autonomous vehicles. Cibby also talks about some of the upcoming challenges we will face in the world of self-driving cars and the solutions his innovation offers. Furthermore, Cibby explains what spectrum scan LiDAR can offer the field of robotics more broadly.
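To make the steering mechanism easier to picture, here is a toy sketch of the idea of mapping laser wavelength to beam angle. It assumes a simple linear dispersion relation, and every constant in it is an illustrative assumption rather than a detail of Baraja’s actual optics:

```python
# Toy model of wavelength-steered ("spectrum-scan") beam steering.
# Assumption: a prism-like element gives an approximately linear mapping
# from laser wavelength to exit angle over a narrow band. The constants
# below are illustrative, not Baraja's specifications.

WAVELENGTH_MIN_NM = 1530.0   # assumed tunable-laser range (telecom C-band)
WAVELENGTH_MAX_NM = 1565.0
ANGLE_MIN_DEG = -15.0        # assumed angular field of view
ANGLE_MAX_DEG = 15.0

def steering_angle_deg(wavelength_nm: float) -> float:
    """Map a laser wavelength to a beam angle via the assumed linear dispersion."""
    if not WAVELENGTH_MIN_NM <= wavelength_nm <= WAVELENGTH_MAX_NM:
        raise ValueError("wavelength outside the tunable range")
    fraction = (wavelength_nm - WAVELENGTH_MIN_NM) / (WAVELENGTH_MAX_NM - WAVELENGTH_MIN_NM)
    return ANGLE_MIN_DEG + fraction * (ANGLE_MAX_DEG - ANGLE_MIN_DEG)

# Sweeping the laser's color scans the scene with no moving parts.
for step in range(8):
    wl = WAVELENGTH_MIN_NM + step * (WAVELENGTH_MAX_NM - WAVELENGTH_MIN_NM) / 7
    print(f"{wl:8.2f} nm -> {steering_angle_deg(wl):6.2f} deg")
```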
Key Points From This Episode:
Tweetables:
“We started to think, what else could we do with it. The insight was that if we could get the laser light out of the fiber and into free space, then we could start doing LiDAR.” — Cibby Pulikkaseril [0:01:23]
“We were excited by this idea that there was going to be a change in the future of mobility and we can be a part of that wave.” — Cibby Pulikkaseril [0:02:13]
“We are the inventors of what we call spectrum scan LiDAR that is harnessing the natural phenomenon of the color of light to be able to steer a beam without any moving parts.” — Cibby Pulikkaseril [0:03:37]
“We had this insight which is that if you can change the color of light very rapidly, by coupling that into prism-like optics, this can route the wavelengths based on the color and so you can steer a beam without any moving parts.” — Cibby Pulikkaseril [0:03:57]
Links Mentioned in Today’s Episode:
Cibby Pulikkaseril on LinkedIn
Academic turned entrepreneur Michel Valstar joins How AI Happens to explain how his behaviomedics company, Blueskeye AI, prioritizes building trust with their users. Much of the approach features data opt-ins and on-device processing, which necessarily results in less data collection. Michel explains how his team is able to continue gleaning meaningful insight from smaller portions of data than your average AI practitioner is used to.
Michel Valstar on LinkedIn
Joining us today is Senior Director at Facebook AI, Manohar Paluri. Mano discusses the biggest challenges facing the field of computer vision, and the commonalities and differences between first- and third-person perception. Manohar dives into the complexity of detecting first-person perception, and how to overcome the privacy and ethical issues of egocentric technology. Manohar breaks down the mechanisms underlying AI based on decision trees compared to AI based on real-world data, and how they result in two different ideals: transparency or accuracy.
Key Points From This Episode:
Tweetables:
“What I tell many of the new graduates when they come and ask me about ‘Should I do my Ph.D. or not?’ I tell them that ‘You’re asking the wrong question’. Because it doesn’t matter whether you do a Ph.D. or you don’t do a Ph.D., the path and the journey is going to be as long for anybody to take you seriously on the research side.” — Manohar Paluri [0:02:40]
“Just to give you a sense, there are billions of entities in the world. The best of the computer vision systems today can recognize in the order of tens of thousands or hundreds of thousands, not even a million. So abandoning the problem of core computer vision and jumping into perception would be a mistake in my opinion. There is a lot of work we still need to do in making machines understand this billion entity taxonomy.” — Manohar Paluri [0:11:33]
“We are in the research part of the organization, so whatever we are doing, it’s not like we are building something to launch over the next few months or a year, we are trying to ask ourselves how does the world look like three, five, ten years from now and what are the technological problems?” — Manohar Paluri [0:20:00]
“So my hope is, once you set a standard on transparency while maintaining the accuracy, it will be very hard for anybody to justify why they would not use such a model compared to a more black-box model for a little bit more gain in accuracy.” — Manohar Paluri [0:32:55]
Links Mentioned in Today’s Episode:
In recent years, the focus of AI developers has been to implement technologies that replace basic human labor. Talking to us today about why this is the wrong application for AI (right now) is Katya Klinova, the Head of AI, Labor, and the Economy at The Partnership on AI. Tune in to find out why replacing human labor doesn't benefit the whole of humanity, and what our focus should be instead. We delve into the threat of "so-so technologies" and what the developer's role should be in approaching ethical vendors and looking after the workers supplying them with data. Join us to find out more about how AI can be used to better the whole of society if there’s a shift in the field’s current aims.
Key Points From This Episode:
Tweetables:
“Creating AI that benefits all is actually a very large commitment and a statement, and I don't think many companies have really realized or thought through what they're actually saying in the economic terms when they're subscribing to something like that.” — @klinovakatya [0:09:45]
"It’s not that you want to avoid all kinds of automation, no matter what. Automation, at the end of the day, has been the force that lifted living conditions and incomes around the world, and has been around for much longer than AI." — @klinovakatya [0:11:28]
“We compensate people for the task or for their time, but we are not necessarily compensating them for the data that they generate that we use to train models that can displace their jobs in the future.” — @klinovakatya [0:14:49]
"Might we be automating too much for the kind of labor market needs that we have right now?" — @klinovakatya [0:23:14]
”It’s not the time to eliminate all of the jobs that we possibly can. It’s not the time to create machines that can match humans in everything that they do, but that’s what we are doing.” — @klinovakatya [0:24:50]
Links Mentioned in Today’s Episode:
"Automation and New Tasks: How Technology Displaces and Reinstates Labor"
In this episode, we talk to Stefan Scherer (CTO of Embodied) about why he decided to focus on the more nuanced challenge of developing children’s social-emotional skills. Stefan takes us through how encouraging children to mentor Moxie (a friendly robot) through social interaction helps them develop their interpersonal relationships. We dive into the relevance of scripted versus unscripted conversation in different AI technologies, and how Embodied taught Moxie to define abstract concepts such as "kindness".
Key Points From This Episode:
Tweetables:
“Human behavior is very complex, and it gives us a window into our soul. We can understand so much more than just language from human behavior, we can understand an individual's wellbeing and their abilities to communicate with others.” — Stefan Scherer [0:01:04]
"It is not sufficient to work on the easy challenges at first and then expand from there. No, as a startup you have to tackle the hard ones first because that's where you set yourself apart from the rest." — Stefan Scherer [0:04:53]
“Moxie comes into the world of the child with the mission to basically learn how to be a good friend to humans. And Moxie puts the child into this position of teaching Moxie about how to do that.” — Stefan Scherer [0:17:40]
"One of the most important aspects of Moxie is that Moxie doesn't serve as the destination, Moxie is really a springboard into life." — Stefan Scherer [0:18:29]
“We did not want to overengineer Moxie, we really wanted to basically afford the ability to have a conversation, to be able to multimodally interact, and yet be as frugal with the amount of concepts that we added or the amount of capabilities that we added.” — Stefan Scherer [0:27:17]
Links Mentioned in Today’s Episode:
Today we talk to Celtra Founder and CPO, Matevž Klanjšek, about how AI can be used to accelerate creativity, and what would happen if it eventually replaces humans in the creative space. We discuss the limitations of the tools currently available, why Matevž isn’t interested in teaching AI variance, and how humans and AI need to work together in advertising. Tune in to hear what the future of advertising looks like, and why the human-AI feedback loop is essential. Matevž tells us about the bizarre adverts he’s seen AI produce, and talks us through the evolution of human creativity: from paintings to photographs, and how humans stay relevant when we invent something new.
Key Points From This Episode:
Tweetables:
“It just makes sense to automate [repetitive tasks] as much as possible, and remove that from the equation, let human genius think about big ideas and communication strategies, creativity and so on.” — @hyperhandsome [0:03:47]
“I think on all of the levels, across the creative process, we always try to have humans involved. It’s almost like a basic principle.” — @hyperhandsome [0:14:47]
“So that’s the nice thing, actually, perhaps using pretty advanced AI to really inspire creativity in humans instead of replacing it. It’s kind of beautiful in a way.” — @hyperhandsome [0:17:24]
"I think the whole point of advertising, and humanity in general is precisely to be always different, to invent new things." — @hyperhandsome [0:18:12]
“I think technology always gets to a point where it can perfectly imitate, and do it better than humans can, but then we invent something new.” — @hyperhandsome [0:19:52]
Links Mentioned in Today’s Episode:
Joining us today is Dr. Bill Porto, Redpoint Senior Analytics Engineer and storied AI researcher, academic, and developer. Bill shares his current projects, including pattern recognition and optimization models, and reveals what it was like to work with the father of Evolutionary Programming, Dr. Larry Fogel. We touch on a new definition for computational intelligence, and talk about where evolutionary programming is in use today, before exploring the fact that evolution is not simply survival of the fittest, but increases variance by retaining less-perfect fits. What's more, we define evolution as adaptation in a dynamic environment.
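As a rough illustration of the retention idea Bill describes, here is a minimal sketch of an evolutionary loop that occasionally keeps the weaker of two competing solutions, preserving variance in the population. The objective function, rates, and population size are illustrative assumptions, not anything from Bill’s own work:

```python
import random

def fitness(x: float) -> float:
    # Toy objective: solutions closer to 3.0 are fitter.
    return -(x - 3.0) ** 2

def evolve(pop_size=20, generations=50, mutation_scale=0.5, underdog_rate=0.2):
    population = [random.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Classic evolutionary programming mutates each parent (no crossover).
        offspring = [x + random.gauss(0.0, mutation_scale) for x in population]
        candidates = population + offspring
        random.shuffle(candidates)
        survivors = []
        # Pairwise tournaments: usually keep the fitter solution, but with
        # probability `underdog_rate` keep the less fit one -- this is the
        # variance-preserving retention of "less perfect fits".
        for a, b in zip(candidates[::2], candidates[1::2]):
            better, worse = (a, b) if fitness(a) >= fitness(b) else (b, a)
            survivors.append(worse if random.random() < underdog_rate else better)
        population = survivors
    return max(population, key=fitness)

print(evolve())  # expect a value near 3.0
```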
Key Points From This Episode:
Tweetables:
“Computational intelligence is just taking cues from nature. And nature adaptively learns using iterative evaluation selection. So why not put that into an application on a computer?” — Bill Porto [0:04:18]
“It’s not really survival of the fittest, that’s the common moniker for it, in reality evolution favors the solutions that are most fit, but it tends to retain a number of less fit solutions, and one of the benefits of that is it increases the variance in the number of solutions.” — Bill Porto [0:07:20]
“If you spend a lot of time getting a perfect solution, by the time you have it, it very well may be stale.” — Bill Porto [0:15:17]
Links Mentioned in Today’s Episode:
CEO Shahan Lilja joins to explain how Mavenoid is able to deploy custom chatbots in a matter of minutes, the processes by which these tools get better over time, and how the ability to automatically turn technical expertise into an algorithm that can be utilized at scale is amplifying human intelligence.
Rob Carpenter is the founder and CEO of Valyant AI, which is on a journey to solve the complex problem of conversational AI in the food service industry. In today’s episode, Rob explains the three main components of AI speech processing (and the challenges that arise at each of these nodes), how conversational AI has the capacity to improve conditions for human workers in the food service industry, and what this technology is going to be like in the future. After this episode, you’ll understand the importance of being more thoughtful about how you communicate your next burger and fries order to a conversational AI system.
Key Points From This Episode:
Tweetables:
“I thought the hologram was the hard part and that the conversational AI was solved, but it was basically the inverse of that.” — Rob Carpenter [0:06:47]
“There’s benefits when you get into a new industry or technology not knowing the problems, because you don’t know what your limitations are. I think a lot of times that frees you up to be more creative and innovative.” — Rob Carpenter [0:08:38]
“If I was to postulate where things would end up, I’d say it’s probably a 90/10. 90% is that the technology has to be better, and keep getting better. 90% of everything needs to be handled by the AI. The other 10%, people need to be more thoughtful when they communicate with these systems.” — Rob Carpenter [0:17:22]
“There’s 1.7 million unfilled positions in the restaurant industry right now. 1 in 6 of every position available right now is in restaurants.” — Rob Carpenter [0:20:26]
“Innovation is not only built into economies, but it’s essential for their health and long-term safety.” — Rob Carpenter [0:23:28]
Links Mentioned in Today’s Episode:
Dr. John Kane, Head of Signal Processing & Machine Learning at Cogito, explains the challenges brought on by the oft-repeated truism "speech is more than text", and how Cogito addresses these challenges to deliver real-time conversational insight to their users. Later, John explains the holistic approach to ensuring machine learning technology is created in a bias-free environment.
Data & AI Solution Specialist for Microsoft, Priyanka Roy, explains how Design Thinking is a crucial approach in the development of any AI technology, and how its proper utilization results in better products and more effective teams. Priyanka also outlines the key pillars of a successful data governance approach, and the utility of "thick" data.
Walmart's SVP of Global Technology Anshu Bhardwaj and VP of Emerging Technology Desirée Gosby join Sama CEO Wendy Gonzalez for a roundtable discussion about representation in AI, explainable & ethical AI, and how representative teams are a key way to reduce biases in AI technology.
Theresa Benson, Product Storyteller for InRule Technology, explains the opportunity of combining declarative AI with predictive AI, and how InRule Technology is using predictive AI to empower non-AI experts to develop algorithms from their existing domain knowledge.
LeanTaaS CEO Mohan Giridharadas explains how his team is solving the supply-and-demand challenge within healthcare, how the algorithms are stress-tested to handle extreme edge cases, and how drift is defined, detected, and resolved in a customer-centric fashion.
Anna Susmelj explains her research at Facebook AI developing optimal drug combinations for the treatment of complex diseases, as well as her background in causality research.
Anna's Facebook Research: AI predicts effective drug combinations to fight complex diseases faster
Laurence Moroney is an industry veteran who has authored several books on AI development, taught a series of AI/ML MOOCs, and even advises the British Parliament on its AI approach. His mission at Google is to evangelize the opportunity of AI and work towards democratizing access to the development of this technology.
Laurence joined the podcast to discuss the nature of AI hype cycles, how AI practitioners can navigate them within their own organizations, and some of the amazing opportunities coming into play when access to AI & ML is made global.
Pre-Order Laurence's new book, AI and Machine Learning for On-Device Development: A Programmer's Guide
Study with Laurence on Coursera
Subscribe to the TensorFlow YouTube Channel
Kelvin Wursten, leader of PointClickCare's Data Science team, explains how they are utilizing AI to help solve complicated supply-versus-demand calculations in hospital emergency departments, as well as the challenge of balancing building awesome technology with prioritizing the user's needs.
Igor Susmelj, Co-Founder of Lightly.ai, explains how most companies don't have a problem of too little data, but rather of far too much irrelevant data. He details Lightly's approach of utilizing self-supervised learning to pare down massive data sets into something that can be useful to a supervised learning approach.
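A common way to realize this pared-down selection, sketched here under illustrative assumptions (this is a generic technique, not Lightly's implementation), is to embed every sample with a self-supervised encoder and then greedily pick a diverse subset via farthest-point sampling, so near-duplicates don't eat the labeling budget:

```python
import numpy as np

def select_diverse_subset(embeddings: np.ndarray, budget: int) -> list:
    """Greedy farthest-point sampling over embedding space."""
    selected = [0]  # start from an arbitrary sample
    # Distance from every point to its nearest already-selected point.
    min_dist = np.linalg.norm(embeddings - embeddings[0], axis=1)
    while len(selected) < budget:
        idx = int(np.argmax(min_dist))         # farthest from the current subset
        selected.append(idx)
        dist = np.linalg.norm(embeddings - embeddings[idx], axis=1)
        min_dist = np.minimum(min_dist, dist)  # refresh nearest-selected distances
    return selected

# In practice the embeddings would come from a self-supervised model
# (e.g. a SimCLR-style encoder); random vectors stand in here.
emb = np.random.rand(1000, 128)
subset = select_diverse_subset(emb, budget=100)
print(len(subset), "samples kept for annotation")
```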
Hyperscience co-founder and CEO Peter Brodsky explains why standards are fundamentally at odds with innovation, and why Hyperscience's approach is to make technology that is backwards-compatible with reality.
Key topics:
Director of AI Research Ram explains how ManageEngine's tools predict anomalies, the long term utility of Human-in-the-Loop AI, and how they've used sentiment analysis & transfer learning to overcome a lack of data.
Gyant CEO & Co-Founder Stefan Behrens explains the challenges inherent in creating datasets for healthcare purposes, as well as the importance of building interpretability into their AI tools.
Sama CEO Wendy Gonzalez explains how the Sama Digital Basics program teaches AI skills to individuals in Africa's largest slum, and reflects on the findings of MIT's six-year study measuring the program's effect.
RCT Results from MIT: Evaluating the Impact of Sama’s Training and Job Programs
CTO George Corugedo explains how the relationship between physics and math is a model for the relationship between business questions and artificial intelligence, as well as Redpoint Global's ensemble approach to optimization.
Adnan Khaleel, Sr. Director of Global Sales Strategy for HPC & AI at Dell, explains how companies are using HPC and containerization to scale their AI implementations, as well as how Dell parallelized a radiology algorithm, drastically improving both speed and accuracy.