Artificiality was founded in 2019 to help people make sense of artificial intelligence. We are artificial philosophers and meta-researchers. We believe that understanding AI requires synthesizing research across disciplines: behavioral economics, cognitive science, complexity science, computer science, decision science, design, neuroscience, philosophy, and psychology. We publish essays, podcasts, and research on AI, including a Pro membership that provides leaders with advanced research, actionable intelligence, and insights for applying AI. Learn more at www.artificiality.world.
The podcast Artificiality: Minds Meeting Machines is created by Helen and Dave Edwards. The podcast and its artwork are embedded on this page using the public podcast feed (RSS).
We're excited to welcome to the podcast Matt Beane, Assistant Professor at UC Santa Barbara and the author of the book "The Skill Code: How to Save Human Ability in an Age of Intelligent Machines."
Matt’s research investigates how AI is changing the traditional apprenticeship model, creating a tension between short-term performance gains and long-term skill development. His work has particularly focused on the relationship between junior and senior surgeons in the operating theater. As he told us, "In robotic surgery, I was seeing that the way technology was being handled in the operating room was assassinating this relationship." He observed that junior surgeons now often just set up the robot and watch the senior surgeon operate for hours, epitomizing a broader trend where AI and advanced technologies are reshaping how we transfer skills from experts to novices.
In "The Skill Code," Matt argues that three key elements are essential for developing expertise: challenge, complexity, and connection. He points out that real learning often involves discomfort, saying, "Everyone intuitively knows when you really learned something in your life. It was not exactly a pleasant experience, right?"
Matt's research shows that while AI can significantly boost productivity, it may be undermining critical aspects of skill development. He warns that the traditional model of "See one, do one, teach one" is becoming "See one, and if-you're-lucky do one, and not-on-your-life teach one." In our conversation, we explore these insights and discuss how we might preserve human ability in an age of intelligent machines.
Let’s dive into our conversation with Matt Beane on the future of human skill in an AI-driven world.
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music
We're excited to welcome to the podcast Emily M. Bender, professor of computational linguistics at the University of Washington.
As our listeners know, we enjoy tapping expertise in fields adjacent to the intersection of humans and AI. We find Emily’s expertise in linguistics to be particularly important when understanding the capabilities and limitations of large language models—and that’s why we were eager to talk with her.
Emily is perhaps best known in the AI community for coining the term "stochastic parrots" to describe these models, highlighting their ability to mimic human language without true understanding. In her paper "On the Dangers of Stochastic Parrots," Emily and her co-authors raised crucial questions about the environmental, financial, and social costs of developing ever-larger language models. Emily has been a vocal critic of AI hype and her work has been pivotal in sparking critical discussions about the direction of AI research and development.
In this conversation, we explore the issues of current AI systems with a particular focus on Emily’s view as a computational linguist. We also discuss Emily's recent research on the challenges of using AI in search engines and information retrieval systems, and her description of large language models as synthetic text extruding machines.
Let's dive into our conversation with Emily Bender.
----------------------
We're excited to welcome to the podcast John Havens, a multifaceted thinker at the intersection of technology, ethics, and sustainability. John's journey has taken him from professional acting to becoming a thought leader in AI ethics and human wellbeing.
In his 2016 book, "Heartificial Intelligence: Embracing Our Humanity to Maximize Machines," John presents a thought-provoking examination of humanity's relationship with AI. He introduces the concept of "codifying our values" - our crucial need as a species to define and understand our own ethics before we entrust machines to make decisions for us.
Through an interplay of fictional vignettes and real-world examples, the book illuminates the fundamental relationship between human values and machine intelligence, arguing that while AI can measure and improve wellbeing, it cannot automate it. John advocates for greater investment in understanding our own values and ethics to better navigate our relationship with increasingly sophisticated AI systems.
In this conversation, we dive into the key ideas from "Heartificial Intelligence" and their profound implications for the future of both human and artificial intelligence.
Let's dive into our conversation with John Havens.
We’re excited to welcome to the podcast Leslie Valiant, a pioneering computer scientist and Turing Award winner renowned for his groundbreaking work in machine learning and computational learning theory. In his seminal 1984 paper, Leslie introduced the concept of Probably Approximately Correct (PAC) learning, kick-starting a new era of research into what machines can learn.
Now, in his latest book, The Importance of Being Educable: A New Theory of Human Uniqueness, Leslie builds upon his previous work to present a thought-provoking examination of what truly sets human intelligence apart. He introduces the concept of "educability" - our unparalleled ability as a species to absorb, apply, and share knowledge.
Through an interplay of abstract learning algorithms and relatable examples, the book illuminates the fundamental differences between human and machine learning, arguing that while learning is computable, today's AI is still a far cry from human-level educability. Leslie advocates for greater investment in the science of learning and education to better understand and cultivate our species' unique intellectual gifts.
In this conversation, we dive deep into the key ideas from The Importance of Being Educable and their profound implications for the future of both human and artificial intelligence.
Let’s dive into our conversation with Leslie Valiant.
We’re excited to welcome to the podcast Jonathan Feinstein, professor at the Yale School of Management and author of Creativity in Large-Scale Contexts: Guiding Creative Engagement and Exploration.
Our interest in creativity is broader than the context of creative professions like art, design, and music. We see creativity as the foundation of how we move ahead as a species, including our culture, science, and innovation. We’re interested in the huge combinatorial space of creativity, linked together by complex networks. And that interest led us to Jonathan.
Through his research and interviews with a wide range of creative individuals, from artists and writers to scientists and entrepreneurs, Jonathan has developed a framework for understanding the creative process as an unfolding journey over time. He introduces key concepts such as guiding conceptions, guiding principles, and the notion of finding "golden seeds" amidst the vast landscape of information and experiences that shape our creative context.
By looking at creativity mathematically, Jonathan reveals the tremendous beauty of the creative process: intuitive and exploratory, yet supported by mathematics, machines, knowledge, and structure. He shows how creativity is much broader and more interesting than the stereotypical idea of a singular lightbulb moment.
In our conversation, we explore some of the most surprising and counterintuitive findings from Jonathan's work, how his ideas challenge conventional wisdom about creativity, and the implications for individuals and organizations seeking to innovate in an increasingly AI-driven world.
Let’s dive into our conversation with Jonathan Feinstein.
We’re excited to welcome to the podcast Karaitiana Taiuru. Dr. Taiuru is a leading authority and visionary Māori technology ethicist specialising in Māori rights with AI, Māori data sovereignty, and governance of emerging digital technologies and biological sciences.
Karaitiana has been a champion for Māori cultural and intellectual property rights in the digital space since the late 1990s. With the recent emergence of AI into the mainstream, Karaitiana sees both opportunities and risks for indigenous peoples like the Māori. He believes AI can either be a tool for further colonization and cultural appropriation, or it can be harnessed to empower and revitalize indigenous languages, knowledge, and communities.
In our conversation, Karaitiana shares his vision for incorporating Māori culture, values, and knowledge into the development of AI technologies in a way that respects data sovereignty. We explore the importance of Māori representation in the tech sector, the role of AI in language and cultural preservation, and how indigenous peoples around the world can collaborate to shape the future of AI. Karaitiana offers a truly unique and thought-provoking perspective that we believe is crucial as we grapple with the societal implications of artificial intelligence. We learned a tremendous amount from our conversation and we're sure you will too.
Let’s dive into our conversation with Karaitiana Taiuru.
We’re excited to welcome to the podcast Omri Allouche, the VP of Research at Gong, an AI-driven revenue intelligence platform for B2B sales teams. Omri has had a fascinating career journey, earning a PhD in computational ecology before moving into the world of AI startups. At Gong, Omri leads research into how AI and machine learning can transform the way sales teams operate.
In our conversation today, we'll explore Omri's perspective on managing AI research and innovation. We'll discuss Gong’s approach to analyzing sales conversations at scale, and the challenges of building AI systems that sales reps can trust. Omri will share how Gong aims to empower sales professionals by automating mundane tasks so they can focus on building relationships and thinking strategically.
Let’s dive into our conversation with Omri Allouche.
About Artificiality from Helen & Dave Edwards:
Artificiality is a research and services business founded in 2019 to help people make sense of artificial intelligence and complex change. Our weekly publication provides thought-provoking ideas, science reviews, and market research, and our monthly research releases provide leaders with actionable intelligence and insights for applying AI in their organizations. We provide research-based and expert-led AI strategy and complex change management services to organizations around the world.
We are artificial philosophers and meta-researchers who aim to make the philosophical more practical and the practical more philosophical. We believe that understanding AI requires synthesizing research across disciplines: behavioral economics, cognitive science, complexity science, computer science, decision science, design, neuroscience, philosophy, and psychology. We are dedicated to unraveling the profound impact of AI on our society, communities, workplaces, and personal lives.
Subscribe for free at https://www.artificiality.world.
Learn more about Sonder Studio
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world.
#ai #artificialintelligence #generativeai #airesearch #complexity #futureofai
We’re excited to welcome to the podcast Susannah Fox, a renowned researcher who has spent over 20 years studying how patients and caregivers use the internet to gather information and support each other. Susannah has collected countless stories from the frontlines of healthcare and has keen insights into how patients are stepping into their power to drive change.
Susannah recently published a book called "Rebel Health: A Field Guide to the Patient-Led Revolution in Medical Care." In it, she introduces four key personas that represent different ways patients and caregivers are shaking up the status quo in healthcare: seekers, networkers, solvers, and champions.
The book aims to bridge the divide between the leaders at the top of the healthcare system and the patients, survivors, and caregivers on the ground who often have crucial information and ideas that go unnoticed. By profiling examples of patient-led innovation, Susannah hopes to inspire healthcare to become more participatory.
In our conversation, we dive into the insights from Susannah's decades of research, hear some compelling stories of patients, and discuss how medicine can evolve to embrace the power of peer-to-peer healthcare. As you’ll hear, this is a highly personal episode as Susannah’s work resonates with both of us and our individual and shared health experiences.
Let’s dive into our conversation with Susannah Fox.
We’re excited to welcome to the podcast Dr. Angel Acosta, an expert on healing-centered education and leadership. Angel runs the Acosta Institute which helps communities process trauma and build environments for people to thrive.
He also facilitates leadership programs at the Garrison Institute that support the next generation of contemplative leaders. With his background in social sciences, curriculum design, and adult education, Angel has been thinking deeply about how artificial intelligence intersects with mindfulness, social justice and education.
In our conversation, we explore how AI can help or hinder our capacity for contemplation and healing. For example, does offloading cognitive tasks to AI tools like GPT create more mental space for mindfulness? How do we ensure these technologies don’t increase anxiety and threaten our sense of self?
We also discuss the promise and perils of AI for transforming education. What roles might AI assistants play in helping educators be more present with students? How can we design assignments that account for AI without compromising learning? What would a decolonized curriculum enabled by AI look like?
And we envision more grounded, humanistic uses of rapidly evolving AI—from thinking of it as "ecological technology" interdependent with the natural world, to leveraging its pattern recognition in service of collective healing and wisdom. What guiding principles do we need for AI that enhances both efficiency and humanity? How can we consciously harness it to create the conditions for people and communities to thrive holistically?
We’d like to thank our friends at the House of Beautiful Business for sparking our relationship with Angel—we highly recommend you check out their events and join their community.
Let’s dive into our conversation with Angel Acosta.
We're excited to welcome Doug Belshaw to the show today. Doug is a founding member of the We Are Open Co-op which helps organizations with sensemaking and digital transformation.
Doug coined the term "serendipity surface" to describe cultivating an attitude of curiosity and increasing the chance encounters we have by putting ourselves out there. We adopted the term quite some time ago and were eager to talk with Doug about how he thinks about serendipity surfaces in the age of generative AI.
As former Head of Web Literacy at Mozilla and now pursuing a master's degree in systems thinking, Doug has a wealth of knowledge on topics spanning education, technology, productivity and more. In our conversation today, we'll explore concepts like productive ambiguity, cognitive ease, and rewilding your attention. Doug shares perspectives from his unique career journey as well as personal stories and projects exemplifying the creative potential of AI. We think you’ll find this a thought-provoking discussion on human-AI collaboration, lifelong learning, digital literacy, ambiguity, and the future of work. Let’s dive into our conversation with Doug Belshaw.
We’re excited to welcome Richard Kerris, Vice President of Developer Relations and GM of Media & Entertainment at NVIDIA, to the show today. Richard has had an extensive career working with creators and developers across film, music, gaming, and more. He offers valuable insights into how AI and machine learning are transforming creative tools and workflows.
In particular, Richard shares his perspective on how these advanced technologies are democratizing access to high-end capabilities, putting more power into the hands of a broader range of storytellers. He discusses the implications of this for the media industry—will it displace roles or expand opportunities? And we explore Richard's vision for what the future may look like in 5-10 years in terms of applications being auto-generated to meet specialized user needs.
We think you’ll find the wide-ranging conversation fascinating as we explore topics from AI-enhanced content creation to digital twins and AI assistants. Let’s dive into our discussion with Richard Kerris.
We're excited to welcome Tyler Marghetis, Assistant Professor of Cognitive & Information Sciences at the University of California, Merced, to the show today. Tyler studies what he calls the "lulls and leaps" or "ruts and ruptures" of human imagination and experience.
He's fascinated by how we as humans can get stuck in certain patterns of thinking and acting, but then also occasionally experience radical transformations in our perspectives. In our conversation, Tyler shares with us some of his lab's fascinating research into understanding and even predicting these creative breakthroughs and paradigm shifts. You'll hear about how he's using AI tools to analyze patterns in things like Picasso's entire body of work over his career. Tyler explains why he believes isolation and slowness are actually key ingredients for enabling many of history's greatest creative leaps. And he shares with us how his backgrounds in high-performance sports and in the LGBTQ community shape his inclusive approach to running his university research lab.
It's a wide-ranging and insightful discussion about the complexity of human creativity and innovation. Let's dive into our interview with Tyler Marghetis.
Why is scientific progress slowing down? That's a question that's been on the minds of many. But before we dive into that, let's ponder this—how do we even know that scientific progress is decelerating? And in an era where machines are capable of understanding complexities that sometimes surpass human cognition, how should we approach the future of knowledge?
Joining us in this exploration is Professor James Evans from the University of Chicago. As the director of the Knowledge Lab at UChicago, Professor Evans is at the forefront of integrating machine learning into the scientific process. His work is revolutionizing how new ideas are conceived, shared, and developed, not just in science but in all knowledge and creative processes.
Our conversation today is a journey through various intriguing landscapes, delving into how new ideas are conceived, shared, and developed.
We're also thrilled to discuss Professor Evans' upcoming book, "Knowing," which promises to be a groundbreaking addition to our understanding of these topics.
So, whether you're a scientist, a creative, a business leader, a data scientist or just someone fascinated by the interplay of human intelligence and artificial technology, this episode is sure to offer fresh perspectives and insights.
Few understand how to anticipate major technology shifts in the enterprise better than today's guest, Ed Sim. Ed is a pioneer in the world of venture capital, specifically focusing on enterprise software and infrastructure since 1996. He founded Boldstart in 2010 to invest at the earliest stages of enterprise software companies, growing the firm from $1M to around $375M today.
So where does an experienced investor who has seen countless tech waves come and go place his bets in this new AI-first future? That’s the key topic we dive into today.
While AI forms a core part of our dialogue, Ed emphasizes that he doesn't look at pitches and go "Oh, AI, I need to invest in that." Rather, he tries to see whether founders have identified a real pain point, have a unique approach to solving it, and can clearly articulate how they will deliver a significant improvement over the status quo. AI is an important component, of course, but it isn't, on its own, a reason to invest.
With that framing in mind, Ed shares where he is most excited to invest in light of recent generative AI breakthroughs. Unsurprisingly, AI security ranks high on his list, given enterprises' skittishness about adopting any technology that could compromise sensitive data or infrastructure. Ed saw this need early: in March 2022 he backed Protect AI, a startup that focuses specifically on monitoring and certifying the security of AI systems.
The implications of AI have branched into virtually every sector, but Ed reminds us that as investors and builders, we must stay grounded in solving real problems rather than just chasing the shiny new thing.
Key Points:
One of our research obsessions is Edge AI, through which we study opportunities to build and deploy AI on computing devices at the edge of a network. The premise is that AI in the cloud benefits from scale but is challenged by cost and privacy, and that Edge AI solves many of these challenges by eliminating cloud computing costs and keeping data within secure environments.
Given this interest, we were excited to talk with Rodrigo Liang, the Co-Founder and CEO of SambaNova Systems, which has built a platform that delivers enterprise-grade chips, software, and models in a fully integrated system, purpose-built for AI. In this interview, Rodrigo discusses how his company is enabling enterprises to adopt AI in a secure, customizable way that creates long-term value by building AI assets. SambaNova's full-stack solutions aim to simplify AI model building and deployment, especially by leveraging open-source frameworks and using modular, fine-tuned expert models tailored to clients' private data.
Key Points:
One of our long-time subscribers recently said to us: “What I love about you is that you’re regularly talking about things three years ahead of everyone else.” That inspired us to look back through our catalog of conversations to see which ones we think are most relevant now.
Today, we're revisiting one of our most thought-provoking episodes, originally recorded in April 2022, featuring Barbara Tversky, the author of "Mind in Motion: How Action Shapes Thought." This episode is a great way to start 2024 because we are all about to experience what are known as Large Multimodal Models, or LMMs: models that go beyond text to bring in more sensory modalities, including spatial information.
Tversky's insights into spatial reasoning and embodied cognition are more relevant than ever in the era of multimodal models in AI. These models, which combine text, images, and other data types, mirror our human ability to process information across various sensory inputs.
The parallels between Tversky's research and LMMs are striking. Just as our physical interactions with the world shape our cognitive processes, these AI models learn and adapt by integrating diverse data types, offering a more holistic understanding of the world.
Her work sheds light on how we might improve AI's ability to 'think' and 'reason' spatially, enhancing its application in fields ranging from navigation systems to virtual reality.
As we revisit our interview with Tversky, we're reminded of the importance of considering human-like spatial reasoning and embodied cognition in advancing AI technology.
Join us as we explore these intriguing concepts with Barbara Tversky, uncovering the essential role of spatial reasoning in both human cognition and artificial intelligence.
Barbara Tversky is an emerita professor of psychology at Stanford University and a professor of psychology at Teachers College at Columbia University. She is also the President of the Association for Psychological Science. Barbara has published over 200 scholarly articles about memory, spatial thinking, design, and creativity, and regularly speaks about embodied cognition at interdisciplinary conferences and workshops around the world. She lives in New York.
In this episode, we speak with cognitive neuroscientist Stephen Fleming about theories of consciousness and how they relate to artificial intelligence. We discuss key concepts like global workspace theory, higher order theories, computational functionalism, and how neuroscience research on consciousness in humans can inform our understanding of whether machines may ever achieve consciousness. In particular, we talk with Steve about a recent research paper, Consciousness in Artificial Intelligence, which he co-authored with Patrick Butlin, Robert Long, Yoshua Bengio, and several others.
Steve provides an overview of different perspectives from philosophy and psychology on what mechanisms may give rise to consciousness. He explains global and local theories, the idea of a higher order system monitoring lower level representations, and similarities and differences between human and machine intelligence. The conversation explores current limitations in neuroscience for studying consciousness empirically and opportunities for interdisciplinary collaboration between neuroscientists and AI researchers.
Key Takeaways:
Stephen Fleming is Professor of Cognitive Neuroscience at the Department of Experimental Psychology, University College London. Steve's work aims to understand the mechanisms supporting human subjective experience and metacognition by employing a combination of psychophysics, brain imaging, and computational modeling. He is the author of "Know Thyself," a book on the science of metacognition, about which we interviewed him on Artificiality in December 2021.
Episode Notes:
2:13 - Origins of the paper Stephen co-authored on consciousness in artificial intelligence
5:17 - Discussion of demarcating intelligence vs phenomenal consciousness in AI
6:34 - Explanation of computational functionalism and mapping functions between humans and machines
13:42 - Examples of theories like global workspace theory and higher order theories
19:27 - Clarifying when sensory information reaches consciousness under global theories
23:02 - Challenges in precisely defining aspects like the global workspace computationally
28:35 - Connections between higher order theories and generative adversarial networks
30:43 - Ongoing empirical evidence still needed to test higher order theories
36:52 - Iterative process needed to update theories based on advancing neuroscience
40:40 - Open questions remaining despite foundational research on consciousness
46:14 - Mismatch between public perceptions and indicators from neuroscience theories
50:30 - Experiments probing anthropomorphism and consciousness attribution
56:17 - Surprising survey results on public views of AI experience
59:36 - Ethical issues raised if public acceptance diverges from scientific consensus
If you've used a large language model, you've likely had one or more moments of amazement as the tool immediately responded with impressive content drawn from its massive training set. But you've likely also had moments of confusion or disillusionment as the tool returned irrelevant or incorrect responses, displaying a lack of reasoning.
A recent research paper from Meta caught our eye because it proposes a new mechanism called System 2 Attention, which "leverages the ability of LLMs to reason in natural language and follow instructions in order to decide what to attend to." The name System 2 is derived from the work of Daniel Kahneman, who in his 2011 book, Thinking, Fast and Slow, differentiated between System 1 thinking, which is intuitive and near-instantaneous, and System 2 thinking, which is slower and more effortful. The Meta paper also references our friend Steven Sloman, who in 1996 made the case for two systems of reasoning: associative and deliberative (or rule-based).
Given our interest in the idea of LLMs helping people make better decisions (which often requires more deliberative thinking), we asked Steve to come back on the podcast for his reaction to this research and to generative AI in general. Yet again, we had a dynamic conversation about human cognition and modern AI, what each field is learning from the other, and a few speculations about the future. We're grateful to Steve for taking the time to talk with us again and hope he'll join us for a third time when his next book is released sometime in 2024.
Steven Sloman is a professor of cognitive, linguistic, and psychological sciences at Brown University where he has taught since 1992. He studies how people think, including how we think as a community, a topic he wrote a fantastic book about with Philip Fernbach called The Knowledge Illusion: Why We Never Think Alone. For more about that work, please check out our first interview with Steve from June of 2021.
In this episode, we speak with Julia Rhodes Davis, a Senior Advisor at Data & Society, about her recent report "Advancing Racial Equity Through Technology Policy," published by the AI Now Institute. This comprehensive report offers an in-depth examination of how the technology industry perpetuates racial inequity, along with concrete policy recommendations for reform: reforming antitrust law, ensuring algorithmic accountability, and supporting tech entrepreneurship for people of color. A critical insight from the report is that advancing racial equity requires a holistic approach.
In our interview, Julia explains how advancing racial equity requires policy change as well as coalition-building with impacted communities. She discusses the urgent need to curb algorithmic discrimination that restricts opportunities for marginalized groups. Julia also highlights positive momentum from federal and state policy efforts and encourages people to get involved with local organizations, offering a great list of organizations to consider.
Links:
Advancing Racial Equity Through Technology Policy report
Grounding her work in the problem of causation, Alicia Juarrero challenges previously held beliefs that only forceful impacts are causes. Constraints, she claims, bring about effects as well, and they enable the emergence of coherence.
Alicia is the author of multiple books, most recently "Context Changes Everything: How Constraints Create Coherence." Helen says in this interview that it feels like this book is from the future. It uses the tools of complexity science to understand identity, hierarchy, and top-down causation, and in so doing presents a new way of thinking about not just the natural world but also the artificial world. In this interview, we discuss how to use concepts from complexity, including the important role of constraints, to enlighten our perspectives on the community of humans and machines.
Jai Vipra is a research fellow at the AI Now Institute, where she focuses on competition issues in frontier AI models. She recently published the report Computational Power and AI, which examines compute as a core dependency in building large-scale AI. We found the report to be an important addition to the work covering the generative AI industry because compute is incredibly important but not well understood. In it, Jai breaks down the key components of compute, analyzes the supply chain and competitive dynamics, and aggregates the known economics. In this interview, we talk with Jai about the report, its implications, and her recommendations for industry and policy responses.
Wendy Wong is a professor of political science and principal’s research chair at the University of British Columbia where she researches and teaches about the governance of emerging technologies, human rights, and civil society/non-state actors.
In this interview, we talk with Wendy about her new book "We, the Data: Human Rights in the Digital Age," which is described as "a rallying call for extending human rights beyond our physical selves—and why we need to reboot rights in our data-intensive world." Given the explosion of generative AI and the mass data capture that fuels generative AI models, Wendy's argument for extending human rights to the digital age is very timely. We talk with her about how human rights might apply in the age of data, datafication by big tech, individuals as stakeholders in the digital world, and our awe of the human contributions that enable generative AI.
Chris Summerfield is a Professor of Cognitive Science at the University of Oxford. His work is concerned with understanding how humans learn and make decisions. He is interested in how humans acquire new concepts or patterns in data, and how they use this information to make decisions in novel settings. He's also a research scientist at DeepMind.
Earlier this year, Chris released a book called "Natural General Intelligence: How Understanding the Brain Can Help Us Build AI." This couldn't be more timely given all the talk of AGI. In this episode, we talk with Chris about his work: what he's learned about humans from studying AI and what he's learned about AI by studying humans. We discuss his aim to provide a bridge between the theories of those who study biological brains and the practice of those who seek to build artificial brains, something we find perpetually fascinating.
Michael Bungay Stanier has an extraordinary talent for distilling the complexity of human relationships into frameworks that are easy to remember and follow, doing so with just the right amount of Australian humor and plenty of vulnerability. Despite his remarkable success with books like The Coaching Habit, The Advice Trap, and How to Begin, Michael never comes across as one of those gurus who think they have all the answers. That mindset comes through perfectly in the title of his newest book, How to Work with (Almost) Anyone: not absolutely anyone, almost anyone. The book is built around five questions for building the best possible relationships, which we have found very helpful in our own working relationship.
We have grown to be friends with Michael through our repeated gatherings at the House of Beautiful Business, and we know all three of us would encourage our listeners and readers to join us at the next House.
In this interview, we talk about Michael’s new book, how to use a keystone conversation to build the best possible relationship, and we even consider how to apply Michael’s frameworks to working with generative AI.
An exploration of how we might conceptualize the design of AGI within the context of human left and right brains.
The tension between how AI and humans function points toward a unique form of cooperation. AI works through language, abstraction, and analysis, while humans rely on experience, empathy, and metaphor. AI manipulates static elements well but struggles in a changing world.
This leads to two distinct design approaches for future AI and considerations for "artificial general intelligence" (AGI).
One approach focuses on "left-brained" AI—machines that control facts with internal consistency while relying on humans for context, meaning, and care. Here, machines serve humans. This path is popular partly because developing AI that mimics the human right hemisphere's functions is so challenging.
However, we want machines that can correct contextual mistakes and understand our intended meanings. The design challenge here lies in connecting highly "left-brained" AI to holistic humans in a way that enhances human capabilities.
Alternatively, we could design AI with asymmetry, mirroring the human brain's evolution. Such AI would provide a synthesized perspective before interacting with a human, applying computational power to intuition and addressing human paradoxes. Some envision this as AGI—an all-knowing synthesis machine.
If you’ve read our show notes, you’ll know that our music was written and performed by Jonathan Coulton. I’ve known Jonathan for more than 30 years, dating back to when we sang together in college. But that’s a story for another day, or perhaps never.
Jonathan spent his first decade post-college as a software coder and then, through a bit of happenstance and throwing caution to the wind, transitioned to music. In the mid-2000s, he blazed a trail by creating his own career on the internet—without a label or any of the support that musicians normally have. While he was pushing out a new song each week as part of his Thing-A-Week Project, he became known as the “internet music-business guy” because he had successfully used the internet to build his career and a dedicated fanbase. He has since released several albums, toured plenty, and launched an annual cruise for his fans.
Throughout his career, technology, and specifically AI, has been a theme—starting with his 2006 song Chiron Beta Prime, about a planet where all humans have been enslaved by uncaring and violent robots. During this interview we talk about his 2017 album Solid State, which writer Emily Nussbaum described well: “Coulton’s latest album, “Solid State,” is, like so many breakthrough albums, the product of a raging personal crisis—one that is equally about making music and living online, getting older, and worrying about the apocalypse. A concept album about digital dystopia, it’s Coulton’s warped meditation on the ugly ways the internet has morphed since 2004. At the same time, it’s a musical homage to his earliest Pink Floyd fanhood, a rock-opera about artificial intelligence. It’s a worried album by a man hunting for a way to stay hopeful.”
In this interview, we talk with Jonathan about how he feels about Solid State now, his reaction to generative AI, and his experiences trying to use generative AI in songwriting. We were grateful to catch Jonathan just before he left on tour with Aimee Mann. We hope you all take time to listen to Solid State and catch him live.
An exploration of the intersection of AI, meaning, and human relationships.
In this episode, we dive deep into the role of AI in our lives and how it can influence our perception of meaning. We explore how AI, and specifically generative AI, is impacting our collective experiences and the ways we make authentic choices.
We discuss the idea of intimacy with AI and the future trajectory of human-AI interaction. We consider the possibility of AI enabling more time for meaningful experiences by taking over less meaningful tasks but also wonder if it’s possible for AI to truly have a place in human meaning.
Note: According to our research, Doug Belshaw is the original author of the term “serendipity surface.” You can find his first post here and a follow-up here. Apologies to Doug for forgetting your name during recording!
ChatGPT, Dall-e, Midjourney, Bard, and Bing are here. Many others are coming. At every dinner conversation, classroom lesson, and business meeting, people are talking about AI. There are some big questions that everyone is now seeking answers to, not just philosophers, ethicists, researchers, and venture folks. It’s noisy, complicated, technical, and often agenda-driven.
In this episode, we tackle the question of existential risk. Will AI kill us all?
We start by talking about why this question is important at all, and why we are finally tackling it ourselves (since we’ve largely avoided it for quite some time). We talk about the scenarios that people are worried about and the three premises that underlie this risk:
* We will build an intelligence that will outsmart us
* We will not be able to control it
* It will do things we don’t want it to
Join us as we talk about the risk that AI might end humanity. And, if you’d like to dig deeper, subscribe to Artificiality at https://artificiality.substack.com to get all of our content on this topic including our weekly essay, a gallery of AI images, book recommendations, and more. (Note: the essay will be emailed to subscribers a couple of days after this podcast first airs—thanks for your patience!).
As Silicon Valley lunges toward creating AI that is considered superior to humans (sometimes called artificial general intelligence or superintelligent AI), it does so on the premise that it is possible to encode values in AI so that the AI won’t harm us. But values are individual, elusive, and ever-changing. They resist being mathematized.
Join us as we discuss human values, how they form, how they change, and why trying to encode them in algorithms is so difficult, if not impossible.
Culture plays a vital role in connecting individuals and communities, enabling us to leverage our unique talents, share knowledge, and solve problems together. However, the rise of an intelligentsia of machine soothsayers highlights the need to consciously design new coherence strategies for the age of machines. Why? Because generative AI is a cultural technology that produces different outcomes depending on its cultural context.
Who will take on this challenge, and how will culture evolve in response to the growing influence of machines? This is the essential question that requires careful consideration as we navigate the complex interplay between human culture and technology, seeking to preserve sonder as something for humans alone.
Listen in as we discuss human culture and the impact of generative AI.
This episode is the first in our summer series based on our thesis for designing AI to be a Mind for our Minds. We recently presented this idea for the first time at our favorite event of the year hosted by The House of Beautiful Business. We are grateful for our long-term relationship with the House and its founders, Tim Leberecht and Till Grusche, and head of curation and community, Monika Jiang. The House puts on public and corporate events that are like none you’ve ever experienced. We encourage everyone to consider attending a public event and bringing the House to your organization.
We always meet fascinating people at the House—too many to mention in one podcast. During this episode we highlight Hannah Critchlow and her book Joined Up Thinking and Michael Bungay Stanier and his book How to Work with (Almost) Anyone. Check them both out: we are big fans.
Stay tuned over the summer as we will dig deeper into how to design AI to be a Mind for our Minds.
AI is based on data. And data is frequently collected with the intent to be quantified, understood, and used across contexts. That’s why we have things like grade point averages that translate across subject matters and educational institutions. That’s why we perform cost-benefit analyses to normalize the forecasted value of projects—no matter the details. As we deploy more AI that is based on a metrified world, we’re encouraging the quantification of our lives and risk losing the context and subjective value that create meaning.
In this interview, we talk with C. Thi Nguyen about these large-scale metrics, about objectivity and judgment, and about how this quantification strips away nuance, contextual sensitivity, and variability in order to make measurements legible to the state. And that’s just scratching the surface of this interview.
Thi Nguyen used to be a food writer and is now a philosophy professor at the University of Utah. His research focuses on how social structures and technology can shape our rationality and our agency. He writes about trust, art, games, and communities. His book, Games: Agency as Art, was awarded the American Philosophical Association’s 2021 Book Prize.
We are deeply interested in the intersection of the digital and material worlds, both living and not living. Most of our interviews are focused on the intersection of humans and machines—how does the digital world affect humans and how do humans affect the digital world. This interview, however, is about the intersection of plants and machines.
Harpreet Sareen works at the intersection of digital and material, plant and machine, and art and science. His work challenges people to consider the life of plants, what we can learn from them, what we can see and what we can’t see. His art and science projects challenge us to wonder if we can actually believe what we’re seeing.
We moved to the Cascade Mountains to be able to spend more time in the wilderness, and we likely spend quite a bit more time in nature than most people. Despite our strong connection to nature, Harpreet’s work accomplishes his goal of encouraging us to reconsider this relationship and to consider what an increased symbiosis might look like.
Harpreet Sareen is a designer, researcher, and artist creating mediated digital interactions through the living world, with growable electronics, organic robots, and bionic materials. His work has been shown in museums, featured in media in 30+ countries, published in academic conferences, viewed on social media 5M+ times, and used by thousands of people around the world. He has also worked professionally in museums, corporations, and international research centers in five countries. He is currently an Assistant Professor at Parsons School of Design in New York City, where he directs the Synthetic Ecosystems Lab, which focuses on post-human and non-human design.
Learn more about Harpreet Sareen
Interesting links:
* Elephant project: Hybrid Enrichment System (ACM article)
* Elowan: A Robot-Plant Hybrid -- Plant with a robotic body
* Cyborg Botany: Electronics grown inside plants
* Cyborg Botany: In-Planta Cybernetic Systems
Most recent papers:
* Helibots at CAADRIA 2023, and related exhibition in ADM Gallery, Singapore
* BubbleTex at CHI 2023, and related exhibition in Ars Electronica, Austria
* Algaphon: Sounds of macroalgae under water (Installation at Ars Electronica, Austria)
Anyone working in a large organization has likely asked this question: Why is it that I can seemingly find anything on the internet but I can’t seem to find anything inside my organization? It is counter-intuitive that it’s easier to organize the vast quantity of information on the public internet than it is to organize the smaller amount of information inside a single organization.
The reality is that enterprise knowledge management and search is very difficult. Data does not reside in easily organized forms. It is spread across systems which provide varying levels of access. Knowledge can be fleetingly exchanged in communication systems. And each individual person has their own access rights, creating a complex challenge.
These challenges may be amplified by large language models in the enterprise, which seek to help people with analytical and creative tasks by tapping into an organization’s knowledge. How can these systems access enough enterprise data to develop a useful level of understanding? How can they give each individual the best answers while following data access governance requirements?
To answer these questions, we talked with Arvind Jain, the CEO of Glean, which provides AI-powered workplace search. Glean searches across an organization’s applications to build a trusted knowledge model that respects data access governance when presenting information to users. Glean’s knowledge models also give enterprises a way to introduce the power of generative AI within boundaries that would otherwise be challenging to create.
Prior to founding Glean, Arvind co-founded Rubrik, one of the fastest growing companies in cloud data management. For more than a decade Arvind worked at Google, serving as a Distinguished Engineer, leading teams in Search, Maps, and YouTube.
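The permission-aware search idea described above can be sketched in a few lines. This is purely our own illustrative example—the `Document` structure and `search` function are hypothetical and do not reflect Glean's actual API—but it shows the core pattern: access control lists from source systems are checked at query time, so two users issuing the same query can see different results.

```python
# Hypothetical sketch of permission-aware enterprise search.
# All names here (Document, search, allowed_groups) are illustrative,
# not any vendor's real API.
from dataclasses import dataclass, field


@dataclass
class Document:
    title: str
    text: str
    # Groups entitled to see this document, mirrored from the source system.
    allowed_groups: set = field(default_factory=set)


def search(query: str, docs: list, user_groups: set) -> list:
    """Return matching documents that the querying user is entitled to see.

    The access check happens at query time, so results always reflect
    the source systems' current permissions.
    """
    hits = [d for d in docs if query.lower() in d.text.lower()]
    return [d for d in hits if d.allowed_groups & user_groups]


docs = [
    Document("Q3 forecast", "revenue forecast for Q3", {"finance"}),
    Document("Onboarding guide", "new-hire forecast and onboarding", {"all-staff"}),
]

# A user in "all-staff" sees only the onboarding guide,
# even though both documents match the query "forecast".
results = search("forecast", docs, {"all-staff", "engineering"})
```

The design choice worth noting is that filtering by entitlement happens after (or alongside) relevance matching, rather than by building a separate index per user—one index, many views.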
The world has been upended by the introduction of generative AI. We think this could be the largest advance in technology—ever. All of our clients are trying to figure out what to do, how to de-risk the introduction of these technologies, and how to design new, innovative solutions.
To get a perspective on these changes, we talked with Lukas Egger, who leads the Innovation Office & Strategic Projects team at SAP Signavio, where he focuses on de-risking new product ideas and establishing best-in-class product discovery practices. Lukas has a successful track record in team building and managing challenging projects, with expertise in data-driven technology and cloud-native development, and he has created and implemented new product discovery methodologies. Excelling at bridging the gap between technical and business teams, he has worked in AI, operations, and product management in fast-growth environments. Lukas has movie credits for his work in computer graphics research, has published a book on philosophy, and is passionate about the intersection of technology and people, regularly speaking on how to improve organizations.
We love Lukas’ concept that we are in the peacock phase of generative AI, in which everyone is trying to show off their colorful feathers but no one is yet showing off new value creation. We enjoyed talking with Lukas about his views on the realities of today and his forecasts and speculations on the future.
About Artificiality from Helen & Dave Edwards:
Artificiality is a research and services business founded in 2019 to help people make sense of artificial intelligence and complex change. Our weekly publication provides thought-provoking ideas, science reviews, and market research, and our monthly research releases provide leaders with actionable intelligence and insights for applying AI in their organizations. We provide research-based and expert-led AI strategy and complex change management services to organizations around the world.
Is technology good or bad for children? How should parents think about technology in their children’s lives? Are there different answers depending on the age of the child and their stage of development? What can we apply from what we know about children’s play and activity in the analog world to the digital world? How should product designers think about designing technology to be good for kids? How does AI and generative AI affect the answers to these questions, if at all?
To answer some of these questions, we talked with Katie Davis about her recent book, Technology’s Child: Digital Media’s Role in the Ages and Stages of Growing Up. In her book, Katie shares her research on how children engage with technology at each stage of development, from toddler to twentysomething, and how they can best be supported.
As parents of five kids, we’re interested in these questions both personally and professionally. We are particularly interested in Katie’s concept of “loose parts” and how we might apply this idea to digital product design, especially AI design. We think anyone who has children or has an interest in technology’s impact on children will find Katie’s book highly informative and a great read.
Katie Davis is Associate Professor at the University of Washington Information School, where she is a founding member and Co-Director of the UW Digital Youth Lab. She is the coauthor of The App Generation: How Today’s Youth Navigate Identity, Intimacy, and Imagination in a Digital World and Writers in the Secret Garden: Fanfiction, Youth, and New Forms of Mentoring.
Weather forecasting is fascinating. It involves making predictions in the complex, natural world, using a global infrastructure, for people with varying needs and desires. Some of us just want to know whether to carry an umbrella today. Others want to know how to prepare for a week-long trip. And then there are those who use the weather forecast to make decisions that can have significant, even critical, consequences.
We also think weather forecasting is an interesting topic given the parallels to what we are experiencing in AI. Both weather forecasting and AI are black-box prediction systems, supported by a global infrastructure that is transitioning from public to private control. In weather, the satellite industry is moving from public funding and control to private hands. And in AI, the major models and data are transitioning from academia (which we would argue is essentially public, given its interest in publishing and sharing knowledge) to corporate control.
Given this backdrop and the fact that Helen is an avid weather forecasting nerd, we talked with Andrew Blum about his book The Weather Machine: A Journey Inside the Forecast. The book is a fascinating narrative about how the weather forecast works based on a surprising tour of the infrastructure and people behind it. It’s a great book and we highly recommend it.
Andrew Blum is an author and journalist, writing about technology, infrastructure, architecture, design, cities, art, and travel. In addition to The Weather Machine, Andrew also wrote Tubes: A Journey to the Center of the Internet which was the first ever book-length look at the physical infrastructure of the Internet—all the data centers, undersea cables and tubes filled with light. You can also find Andrew’s writing in many publications and hear him talk at various conferences, universities, and corporations. At the end of our interview, we talk with Andrew about his current research and we’re very much looking forward to his next book.
We’ve heard a lot about how generative AI may negatively impact careers in design. But we wonder how might generative AI have a positive impact on designers? How might generative AI be used as a tool that helps designers rather than as a replacement for designers? How might we use generative AI in design education? How do design educators and their students feel about generative AI? How else might generative AI help designers in ways that we haven’t uncovered yet?
To answer these questions, we talked with Juan Noguera about his individual design work, his teaching at the Rochester Institute of Technology, and about his recent article in The Conversation entitled DALL-E 2 and Midjourney can be a boon for industrial designers. Juan proposes that AI image generation programs can be a fantastic way to improve the design process. Juan’s story about using generative AI working with bronze artisans in Guatemala is particularly compelling.
Juan Noguera is an Assistant Professor of Industrial Design at the Rochester Institute of Technology. A Guatemalan, he was raised in a colorful and vivid culture. He quickly developed an interest in how things were made, tearing everything he owned apart, and putting it back together, often with a few leftover pieces.
We enjoyed talking with Juan about his teaching, about his students’ projects, and about ideas he has for how AI might be able to help designers more in the future.
Learn more about Juan Noguera.
Read Juan Noguera’s article in The Conversation.
Learn more about Juan Noguera’s work on AI in Design.
What role does design have in solving the world’s biggest problems? What can designers add? Some would say that designers played a role in getting us into our current mess. Can they also get us out of it? How can we design solutions for problems in complex systems that are evolving, emerging, and changing?
To answer these questions, we talked with Don Norman about his book, Design for a Better World: Meaningful, Sustainable, Humanity Centered. In his book, Don proposes a new way of thinking, one that recognizes our place in a complex global system where even simple behaviors affect the entire world. He identifies the economic metrics that contribute to the harmful effects of commerce and manufacturing and proposes a recalibration of what we consider important in life.
Don Norman is Distinguished Professor Emeritus of Cognitive Science and Psychology and founding director of the Design Lab at the University of California, San Diego, from which he has retired twice. Don is also retired from, and holds emeritus titles at, Northwestern University, the Nielsen Norman Group, and a few other organizations. He was an Apple Vice President, has been an advisor and board member for numerous companies, and has three honorary degrees. His numerous books have been translated into over 20 languages, including The Design of Everyday Things and Living with Complexity.
It was a true pleasure to talk with Don, someone who we have read and followed for decades. His work is central to much of today’s design practices and we loved talking with him about where he hopes design may take us.
Learn more about Don Norman.
Learn more about Don’s book Design for a Better World.
What are the cause and effect of my actions? How do I know the effect of the small acts in my life? How can I identify opportunities to have impact that is much larger than myself? How can we make problems that seem overwhelmingly complex feel more manageable and knowable? How might we use the scaling tools of designers to tackle some of the world’s largest and most complex problems?
To answer these questions, we talked with Jamer Hunt about his book Not to Scale: How the Small Becomes Large, the Large Becomes Unthinkable, and the Unthinkable Becomes Possible. The book repositions scale as a practice-based framework for navigating social change in complex systems. Jamer is Professor of Transdisciplinary Design and Program Director for University Curriculum at the New School’s Parsons School for Design. He was the founding director of the Transdisciplinary Design graduate program at Parsons, which was created to emphasize collaborative design-led research and a systems-oriented approach to social change.
We’re big fans of Jamer’s book and have incorporated his concept of scalar framing into our work. We encourage you to check his book out as well and see how zooming in and out can help you frame complex problems in a way that makes them more addressable.
Learn more about Jamer Hunt
Learn more about Jamer’s book Not to Scale
Learn more about the Transdisciplinary Design program at Parsons
Watch the Powers of Ten by Charles & Ray Eames
Why does ChatGPT matter?
* People always get excited about AI advances and this one is accessible in a way that others weren’t in the past.
* People can use natural language to prompt a natural language response.
* It’s seductive because it feels like synthesis.
* And it can feel serendipitous.
But…
* We need to remember that ChatGPT and all other generative AI are tools and they can fail us.
* While it may feel serendipitous, that serendipity is more constrained than it may feel.
Some other ideas we cover:
* The research at Google, OpenAI, Microsoft, and Apple gives us some context for evaluating how special ChatGPT actually is and what might be ahead.
* The current craze about prompt engineering.
What we’re reading:
* Raghuveer Parthasarathy’s So Simple a Beginning
* Don Norman’s Design for a Better World
* Jamer Hunt’s Not to Scale
* Ann Pendleton-Jullian & John Seely Brown’s Design Unbound
We’re always looking for new ideas from science that we can use in our work. Over the past few years, we have been researching new ways to handle increasing complexity in the world and how to solve complex problems. Why do we seem to see emergent, adaptive, open, and networked problems more often? And why don’t they yield to traditional problem solving techniques?
Our research has centered on complexity science and understanding how to apply its lessons to problem solving. Complexity science teaches us about the nature of complex systems including the nervous system, ecosystems, economies, social communities, and the internet. It teaches us ways to identify opportunities for change through metaphor, models, and math and ways to synchronize change through incentives.
The Santa Fe Institute has been at the center of our complexity research journey. Founded in 1984, SFI is the leading research institute on complexity science. Its researchers endeavor to understand and unify the underlying, shared patterns in complex physical, biological, social, cultural, technological, and even possible astrobiological worlds. We encourage anyone interested in this topic to wander through the ample and diverse resources on the SFI website, SFI publications, and SFI courses.
We had the pleasure of digging into complexity science and its applications with one of the leading minds in complexity, David Krakauer, who is President and William H. Miller Professor of Complex Systems at SFI. David's research explores the evolution of intelligence and stupidity on Earth. This includes studying the evolution of genetic, neural, linguistic, social, and cultural mechanisms supporting memory and information processing, and exploring their shared properties. He served as the founding director of the Wisconsin Institutes for Discovery, the co-director of the Center for Complexity and Collective Computation, and professor of mathematical genetics, all at the University of Wisconsin, Madison. He has been a visiting fellow at the Genomics Frontiers Institute at the University of Pennsylvania, a Sage Fellow at the Sage Center for the Study of the Mind at the University of California, Santa Barbara, a long-term fellow of the Institute for Advanced Study, and visiting professor of evolution at Princeton University.
A graduate of the University of London, where he earned degrees in biology and computer science, Dr. Krakauer received his D.Phil. in evolutionary theory from Oxford University.
Learn more about SFI.
Learn more about David Krakauer.
Everyone’s talking about it so we will too. Generative AI is taking the world by storm. But is it a good storm or a scary storm? How should individuals think about what’s possible? What about companies?
Our take: generative AI is hugely powerful but will always have flaws and potholes. As a probabilistic system, it will always produce errors—how will you plan for that? As a system trained on everything on the internet, it essentially steals IP from everyone, everywhere—how do you feel about participating in that theft?
We’ve spent years witnessing companies take an “if we build it (data, analytics, AI), people will use it” approach—and fail. Digital transformation doesn’t happen successfully by itself; it is actually all about people. Companies that succeed in integrating data, analytics, and AI are those that undertake thoughtful change management programs to help people understand how to integrate these technologies into their complex human systems.
Generative AI is really exciting. But our prediction is that companies will need to undertake thoughtful change management to ensure they get the best out of these new AI technologies, not the worst.
Nudges of the week
Helen: Synthesize Later. Integrate argument and counter-argument into a decision. Good decisions involve reconciling subjective judgments and resolving clashing causal forces. The best way to do this is to be deliberate and conscious of the need to synthesize. Schedule a meeting titled “synthesis” and set expectations that now is the moment to step slowly through each point of view, iterate, and nudge each side. Have each side make a list of the things that would bring them toward each other. Failing to do this contributes to a sense that the decision is stuck.
Dave: Explain, Teach, Pitch. Explanations and the stories that link cause and effect play a key role in allowing us to adapt flexibly to a changing world. Explaining our decisions is a generative act. We learn more about our own motivations and knowledge. Explanation is active and can help us when we need to rethink, reevaluate, and deal with regret. Teaching is set apart from explanation because good teaching also relies on empathy. A good teacher understands where the student is in their learning process and adjusts their teaching to fit the mental model of the learner.
What We’re Learning
Helen: Joined-Up Thinking by Hannah Critchlow. A great summary of the state of the science about how we can build our collective intelligence. A delightful read that Helen highly recommends.
Dave: Don Norman’s next book. We will interview Don, one of Dave’s heroes, in the next few weeks. Stay tuned for that interview!
What can we learn from the practice of design? What might we learn if we had an insight into top designers’ minds? How might we apply the best practices of designers beyond the field of design itself? Most of our listeners are likely familiar with design thinking—what other practices should we learn about and understand?
To answer these questions, we talked with Kees Dorst about his books, Frame Innovation and Notes on Design, to discover his views on the creative processes of top designers and understand his practice of frame innovation. We enjoyed both books and find insights that extend well beyond design into all areas of problem solving. We are particularly interested in applying frame innovation in our complex problem-solving sprints and consulting practice.
Kees Dorst is Professor of Transdisciplinary Innovation at the University of Technology Sydney’s TD School. He is considered one of the leading thinkers developing the field of design, valued for his ability to connect a philosophical understanding of the logic of design with hands-on practice. As a bridge-builder between these two worlds, his writings on design as a way of thinking are read by both practitioners and academics. He has written several bestselling books in the field: ‘Understanding Design’ (2003, 2006), ‘Design Expertise’ (with Bryan Lawson, 2013), ‘Frame Innovation’ (2015), ‘Designing for the Common Good’ (2016), and ‘Notes on Design – How Creative Practice Works’ (2017).
The latest Big Ideas report from MIT Sloan and BCG makes for an interesting read, but it contains flaws, draws obvious conclusions, and raises more questions than it answers.
We discuss this report and make some suggestions about how to think about AI based on the survey’s conclusions:
* trust matters (no-duh). The data suggests if people trust AI they will use it twice as much.
* ability to override the AI matters (no-duh). The data suggests if people can override the AI they will use it twice as much.
* people describe an AI as a co-worker but the majority of people don’t even know they are using it. Huh?
Another surprise is that people like AI when it means they don’t have to talk to their boss. Who would have anticipated that?
Nudges of the week
Helen: Synthesize Later. Integrate argument and counter-argument into a decision. Good decisions involve reconciling subjective judgments and resolving clashing causal forces. The best way to do this is to be deliberate and conscious of the need to synthesize. Schedule a meeting titled “synthesis” and set expectations that now is the moment to step slowly through each point of view, iterate, and nudge each side. Have each side make a list of the things that would bring them toward each other. Failing to do this contributes to a sense that the decision is stuck.
Dave: Be Less Wrong. Let go of perfectionism and feel the relief of knowing that by striving to be less wrong, you’ll probably end up being more right.
What We’re Learning
Helen: The Neuroscience of You by Chantel Prat. Delivers on the promise of showing you how your brain is different. Really fun and engaging book to read and do all the tests.
Dave: Learning from Helen! He’s been reading the first draft of our next book Solve Better Problems: How to Solve Complex Problems in the Digital Age. Complexity really is a different animal and it’s mind opening to understand why.
If you enjoy our podcasts, please subscribe on Substack or your favorite podcast platform. And please leave a positive rating or comment—sharing your positive feedback helps us reach more people and connect them with the world's great minds. Seriously, a review on Apple Podcasts is a big deal!
And if you like how we think then contact us about our speaking and workshops, and human-centered product design. You can learn more about us at getsonder.com and you can contact us at [email protected].
You can learn more about making better decisions in our book, Make Better Decisions: How to Improve Your Decision-Making in the Digital Age. The book is an essential guide to practicing the cognitive skills needed for making better decisions in the age of data, algorithms, and AI. Please check it out at MBD.zone and purchase it from Amazon, Bookshop.org, or your favorite local bookstore.
Twitter as we knew it is gone. Elon has fired half the full-time employees and 80 percent of the contractors. It’s a brutal way to trim excess fat, reset the culture, and establish a loyal band. But is it a good decision? How could it go wrong?
Elon is incredibly successful at running engineering companies. But if you look at his failures—the fixes he hasn’t been able to effect—they are all in the zone of people; specifically, networks of people and machines. He’s consistently failed to accurately forecast driverless technology, and he’s overestimated the capabilities of robots when it comes to human-like, fine-grained automation.
Our key question regarding Twitter is this: is Elon grossly underestimating the people factor in Twitter? As Dave says in the podcast, “he's taking a group of people that work in a system, that have personal connections, have human connections, the kinds of connections that are required to be productive, and he's treating them like bits and parts in a car factory, or like working capital. Something that you can just discard half of and continue going with half of it. He completely misses the fact that people in an organization like Twitter require the other people that are around them.”
One of the good things about Twitter was its intellectualism and commitment to getting better. By all accounts, the culture was one of making a good decision based on considering many (complex) factors. It evolved based on many selection pressures—advertisers, users, activists. Now it’s going to be a place where one person makes what they consider to be the easy decision with an ideology that a system shock is the best way to force a new equilibrium. Elon would rather fix than futz.
This is potentially a perfectly rational strategy. Indeed, it may be the only one given the company faces significant financial pressure (in part brought on by Elon’s previous decisions). But the problem with how it’s been done at Twitter isn’t the speed, the scale, or even the cruelty. The problem is that it’s all about creating complete, unyielding loyalty. A Twitter where you’re either with Elon or against him. We aren’t the only ones to point out the irony of the situation. Almost overnight, the so-called champion of free speech has created a total FIFO (Fit In or F**k Off) employer.
Will Twitter still operate as a global public square? What will happen next?
Helen: I’m 80 percent confident the answer is yes. We’ll adapt: that’s what humans do. And Elon has an engineering challenge here that he can act on: it’s a financial engineering challenge.
Dave: Yes, but people will be more cautious. “I think people will be more cautious. I think having a singular billionaire, slightly autocratic figurehead who makes decisions willy-nilly, throws people in or out, has thrown away all of the guardrails or any form of ethics, is going to have a long-standing impact on the platform. So it's going to make people a little bit more wary about it being the trusted place.”
Dave’s Nudge OTW: Break Up Problems Early
A company was struggling to make its decision framework function. The real problem? It’s hard to make a decision framework (who could decide what) work before you even know what kind of problem you have. The nudge helped Dave step back and see that the real problem was that people need to understand that they are dealing with different types of problems—hard decisions, complicated problems, complex scenarios.
Helen’s Nudge OTW: Plug the Leaks
A great nudge for improving willpower. No decision is too small. Which means no action is too small either. Stop focusing on the big mass of motivation and focus instead on the trajectory or velocity. What you should do for an hour, do for 15 minutes instead (for example, writing or working out).
Final Thing
Helen: Klara and the Sun. The latest book from Nobel Prize winner Kazuo Ishiguro. Wonderful story about the relationship between an artificial friend and a teen. Says Helen: “I think it is clever because it doesn't beat you over the head about what an artificial consciousness might be. It requires quite a lot of discovery to figure out exactly how Klara is operating in the world, what the nature of her conscious perception actually is. The more you know about AI, the better it is because you see so many different angles into the way that an artificial mind might process the world. And ways that could lead to enormous flaws in relying on artificial friends.”
Dave: Build: An Unorthodox Guide to Making Things Worth Making by Tony Fadell. He’s an old friend of Dave’s from Apple who has a lot of wisdom about people and products. “Sometimes the people you don't expect to be amazing, the ones you thought were B's and B pluses, turn out to completely rock your world. They hold your team together by being dependable and flexible and great mentors and teammates. They're modest and kind of just quietly do good work. They're a different type of Rockstar.” Says Dave: “I totally agree with him. This is so underappreciated. Those people won't show up very high on Elon’s list. I would imagine that he probably threw out a whole bunch of those people because they don't show up at the top of whatever performance metric he's using.”
We all likely want to improve the organizations we work in. We might want to improve the employee experience, improve the customer experience, or be more efficient and effective. But we all likely have had the experience of feeling like our organizations are too difficult, too entrenched, and too complex to change. Any organization—large or small, public or private—can feel like a faceless bureaucracy that is resistant to change. So what can people do who want to effect change? How do you accomplish things that can seem impossible?
To answer these questions, we talked with Marina Nitze and Nick Sinai about their recently published book, Hack Your Bureaucracy: Get Things Done No Matter What Your Role on Any Team. Marina and Nick have deep experience in one of the largest, most complex bureaucracies in the world: the U.S. government. As technology leaders in the Obama White House, Marina and Nick undertook large change programs. Their book contains their stories and their advice for anyone who wants to effect change.
We find the hacks in their book quite valuable, and we wish this book had been available early in our careers when we were both in much larger organizations. We love the fact that their hacks focus on the people and on working within a system for change—not the move fast & break things mentality of Silicon Valley. Above all, we appreciate that it’s clear Marina and Nick thought deeply about what they would have wanted to know when they embarked on the significant technology change programs they undertook in the White House and the Veterans Administration.
Marina Nitze is currently a partner at Layer Aleph, a crisis response firm that specializes in restoring complex software systems to service. Marina was most recently Chief Technology Officer of the U.S. Department of Veterans Affairs under President Obama, after serving as Senior Advisor on technology in the Obama White House and as the first Entrepreneur-in-Residence at the U.S. Department of Education.
Nick Sinai is a Senior Advisor at Insight Partners, a VC and private equity firm, and is also Adjunct Faculty at Harvard Kennedy School and a Senior Fellow at the Belfer Center for Science and International Affairs. Nick served as U.S. Deputy Chief Technology Officer in the Obama White House, and prior, played a key role in crafting the National Broadband Plan at the FCC.
How will AI change our jobs? Will it replace humans and eliminate jobs? Will it help humans get things done? Will it create new opportunities for new jobs? People often speculate on these topics, doing their best to predict the somewhat unpredictable.
To help us get a better understanding of the current state of humans and AI working together, we talked with Tom Davenport and Steve Miller about their recently-released book, Working with AI. The book is centered around 29 detailed and deeply-researched case studies about human-AI collaboration in real-world work settings. What they show is that AI isn’t a job destroyer but a technology that changes the way we work.
Tom is Distinguished Professor of Information Technology and Management at Babson College, Visiting Professor at Oxford's Saïd Business School, Fellow of the MIT Initiative on the Digital Economy, and Senior Advisor to Deloitte's AI practice. He is the author of The AI Advantage and coauthor of Only Humans Need Apply and other books.
Steve is Professor Emeritus of Information Systems at Singapore Management University, where he previously served as Founding Dean of the School of Computing and Information Systems and Vice Provost for Research. He is coauthor of Robotics Applications and Social Implications.
We humans make a lot of decisions. Apparently, 35,000 of them every day! So how do we improve our decisions? Is there a process to follow? Who are the experts to learn from? Do big data and AI make decisions easier or harder? Is there any way to get better at making decisions in this complex, modern world we live in?
To dig into these questions we talked with…ourselves! We recently published our first book, Make Better Decisions: How to Improve Your Decision-Making in the Digital Age. In this book, we’ve provided a guide to practicing the cognitive skills needed for making better decisions in the age of data, algorithms, and AI. Make Better Decisions is structured around 50 nudges that have their lineage in scholarship from behavioral economics, cognitive science, computer science, decision science, design, neuroscience, philosophy, and psychology. Each nudge prompts the reader to use their beautiful, big human brain to notice when our automatic decision-making systems will lead us astray in our complex, modern world, and when they'll lead us in the right direction.
In this conversation, we talk about our book, our favorite nudges at the moment, and some of the Great Minds who we have interviewed on Artificiality including Barbara Tversky, Jevin West, Michael Bungay Stanier, Stephen Fleming, Steven Sloman and Tania Lombrozo.
We all do things with other people. We design things, we write things, we create things. Despite the fact that co-creation is all around us it can be easy to miss because creation gets assigned to individuals all too often. We’re quick to assume that one person should get credit thereby erasing the contributions of others.
The two of us have a distinct interest in co-creation because we co-create everything we do. We co-created Sonder Studio, our speaking engagements, our workshops, our design projects, and our soon-to-be-published book, Make Better Decisions. We’re also interested in how humans can co-create with technology, specifically artificial intelligence, and when that is a good thing and when that might be something to avoid.
To dig into these interests and questions we talked with Kat Cizek and William Uricchio, whose upcoming book Collective Wisdom offers the first guide to co-creation as a concept and as a practice. Kat, William, and a lengthy list of co-authors have presented a wonderful tracing of the history of co-creation across many disciplines and societies. The book is based on interviews with 166 people and includes nearly 200 photographs that should not be missed. We hope that you all have a chance to experience their collective work.
Kat is an Emmy and Peabody-winning documentarian who is the Artistic Director and Cofounder of the Co-Creation Studio at MIT Open Documentary Lab. William is Professor of Comparative Media Studies at MIT, where he is also Founder and Principal Investigator of the MIT Open Documentary Lab and Principal Investigator of the Co-Creation Studio. Their book is scheduled to be published by MIT Press on November 1st.
How should we respond and react to artificial intelligence and its impact on the world and each other? How should we handle the risk and uncertainty caused by the permeation of AI throughout our lives?
To tackle these questions, we talked with Gerd Gigerenzer about his recent book, How to Stay Smart in a Smart World. We talk with Gerd about the impacts of big data on making decisions, the increasing use of AI for surveillance, the risks of trusting smart technology too much, and the broader impact of technology on our human dignity.
Gerd is the Director Emeritus at the Max Planck Institute for Human Development and the author of several books, including Calculated Risks, Gut Feelings, and Risk Savvy and the coeditor of Better Doctors, Better Patients, Better Decisions and Classification in the Wild. He has trained judges, physicians, and managers in decision-making and understanding risk.
We thoroughly enjoyed Gerd’s book and recommend it to both those new to AI who may be looking for an approachable introduction and to those expert in AI who may be looking for a new perspective to think about the future of our digital world.
We all want decision-making to be easier. We want simple tools and frameworks that provide a process for no-regrets decisions. But it just isn’t that easy. Despite how much we understand about the science of decision-making, the act of making decisions is frequently quite difficult. And the quantity of data we can now access to support decision-making doesn’t make decisions easier, it actually makes them more complex.
So what to do? In his book, Difficult Decisions: How Leaders Make the Right Call with Insight, Integrity, and Empathy, Eric Pliner argues that the best way to approach complex, subjective decisions is to first understand your own subjectivity, morals, and ethics.
In this episode, we talk with Eric about his book, how he advises leaders to make decisions, the importance of aligning intent with impact in the world, and how to think about the role of data in decision-making.
In addition to being an author, Eric is CEO of YSC Consulting where he works with leaders and organizations on leadership development, organizational culture, and strategic diversity and inclusion initiatives.
As frequent listeners know, we spend a lot of time working with people on how to make better decisions and it was a true pleasure to talk with Eric about how he approaches this topic and how he helps leaders tackle difficult decisions.
We’d all like to be healthier—to sleep longer, have lower stress, and have more energy. But is it possible for an AI to help us accomplish this? And how would that experience feel? What data would we need to provide? How would the AI encourage the behavior changes required? Would it feel like a friend or a bully? Would it work at all?
To answer some of these questions, we talked with Tom Hale, the new CEO at Oura. Oura makes a fascinating device that monitors a long list of signals from your body all through a ring on your finger. That ring connects with an app on your phone that gives you lots of data about your health. Perhaps most interestingly, in addition to the facts about your health, the app provides suggestions for what you might do differently. And it provides those suggestions in a way that seems cautious about making too many conclusions, leaving the true agency with you.
Neither of us owned Oura rings before our conversation so we couldn’t bring that experience to the podcast. But after our conversation we both decided to buy one and give it a try. Our sizing kits are on the way and the rings will follow soon after. We’re planning to record our reactions to the rings so subscribe, if you haven’t already, to get an alert when we publish our experience.
Prior to joining Oura, Tom was President of MomentiveAI, previously called SurveyMonkey, Chief Product and Operating Officer at HomeAway, and a long-time executive at Adobe Systems.
Tom’s personal experience with the Oura Ring before becoming CEO is what tipped the balance and got us to be some of his newest customers. We’ll be interested to hear if any of our listeners do the same.
We all love stories—they are one of the most important ways that humans communicate. Stories create heroes to root for and villains to revile. Stories create realities and help us align our values and objectives with others. But how do stories change in a world that is awash with data and is overwhelmed by large tech companies that try to motivate—or manipulate—us with stories using data that we don’t see and can’t comprehend?
To help answer these questions, we talked with Frank Rose about his recent book The Sea We Swim In: How Stories Work in a Data-Driven World. Frank’s book is inspired by his Strategic Storytelling seminar at Columbia University and is a wonderful resource to help understand the power of narrative thinking.
In addition to being a senior fellow at Columbia University School of the Arts, Frank is the director of Columbia’s pioneering Digital Storytelling Lab and a frequent speaker on narrative thinking and on the power of immersive storytelling. Frank’s writing and journalism career started in the punk scene at CBGB for The Village Voice and continued as a contributing editor at Esquire and then Wired. He has written several books including West of Eden about the early days of Apple Computer and The Art of Immersion about how the digital generation changed storytelling.
We greatly enjoyed talking with Frank about one of our favorite subjects: telling stories in a data-driven world.
Many of our listeners will be familiar with human-centered design and human-computer interaction. These fields of research and practice have driven technology product design and development for decades. Today, however, these fields are changing to adapt to the increasing use of artificial intelligence, leading to an emerging field called human-centered AI.
Prior to the widespread use of AI, technology products were powerful yet predictable—they operated based on the rules created by their designers. With AI, however, machines respond to data, producing predictions that may not have been anticipated when the product was designed or programmed. This is incredibly powerful but can also create unintended consequences.
This challenge raises two questions: How can we design AI-based products that benefit humans? How can we create AI systems that learn and change with new data but still produce the outcomes their designers intended?
These questions led us to interview Ben Shneiderman, an Emeritus Distinguished University Professor in the Department of Computer Science at the University of Maryland. Ben recently published a wonderfully approachable book, Human-Centered AI, which provides a guide to how AI can be used to augment and enhance humans’ lives. As the founding director of the Human-Computer Interaction Laboratory, Ben has a 40-year history of researching how humans and computers interact, making him an ideal source to talk with about how humans and AI interact.
“How can we augment our thinking spaces to increase creative solutions? How can we make those solutions real by mastering complexity?” Julio Mario Ottino and Bruce Mau ask and answer these questions in their ambitious and visually stunning work, The Nexus.
In their book, Ottino and Mau take on a big subject—how to augment your thinking by integrating art, technology, and science. It is a thought-provoking and curiosity-enhancing book—perfect for rewilding your attention with its glorious footnotes and gorgeous visuals.
Our takeaways (without spoiling the plot) for being a Nexus thinker:
* Experiment—the world is too uncertain to spend too much energy and time overly planning and analyzing, whether it’s from data or from intuition. We have to learn to dance between data and intuition, to be in both the rational and emotional at once.
* Develop the art of coexistence. We are trained (and like to think) in terms of black and white, A versus B. We have to learn how to hold opposing ideas at the same time and yet be still able to act. This is hard but artists do it all the time and leaders can learn.
* Complex systems require us to think more and more in terms of tradeoffs. Complex systems also exhibit a property called emergence, in which behaviors we cannot predict arise from the system as a whole. The job of leaders is now to create conditions that allow for successful emergence.
* The best opportunity to tackle the world’s greatest problems—those of unprecedented complexity—is by working at the Nexus, where art, technology and science converge.
Ottino and Mau challenge us to think beyond the boundaries of our specialities and training, to be curious about how others in unrelated fields discover knowledge and find their creativity. It is thinking for our age, where design becomes the method for discovery.
We hear a lot about harm from AI and how the big platforms are focused on using AI and user data to enhance their profits. What about developing AI for good for the rest of us? What would it take to design AI systems that are beneficial to humans?
In this episode, we talk with Mark Nitzberg, Executive Director of CHAI, the UC Berkeley Center for Human-Compatible AI, and head of strategic outreach for Berkeley AI Research. Mark began studying AI in the early 1980s and completed his PhD in Computer Vision and Human Perception under David Mumford at Harvard. He has built companies and products in various AI fields, including The Blindsight Corporation, a maker of assistive technologies for low vision and active aging, which was acquired by Amazon. Mark is also co-author of The AI Generation, which examines how AI reshapes human values, trust, and power around the world.
We talk with Mark about CHAI’s goal of reorienting AI research towards provably beneficial systems, why it’s hard to develop beneficial AI, variability in human thinking and preferences, the parallels between management OKRs and AI objectives, human-centered AI design and how AI might help humans realize the future we prefer.
Links:
Learn more about UC Berkeley CHAI
Subscribe to get Artificiality delivered to your email
Learn more about Sonder Studio
P.S. Thanks to Jonathan Coulton for our music
Have you ever wondered why you can recognize and remember things but can’t describe them in words? That is one of the questions that started Barbara Tversky’s contrarian research and academic career, leading to her theory that spatial thinking is the foundation of abstract thought. While most people were focused on language as central to human thinking, Barbara recognized that our relationship with the spaces we inhabit, including mental ones, provided a unique way of understanding the world. In her book, Mind in Motion, Barbara shows how spatial cognition is the foundation of thought and allows us to draw meaning from our bodies, our movements and the spaces around us.
We find Barbara’s work to be incredibly fascinating, especially as we consider the current approach to AI and technology design. While there is an extraordinary amount of investment being made into language AI, Barbara’s work causes us to wonder about the opportunities for AI that taps into our spatial reasoning. We’re just starting to scratch the surface of this idea in our design work and thank Barbara for uncovering the idea and sharing it in her wonderful book.
In this episode, we talk with Barbara about spatial thinking as the foundation of abstract thought, the linearity of spaces and perception of distances, putting thought into the world, the creative power of sketching, self-driving cars, aphantasia (aka lacking a mind’s eye) and the confusion between sight and navigational ability.
Barbara Tversky is an emerita professor of psychology at Stanford University and a professor of psychology at Teachers College at Columbia University. She is also the President of the Association for Psychological Science. Barbara has published over 200 scholarly articles about memory, spatial thinking, design, and creativity, and regularly speaks about embodied cognition at interdisciplinary conferences and workshops around the world. She lives in New York.
All major companies are working to increase the value of data science. Setting a goal may be easy but implementation often raises challenging questions. How should companies think about the role of data scientists, the challenge of increasing diversity in data science and increasing data literacy and data-driven decision-making?
In this episode, we talk with Megan Brown, the Director of Data Science for Starbucks’ Global Center of Excellence. Megan has a distinctive background: experience and expertise in both humans (with a PhD in experimental psychology) and in machines and data (as a practicing data scientist). This gives them a unique perspective on how to help others solve business problems with data.
We talk with Megan about their journey from fifth grade teacher to data scientist, how non-data science executives should make the most of data science, the responsibility of humans to act on predictions from models and make human-to-human connection, the challenge of asking data scientists to take on too many roles, the conundrum of self-service and their Aspen Institute-sponsored project to bring more diversity to data science by bringing Starbucks’ partners from the stores into data science roles.
This week we talk with Peter Sterling, the author of What is Health.
Peter has had a long career in medicine and neuroscience. He recently published a paper in JAMA Psychiatry, with Michael Platt, on why deaths of despair are increasing in the US and not in other industrial nations, drawing insights from neuroscience and anthropology. While that might not sound like AI, we do wonder what role technology might play in helping all of us make better personal decisions about our health.
Peter caught our attention with his concise and understandable description of how evolution, by optimizing for energy efficiency, has built human brains. We care about this for a couple of different reasons.
First, his work is relevant to how we make decisions as modern humans. This tells us the things that matter to us, where there are evolutionary mismatches and what we might do about it.
Second, here at Sonder Studio we care about how humans create meaning and how we learn and cohere with our communities—other brains and, by implication, other intelligences. This leads us to be naturally curious about how brains work as well as how AI works. We are obsessed with understanding and designing for this interaction. We start our decision-making workshops with key insights from Peter’s book because it really matters to understand something about how our brains are built and what makes them so different from AI.
We really enjoyed this conversation and appreciate Peter’s time with us. We think you’ll enjoy it too.
Links to things about Peter:
Lecture: What is Health? Cornell University Sept 2021
Interview by Andrew Keen: Homeostasis vs. Allostasis
Webinar: What does our species require for a healthy life
Interview: What does our species require for a healthy life?
Essay: Q&A in Current Biology
Lecture: What is Health? NEI, Oct 27, 2020
Lecture/Interview: Conversatorio sobre racismo
Essay: Human design in a post-COVID world
Essay: Attention! One morning with a roving mind
Essay: Covid-19 and the harsh reality of empathy distribution Scientific American
Essay: How neuroscience could explain the rise of addictions, heart disease, and diabetes in 21st century America TIME
Twitter: Peter Sterling @whatishealth21
It’s human to know oneself. We are able to self-monitor, understand our cognition, and recognize gaps in our knowledge. This is called metacognition—we think about how we think. We can think of it as self-awareness or the ability to understand the state of our knowledge. In this episode, we talk with Stephen Fleming, Professor of Cognitive Neuroscience at University College London.
Steve has recently published a book on this topic called Know Thyself. We wanted to explore a number of ideas with him—metacognition as uniquely human, as an important skill, whether machines need to have some form of awareness and the issue of agency and machines.
In his book, Steve proposes two routes: we either build self-awareness into machines, thereby reducing our need for our own, or we design interfaces that increase human self-awareness. If we build self-aware machines, we risk losing our own self-awareness. If we want better self-awareness, we must prioritize how an AI’s metacognition can show us its uncertainty and error. This second route is more likely to result in humans retaining autonomy: it preserves the human role of wrestling with uncertainty, seeking explanations, and making sense of the world. Perhaps the biggest insight for us regarding agency and AI is to think of agency as having a structure. We talk about how true autonomy comes from aligning our choices with our wants, and the role machines play in that alignment.
Have you ever wondered what it means to be data literate in a world of big data and AI? So many decisions now rely on information that is readable only by machines, and our statistical intuitions, which were weak before, are now practically useless. What does data literacy mean in the age of AI, and how important is it? We talked with Jevin West, assistant professor in the Information School at the University of Washington, co-founder of the DataLab, director of the Center for an Informed Public, and co-author of the acclaimed book Calling B******t, to ask these questions and more.
Have you wondered what makes people different from machines? One thing is curiosity: curiosity drives humans but, as yet, not machines. And one person who knows humans and curiosity is Michael Bungay Stanier, coach extraordinaire and well-known author of the best-selling books The Coaching Habit and The Advice Trap.
We first met Michael at the House of Beautiful Business, a very special community that gathers in Lisbon each year to make humans more human and business more beautiful. All three of us will be back at the House again this November and we highly encourage you to join in person or online.
In this interview, we wanted to find out what makes a good coach. According to Michael, it is being able to stay curious just a little bit longer. Perhaps that is what will make humans better coaches than AI for a while yet.
Making decisions with data requires some form of communication with data. But how do we communicate with numbers and characters and binary bits? The best way today is through data visualization.
Visualizing data has come a long way since the early days of hand-drawn charts and graphs. From the desktop publishing revolution of the 1980s to today's cloud-based data viz tools, anyone can create a graphical representation of data. While in many ways this democratization of data visualization is a good thing, it also means we're awash with charts and graphs, many of which fail to tell the story their creators intended. Some intentionally mislead.
Visualizing data well is more complex than you might think and that's why we reached out to a true expert in the field, Mollie Pettit, to talk about how she approaches visualization. In particular, we were interested in some of the more complex issues like how to visualize confidence in data.
About Artficiality from Helen & Dave Edwards:
Artificiality is a research and services business founded in 2019 to help people make sense of artificial intelligence and complex change. Our weekly publication provides thought-provoking ideas, science reviews, and market research and our monthly research releases provides leaders with actionable intelligence and insights for applying AI in their organizations. We provide research-based and expert-led AI strategy and complex change management services to organizations around the world.
We are artificial philosophers and meta-researchers who aim to make the philosophical more practical and the practical more philosophical. We believe that understanding AI requires synthesizing research across disciplines: behavioral economics, cognitive science, complexity science, computer science, decision science, design, neuroscience, philosophy, and psychology. We are dedicated to unraveling the profound impact of AI on our society, communities, workplaces, and personal lives.
Subscribe for free at https://www.artificiality.world.
Have you ever wondered what it takes to design AI that doesn't do more harm than good? We speak with Josh Lovejoy, one of the most experienced practitioners in the field of human-centered AI design. At the time of our recording, Josh was Head of Design for Microsoft's Ethics & Society team; since then, he has taken a role as a UX Manager in Google's Privacy & Data Protection Office. We started our interview by asking Josh why human-centered design is so important when working with AI.
Have you ever wondered what it means to be a humanist in the age of technology? How can we put human values into a machine? How can we even know what those human values are? We asked Kate O’Neill, founder of KO Insights and author of the Tech Humanist, this question and found that there’s a lot to work with when it comes to understanding humans and what they might want their machines to do.
Have you ever wondered why we humans love to rely on our intuition, even when we are surrounded by data and know that even simple algorithms can be more accurate than human judgment? We put that exact question to Tania Lombrozo, Arthur W. Marks ’19 Professor of Psychology and director of the Concepts & Cognition Lab at Princeton University, and it turns out the answer is surprisingly complex.
On a scale of 1 to 10, rate how well you understand how a toilet works. Now take a moment to explain how it works. After you’ve tried to explain it, does your rating change? If you’re like most people, the act of explaining will reveal that you don’t understand it as well as you thought you did. This is the knowledge illusion: we feel we know more than we do because we draw our knowledge from our community—both human and machine. What’s so interesting about this illusion is that it says a great deal about how we should approach others, and also about how we should approach storing our knowledge inside machines. We talked with Steven Sloman, Professor of Cognitive, Linguistic and Psychological Sciences at Brown University, who, along with Philip Fernbach, popularized this idea in the book The Knowledge Illusion. How does consciously recognizing that our knowledge is derived from our community affect our experience in the world?
In this episode, we have a conversation with Rana el Kaliouby, CEO and Co-Founder of Affectiva, about emotional AI, bias in AI and her new book, Girl Decoded. Rana is a leader in her views on ethical AI and how to design AI systems for humans—two topics that are near to our hearts! We hope you enjoy this conversation as much as we did.
Note: we open with a short conversation about IBM’s announcement that it will no longer sell general-purpose facial recognition technology—an unusual step for a big tech platform, canceling a product line for ethical reasons.
In this episode we have a conversation with Renée Cummings, Founder & CEO of Urban AI, about the issues and opportunities for AI in urban settings. While we recorded this before the current protests, our conversation with Renée couldn’t be more timely as we talk about the use of AI in law enforcement and recruiting. We think Renée’s specialized focus on AI in urban settings will be essential to understand as we, as a society, seek to bring people together and rise up out of the conflicts and economic strife that are hurting so many today.
In this episode, we open with a chat about Facebook’s acquisition of GIPHY and what the company may be trying to learn with its AI (hint: hidden meanings) and then we move on to a great conversation with Maria Axente of PWC about ethical AI at PWC and in her non-profit work with groups like UNICEF.
If you’re enjoying our podcast, please share with your friends, subscribe and give us a like—we’d appreciate your help spreading the word.
In this episode, we open with an opinionated chat about Facebook’s new oversight board and then we have a great conversation with Ted Kwartler, VP of Trusted AI at DataRobot. We think DataRobot is a very interesting AI company and Ted’s role is key to helping their customers with ethical AI.
In this episode, I have a conversation with Arash Rahnama, Head of Applied AI Research at Modzy (@arashrahnamaphd, @getmodzy, www.modzy.com). We talk about techniques for providing explainability in AI, how to pair explainability with fairness to reduce bias, how AI goes wrong and how diverse teams can provide important prior knowledge to AI systems and help govern them in the wild.
In this episode, I have a conversation with Scott Stephenson, Co-founder and CEO of Deepgram, a company which has built an end-to-end deep learning speech recognition system. We talk about why end-to-end deep learning sets Deepgram apart from other speech recognition companies, how Deepgram handles accents and bias, how enterprise speech recognition differs from consumer and Deepgram’s goal of becoming a speech understanding company.
In this episode, I have the pleasure of talking with Will Griffin, Chief Ethics Officer of Hypergiant, an Austin Texas-based AI product and services company. We talk about how Hypergiant uses Immanuel Kant’s categorical imperative in its ethics framework, how Hypergiant applies its ethics process with clients and why Will thinks ethical processes are important in AI development.
In this episode, I have the pleasure of interviewing Chelsea Barabas a PhD candidate at MIT’s Media Lab. We talk about her work on bias in the criminal justice system as well as her most recent work applying the concept of “studying up” from anthropology to the data science world.
Here are some of the links we refer to in the episode:
http://www.chelsbar.com
https://cmsw.mit.edu/profile/chelsea-barabas/
https://medium.com/@chelsea_barabas
https://www.nytimes.com/2019/07/17/opinion/pretrial-ai.html
https://journal.culanth.org/index.php/ca/article/view/ca31.3.01/367
https://discardstudies.com/2016/08/08/ethnographic-refusal-a-how-to-guide/
https://science.sciencemag.org/content/366/6464/421/tab-article-info
In this episode, Dave interviews Helen about her recent article in Quartz, “Are AI ethicists making any difference?” Some of the topics we explore include:
* Why is there a rush to hire AI ethicists in the tech industry?
* What do AI ethicists do?
* Why are people skeptical, and what are “ethics washing” and “ethics bashing”?
* What does Jacob Metcalf of Data & Society mean by saying that ethics is “the vessel which we use to hold our values?”
* What does Josh Lovejoy of Microsoft mean by saying that ethics need not be seen as a philosophical add-on “but as just good design?”
* What are AI checklists and why is their use good practice?
In this episode, we dive into the paradox of autonomy. Some of the topics we explore include:
* Why are there so many paradoxical observations in AI?
* What is the autonomy paradox?
* Is there any way that giving up more information can be autonomy-enhancing?
* What are principal reason and counter-factual explanations?
* How can we deal with the autonomy paradox through AI UX design?
For further reading, check out the most recent Artificiality article, the research paper we reference and the Buzzfeed article on Clearview AI.
Special thanks to our friend, Jonathan Coulton, for our theme music.
Take a listen as we take a deep dive into the paradox of personalization.