62 episodes • Length: 35 min • Monthly
How is the use of artificial intelligence (AI) shaping our human experience?
Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.
All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.
The podcast Pondering AI is created by Kimberly Nevala, Strategic Advisor - SAS. The podcast and artwork are embedded on this page using the public podcast feed (RSS).
Geertrui Mieke de Ketelaere reflects on the uncertain trajectory of AI, whether AI is socially or environmentally sustainable, and using AI to become good ancestors.
Mieke joined Kimberly to discuss the current trajectory of AI; uncertainties created by current AI applications; the potent intersection of humanlike AI and heightened social/personal anxiety; Russian nesting dolls (matryoshka) as an analogy for AI systems; challenges with open source AI; the current state of public literacy and regulation; the Safe AI Companion Collective; social and environmental sustainability; expanding our POV beyond human intelligence; and striving to become good ancestors in our use of AI and beyond.
A transcript of this episode is here.
Geertrui Mieke de Ketelaere is an engineer, strategic advisor and Adjunct Professor of AI at Vlerick Business School focused on sustainable, ethical, and trustworthy AI. A prolific author, speaker and researcher, Mieke is passionate about building bridges between business, research and government in the domain of AI. Learn more about Mieke’s work here: www.gmdeketelaere.com
Vaishnavi J respects youth, advises considering the youth experience in all digital products, and asserts age-appropriate design is an underappreciated business asset.
Vaishnavi joined Kimberly to discuss: the spaces youth inhabit online; the four pillars of safety by design; age-appropriate design choices; kids’ unique needs and vulnerabilities; what both digital libertarians and abstentionists get wrong; why great experiences and safety aren’t mutually exclusive; how younger cohorts perceive harm; centering youth experiences; business benefits of age-appropriate design; KOSPA and the duty of care; implications for content policy and product roadmaps; the youth experience as digital table stakes and an engine of growth.
A transcript of this episode is here.
Vaishnavi J is the founder and principal of Vyanams Strategies (VYS), helping companies, civil society, and governments build healthier online communities for young people. VYS leverages extensive experience at leading technology companies to develop tactical product and policy solutions for child safety and privacy. These include product guidance, content policies, operations workflows, trust & safety strategies, and organizational design.
Additional Resources:
Monthly Youth Tech Policy Brief: https://quire.substack.com
Kathleen Walch and Ron Schmelzer analyze AI patterns and factors hindering adoption, why AI is never ‘set it and forget it’, and the criticality of critical thinking.
The dynamic duo behind Cognilytica (now PMI) join Kimberly to discuss: the seven (7) patterns of AI; fears and concerns stymying AI adoption; the tension between top-down and bottom-up AI adoption; the AI value proposition; what differentiates CPMAI from good old-fashioned project management; AI’s Red Queen moment; critical thinking as a uniquely human skill; the DKIUW pyramid and limits of machine understanding; why you can’t sit AI out.
A transcript of this episode is here.
Kathleen Walch and Ron Schmelzer are the co-founders of Cognilytica, an AI research and analyst firm which was acquired by PMI (Project Management Institute) in September 2024. Their work, which includes the CPMAI project management methodology and the top-rated AI Today podcast, focuses on enabling AI adoption and skill development.
Additional Resources:
CPMAI certification: https://courses.cognilytica.com/
AI Today podcast: https://www.cognilytica.com/aitoday/
Dr. Marisa Tschopp explores our evolving, often odd, expectations for AI companions while embracing radical empathy, resisting relentless PR and trusting in humanity.
Marisa and Kimberly discuss recent research into AI-based conversational agents, the limits of artificial companionship, implications for mental health therapy, the importance of radical empathy and differentiation, why users defy simplistic categorization, corporate incentives and rampant marketing gags, reasons for optimism, and retaining trust in human connections.
A transcript of this episode is here.
Dr. Marisa Tschopp is a Psychologist, a Human-AI Interaction Researcher at scip AG and an ardent supporter of Women in AI. Marisa’s research focuses on human-AI relationships, trust in AI, agency, behavioral performance assessment of conversational systems (A-IQ), and gender issues in AI.
Additional Resources:
The Impact of Human-AI Relationship Perception on Voice Shopping Intentions in Human Machine Collaboration Publication
How do users perceive their relationship with conversational AI? Publication
KI als Freundin: Funktioniert eine Chatbot-Beziehung? TV Show (German, SRF)
Friends with AI? It’s complicated! TEDxBoston Talk
John Danaher assesses how AI may reshape ethical and social norms, minds the anticipatory gap in regulation, and applies the MVPP to decide against digitizing himself.
John parlayed an interest in science fiction into researching legal philosophy, emerging technology, and society. Flipping the script on ethical assessment, John identifies six (6) mechanisms by which technology may reshape ethical principles and social norms. John further illustrates the impact AI can have on decision sets and relationships. We then discuss the dilemma articulated by the aptly named anticipatory gap, in which the effort required to regulate nascent tech is proportional to our understanding of its ultimate effects.
Finally, we turn our attention to the rapid rise of digital duplicates. John provides examples and proposes a Minimally Viable Permissibility Principle (MVPP) for evaluating the use of digital duplicates. Emphasizing the difficulty of mitigating the risks posed after a digital duplicate is let loose in the wild, John declines the opportunity to digitally duplicate himself.
John Danaher is a Sr. Lecturer in Ethics at the NUI Galway School of Law. A prolific scholar, he is the author of Automation and Utopia: Human Flourishing in a World Without Work (Harvard University Press, 2019). Papers referenced in this episode include The Ethics of Personalized Digital Duplicates: A Minimally Viable Permissibility Principle and How Technology Alters Morality and Why It Matters.
A transcript of this episode is here.
Ben Bland expressively explores emotive AI’s shaky scientific underpinnings, the gap between reality and perception, popular applications, and critical apprehensions.
Ben exposes the scientific contention surrounding human emotion. He talks terms (emotive? empathic? not telepathic!) and outlines a spectrum of emotive applications. We discuss the powerful, often subtle, and sometimes insidious ways emotion can be leveraged. Ben explains the negative effects of perpetual positivity and why drawing clear red lines around the tech is difficult.
He also addresses the qualitative sea change brought about by large language models (LLMs), implicit vs explicit design and commercial objectives. Noting that the social and psychological impacts of emotive AI systems have been poorly explored, he muses about the potential to actively evolve your machine’s emotional capability.
Ben confronts the challenges of defining standards when the language is tricky, the science is shaky, and applications are proliferating. Lastly, Ben jazzes up empathy as a human superpower. While optimistic about empathic AI’s potential, he counsels proceeding with caution.
Ben Bland is an independent consultant in ethical innovation. An active community contributor, Ben is the Chair of the IEEE P7014 Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems and Vice-Chair of IEEE P7014.1 Recommended Practice for Ethical Considerations of Emulated Empathy in Partner-based General-Purpose Artificial Intelligence Systems.
A transcript of this episode is here.
Philip Rathle traverses from knowledge graphs to LLMs and illustrates how loading the dice with GraphRAG enhances deterministic reasoning, explainability and agency.
Philip explains why knowledge graphs are a natural fit for capturing data about real-world systems. Starting with Kevin Bacon, he identifies many ‘graphy’ problems confronting us today. Philip then describes how interconnected systems benefit from the dynamism and data network effects afforded by knowledge graphs.
Next, Philip provides a primer on how Retrieval Augmented Generation (RAG) loads the dice for large language models (LLMs). He also differentiates between vector- and graph-based RAG. Along the way, we discuss the nature and locus of reasoning (or lack thereof) in LLM systems. Philip articulates the benefits of GraphRAG including deterministic reasoning, fine-grained access control and explainability. He also ruminates on graphs as a bridge to human agency, since graphs can be reasoned over by both humans and machines. Lastly, Philip shares what is happening now and next in GraphRAG applications and beyond.
Philip Rathle is the Chief Technology Officer (CTO) at Neo4j. Philip was a key contributor to the development of the GQL standard and recently authored The GraphRAG Manifesto: Adding Knowledge to GenAI (neo4j.com), a go-to resource for all things GraphRAG.
A transcript of this episode is here.
Matthew Scherer makes the case for bottom-up AI adoption, being OK with not using AI, innovation as a relative good, and transparently safeguarding workers’ rights.
Matthew champions a worker-led approach to AI adoption in the workplace. He traverses the slippery slope from safety to surveillance and guards against unnecessarily intrusive solutions.
Matthew then illustrates why AI isn’t great at making employment decisions, even in objectively data-rich environments such as the NBA. He also addresses the intractable problem of bias in hiring and flawed comparisons between humans and AI. We discuss the unquantifiable dynamics of human interactions and being OK with our inability to automate hiring and firing.
Matthew explains how the patchwork of emerging privacy regulations reflects cultural norms towards workers. He invokes the Ford Pinto and Titan submersible catastrophes when challenging the concept of innovation as an intrinsic good. Matthew then makes the case for transparency as a gateway to enforcing existing civil rights and laws.
Matthew Scherer is a Senior Policy Counsel for Workers' Rights and Technology at the Center for Democracy and Technology (CDT). He studies how emerging technologies affect workers in the workplace and labor market. Matt is also an Advisor for the International Center for Advocates Against Discrimination.
A transcript of this episode is here.
Heidi Lanford connects data to cocktails and campaigns while considering the nature of data disruption, getting from analytics to AI, and using data with confidence.
Heidi studied mathematics and statistics and never looked back. Reflecting on analytics then and now, she confirms the appetite for data has never been higher. Yet adoption, momentum and focus remain evergreen barriers. Heidi issues a cocktail party challenge while discussing the core competencies of effective data leaders.
Heidi believes data and CDOs are disruptive by nature. But this only matters if your business incentives are properly aligned. She revels in agile experimentation while counseling that speed is not enough. We discuss how good old-fashioned analytics put the right pressure on the foundational data needed for AI.
Heidi then campaigns for endemic data literacy. Along the way she pans just-in-time (JIT) holiday training and promotes confident decision making as the metric that matters. Never saying never, Heidi celebrates human experts and the spotlight AI is shining on data.
Heidi Lanford is a Global Chief Data & Analytics Officer who has served as Chief Data Officer (CDO) at the Fitch Group and VP of Enterprise Data & Analytics at Red Hat (IBM). In 2023, Heidi co-founded two AI startups, LiveFire AI and AIQScore. Heidi serves as a Board Member at the University of Virginia School of Data Science, is a Founding Board Member of the Data Leadership Collaborative, and an Advisor to Domino Data Labs and Linea.
A transcript of this episode is here.
Marianna B. Ganapini contemplates AI nudging, entropy as a bellwether of risk, accessible ethical assessment, ethical ROI, the limits of trust and irrational beliefs.
Marianna studies how AI-driven nudging ups the ethical ante relative to autonomy and decision-making. This is a solvable problem that may still prove difficult to regulate. She posits that the level of entropy within a system correlates with risks seen and unseen. We discuss the relationship between risk and harm and why a lack of knowledge imbues moral responsibility. Marianna describes how macro-level assessments can effectively take an AI system’s temperature (risk-wise). Addressing the evolving responsible AI discourse, Marianna asserts that limiting trust to moral agents is overly restrictive. The real problem is conflating trust between humans with the trust afforded any number of entities from your pet to your Roomba. Marianna also cautions against hastily judging another’s beliefs, even when they overhype AI. Acknowledging progress, Marianna advocates for increased interdisciplinary efforts and ethical certifications.
Marianna B. Ganapini is a Professor of Philosophy and Founder of Logica.Now, a consultancy which seeks to educate and engage organizations in ethical AI inquiry. She is also a Faculty Director at the Montreal AI Ethics Institute and Visiting Scholar at the ND-IBM Tech Ethics Lab.
A transcript of this episode is here.
Miriam Vogel disputes AI is lawless, endorses good AI hygiene, reviews regulatory progress and pitfalls, boosts literacy and diversity, and remains net positive on AI.
Miriam Vogel traverses her unforeseen path from in-house counsel to public policy innovator. Miriam acknowledges that AI systems raise some novel questions but reiterates there is much to learn from existing policies and laws. Drawing analogies to flying and driving, Miriam demonstrates the need for both standardized and context-specific guidance.
Miriam and Kimberly then discuss what constitutes good AI hygiene, what meaningful transparency looks like, and why a multi-disciplinary mindset matters. While reiterating the business value of beneficial AI, Miriam notes businesses are now on notice regarding their AI liability. She is clear-sighted regarding the complexity, but views regulation done right as a means to spur innovation and trust. In that vein, Miriam outlines the progress to date and work still to come to enact federal AI policies and raise our collective AI literacy. Lastly, Miriam raises questions everyone should ask to ensure we each benefit from the opportunities AI presents.
Miriam Vogel is the President and CEO of EqualAI, a non-profit movement committed to reducing bias and responsibly governing AI. Miriam also chairs the US National AI Advisory Committee (NAIAC).
A transcript of this episode is here.
Melissa Sariffodeen contends learning requires unlearning, ponders human-AI relationships, prioritizes outcomes over outputs, and values the disquiet of constructive critique.
Melissa artfully illustrates barriers to innovation through the eyes of a child learning to code and a seasoned driver learning to not drive. Drawing on decades of experience teaching technical skills, she identifies why AI creates new challenges for upskilling. Kimberly and Melissa then debate viewing AI systems through the lens of tools vs. relationships. An avowed lifelong learner, Melissa believes prior learnings are sometimes detrimental to innovation. Melissa therefore advocates for unlearning as a key step in unlocking growth. She also proposes a new model for organizational learning and development. A pragmatic tech optimist, Melissa acknowledges the messy middle and reaffirms the importance of diversity and critically questioning our beliefs and habits.
Melissa Sariffodeen is the founder of The Digital Potential Lab, co-founder and CEO of Canada Learning Code and a Professor at the Ivey Business School at Western University where she focuses on the management of information and communication technologies.
A transcript of this episode is here.
Shannon Mullen O’Keefe champions collaboration, serendipitous discovery, curious conversations, ethical leadership, and purposeful curation of our technical creations.
Shannon shares her professional journey from curating leaders to innovative ideas. From lightbulbs to online dating and AI voice technology, Shannon highlights the simultaneously beautiful and nefarious applications of tech and the need to assess our creations continuously and critically. She highlights powerful insights spurred by the values and questions posed in the book 10 Moral Questions: How to Design Tech and AI Responsibly. We discuss the ‘business of business,’ consumer appetite for ethical businesses, and why conversation is the bedrock of culture. Throughout, Shannon highlights the importance and joy of discovery, embracing nature, sitting in darkness, and mustering the will to change our minds, even if that means turning our creations off.
Shannon Mullen O’Keefe is the Curator of the Museum of Ideas and co-author of the Q Collective’s book 10 Moral Questions: How to Design Tech and AI Responsibly. Learn more at https://www.10moralquestions.com/.
A transcript of this episode is here.
Sarah Gibbons and Kate Moran riff on the experience of using current AI tools, how AI systems may change our behavior and the application of AI to human-centered design.
Sarah and Kate share their non-linear paths to becoming leading user experience (UX) designers. Defining the human-centric mindset, Sarah stresses that intent is design and we are all designers. Kate and Sarah then challenge teams to resist short-term problem hunting for AI alone. This leads to an energized and frank debate about the tensions created by broad availability of AI tools with “shitty” user interfaces, why conversational interfaces aren’t the be-all-end-all and whether calls for more discernment and critical thinking are reasonable or even new. Kate and Sarah then discuss their research into our nascent AI mental models and emergent impacts on user behavior. Kate discusses how AI can be used for UX design along with some far-fetched claims. Finally, both Kate and Sarah share exciting areas of ongoing research.
Sarah Gibbons and Kate Moran are Vice Presidents at Nielsen Norman Group where they lead strategy, research, and design in the areas of human-centered design and user experience (UX).
A transcript of this episode is here.
Simon Johnson takes on techno-optimism, the link between technology and human well-being, the law of intended consequences, the modern union remit and political will.
In this sobering tour through time, Simon proves that widespread human flourishing is not intrinsic to tech innovation. He challenges the ‘productivity bandwagon’ (an economic maxim so pervasive it did not have a name) and shows that productivity and market polarization often go hand-in-hand. Simon also views big tech’s persuasive powers through the lens of OpenAI’s board debacle.
Kimberly and Simon discuss the heyday of shared worker value, the commercial logic of automation and augmenting human work with technology. Simon highlights stakeholder capitalism’s current view of labor as a cost rather than people as a resource. He underscores the need for active attention to task creation, strong labor movements and participatory political action (shouting and all). Simon believes that shared prosperity is possible. Make no mistake, however, achieving it requires wisdom and hard work.
Simon Johnson is the Head of the Economics and Management group at MIT’s Sloan School of Management. Simon co-authored the stellar book “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity” with Daron Acemoglu.
A transcript of this episode is here.
Professor Rose Luckin provides an engaging tutorial on the opportunities, risks, and challenges of AI in education and why AI raises the bar for human learning.
Acknowledging AI’s real and present risks, Rose is optimistic about the power of AI to transform education and meet the needs of diverse student populations. From adaptive learning platforms to assistive tools, Rose highlights opportunities for AI to make us smarter, supercharge learner-educator engagement and level the educational playing field. Along the way, she confronts overconfidence in AI, the temptation to offload challenging cognitive workloads and the risk of constraining a learner’s choices prematurely. Rose also adroitly addresses conflicting visions of human quantification as the holy grail and the seeds of our demise. She asserts that AI ups the ante on education: how else can we deploy AI wisely? Rising to the challenge requires the hard work of tailoring strategies for specific learning communities and broad education about AI itself.
Rose Luckin is a Professor of Learner Centered Design at the UCL Knowledge Lab and Founder of EDUCATE Ventures Research Ltd., a London hub for educational technology start-ups, researchers and educators involved in evidence-based educational technology and leveraging data and AI for educational benefit. Explore Rose’s 2018 book Machine Learning and Human Intelligence (free after creating account) and the EDUCATE Ventures newsletter The Skinny.
A transcript of this episode is here.
Katrina Ingram addresses AI power dynamics, regulatory floors and ethical ceilings, inevitability narratives, self-limiting predictions, and public AI education.
Katrina traces her career from communications to her current pursuits in applied AI ethics. Showcasing her way with words, Katrina dissects popular AI narratives. While contemplating AI FOMO, she cautions against an engineering mentality and champions the power to say ‘no.’ Katrina contrasts buying groceries with AI solutions and describes regulations as the floor and ethics as the ceiling for responsible AI. Katrina then considers the sublimation of AI ethics into AI safety and risk management, whether Sci-Fi has led us astray and who decides what. We also discuss the law of diminishing returns, the inevitability narrative around AI, and how predictions based on the past can narrow future possibilities. Katrina commiserates with consumers but cautions against throwing privacy to the wind. Finally, she highlights the gap in funding for public education and literacy.
Katrina Ingram is the Founder & CEO of Ethically Aligned AI, a Canadian consultancy enabling organizations to practically apply ethics in their AI pursuits.
A transcript of this episode is here.
Paulo Carvão discusses AI’s impact on the public interest, emerging regulatory schemes, progress over perfection, and education as the lynchpin for ethical tech.
In this thoughtful discussion, Paulo outlines the cultural, ideological and business factors underpinning the current data economy: an economy in which the manipulation of personal data into private corporate assets is foundational. Opting for optimism over cynicism, Paulo advocates for a first principles approach to ethical development of AI and emerging tech. He argues that regulation creates a positive tension that enables innovation. Paulo examines the emerging regulatory regimes of the EU, the US and China. Preferring progress over perfection, he describes why regulating technology for technology’s sake is fraught. Acknowledging the challenge facing existing school systems, Paulo articulates the foundational elements required of a ‘bilingual’ education to enable future generations to “do the right things.”
Paulo Carvão is a Senior Fellow at the Harvard Advanced Leadership Initiative, a global tech executive and investor. Follow his writings and subscribe to his newsletter on the Tech and Democracy substack.
A transcript of this episode is here.
Dr. Christina Jayne Colclough reflects on AI Regulations at Work.
In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Giselle Mota reflects on Inclusion at Work in the age of AI.
In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Ganes Kesari reflects on generative AI (GAI) in the Enterprise.
In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Chris McClean reflects on Digital Ethics and Regulation in AI today.
In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Dr. Erica Thompson reflects on Making Model Decisions about and with AI.
In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
To learn more, check out Erica’s book Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It
Roger Spitz reflects on Upskilling Human Decision Making in the age of AI.
In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
To learn more, check out Roger’s book series The Definitive Guide to Thriving on Disruption
Sheryl Cababa reflects on Systems Thinking in AI design.
In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
To learn more, check out Sheryl’s book Closing the Loop: Systems Thinking for Designers
Ilke Demir reflects on Generative AI (GAI) Detection and Protection.
In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Professor J Mark Bishop reflects on large language models (LLM) and beyond.
In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Henrik Skaug Sætra reflects on Environmental and Social Sustainability with AI.
In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next. To learn more, check out Henrik’s latest book: Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism
Yonah Welker reflects on Policymaking, Inclusion and Accessibility in AI today.
In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Marisa Tschopp reflects on Human-AI interactions in AI.
In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Patrick Hall drops in to provide a current take on risk, reward and regulation in AI today.
In this bonus episode, Patrick reflects on the evolving state of play in AI regulations, consumer awareness and education.
Ganes Kesari confronts AI hype and calls for balance, reskilling, data literacy, decision intelligence and data storytelling to adopt AI productively.
Ganes reveals the reality of AI and analytics adoption in the enterprise today. Highlighting extreme divides in understanding and expectations, Ganes provides a grounded point of view on delivering sustained business value.
Cautioning against a technocentric approach, Ganes discusses the role of data literacy and data translators in enabling AI adoption. Turning to common barriers to change, Kimberly and Ganes discuss growing resistance from technologists, not just end users. Ganes muses about the impact of AI on creative tasks and his own experiences with generative AI. Ganes also underscores the need to address workforce reskilling yet remains optimistic about the future of human endeavor. While discussing the need for improved decision-making, Ganes identifies decision intelligence as a critical new business competency. Finally, Ganes strongly advocates for taking a business-first approach and using data storytelling as part of the responsible AI and analytics toolkit.
Ganes Kesari is the co-founder and Chief Decision Scientist for Gramener and Innovation Titan.
A transcript of this episode is here.
Dr. Christina Colclough addresses tech determinism, the value of human labor, managerial fuzz, collective will, digital rights, and participatory AI deployment.
Christina traces the path of digital transformation and the self-sustaining narrative of tech determinism, as well as how the perceptions of the public, the C-Suite and workers (aka wage earners) diverge. This divergence highlights the urgent need for robust public dialogue, education and collective action.
Championing constructive debate, Christina decries ‘for-it-or-against-it’ views on AI and embraces the Luddite label. Kimberly and Christina discuss the value of human work, we vs. they work cultures, the divisiveness of digital platforms, and sustainable governance. Christina questions why emerging AI regulations give workers short shrift and whether regulation is being privatized. She underscores the dangers of stupid algorithms and the quantification of humans, but notes that knowledge is key to tapping into AI’s benefits while avoiding harm. Christina ends with a persuasive call for responsible regulation, radical transparency and widespread communication to combat collective ignorance.
Dr. Christina Jayne Colclough is the founder of The Why Not Lab where she fiercely advocates for worker rights and dignity for all in the digital age.
A transcript of this episode is here.
Reid Blackman confronts whack-a-mole approaches to AI ethics, ethical ‘do goodery,’ squishy values, moral nuance, advocacy vs. activism and overfitting for AI.
Reid distinguishes AI for ‘not bad’ from AI ‘for good’ and corporate social responsibility. He describes how the language of risk creates a bridge between ethics and business. Debunking the notion of ethicists as moral priests, Reid provides practical steps for making ethics palatable and effective.
Reid and Kimberly discuss developing organizational muscle to reckon with moral nuance. Reid emphasizes that disagreement and uncertainty aren’t unique to ethics. Nor do squishy value statements make ethics squishy. Reid identifies a cocktail of motivations driving organizations to engage, or not, in AI ethics. We also discuss the tendency for self-regulation to cede to market forces and the government’s role in ensuring access to basic human goods. Cautioning against overfitting an ethics program to AI alone, Reid illustrates the benefits of distinguishing digital ethics from ethics writ large. Last but not least, Reid considers how organizations may stitch together responses to the evolving regulatory patchwork.
Reid Blackman is the author of “Ethical Machines” and the CEO of Virtue Consultants.
A transcript of this episode is here.
Ilke Demir depicts the state of generative AI, deepfakes for good, the emotional shelf life of synthesized media, and methods to identify AI-generated content.
Ilke provides a primer on traditional generative models and generative AI. Outlining the fast-evolving capabilities of generative AI, she also notes their current lack of controls and transparency. Ilke then clarifies the term deepfake and highlights applications of ‘deepfakes for good.’
Ilke and Kimberly discuss whether the explosion of generated imagery creates an un-reality that sets ‘perfectly imperfect’ humans up for failure. An effervescent optimist, Ilke makes a compelling case that the true value of photos and art comes from our experiences and memories. She then provides a fascinating tour of emerging techniques to detect and indelibly identify generated media. Last but not least, Ilke affirms the need for greater public literacy and accountability by design.
Ilke Demir is a Sr. Research Scientist at Intel. Her research team focuses on generative models for digitizing the real world, deepfake detection and generation techniques.
A transcript of this episode is here.
Professor J Mark Bishop reflects on the trickiness of language, how LLMs work, why ChatGPT can’t understand, the nature of AI and emerging theories of mind.
Mark explains what large language models (LLM) do and provides a quasi-technical overview of how they work. He also exposes the complications inherent in comprehending language. Mark calls for more philosophical analysis of how systems such as GPT-3 and ChatGPT replicate human knowledge yet understand nothing. Noting the astonishing outputs resulting from more or less auto-completing large blocks of text, Mark cautions against being taken in by an LLM’s disarming façade.
Mark then explains the basis of the Chinese Room thought experiment and the hotly debated conclusion that computation does not lead to semantic understanding. Kimberly and Mark discuss the nature of learning through the eyes of a child and whether computational systems can ever be conscious. Mark describes the phenomenal experience of understanding (aka what it feels like). And how non-computational theories of mind may influence AI development. Finally, Mark reflects on whether AI will be good for the few or the many.
Professor J Mark Bishop is the Professor of Cognitive Computing (Emeritus) at Goldsmiths, University of London and Scientific Advisor to FACT360.
A transcript of this episode is here.
Chris McClean reflects on ethics vs. risk, ethically positive outcomes, the nature of trust, looking beyond ourselves, privacy at work and in the metaverse.
Chris outlines the key differences between digital ethics and risk management. He emphasizes the discovery of positive outcomes as well as harms and where a data-driven approach can fall short. From there, Chris outlines a comprehensive digital ethics framework and why starting with impact is key. He then describes a pragmatic approach for making ethics accessible without sacrificing rigor.
Kimberly and Chris discuss the definition of trust, the myriad reasons we might trust someone or something, and why trust isn’t set-it-and-forget-it. From your smart doorbell to self-driving cars and social services, Chris argues persuasively for looking beyond ‘how does this affect me.’ Highlighting Eunice Kyereme’s work on digital makers and takers, Chris describes the role we each play – however unwittingly – in creating the digital ecosystem. We then discuss surveillance vs. monitoring in the workplace and the potential for great good and abuse inherent in the Metaverse. Finally, Chris stresses that ethically positive outcomes go beyond ‘tech for good’ and that ethics is good business.
Chris McClean is the Global Head of Digital Ethics at Avanade and a PhD candidate in Applied Ethics at the University of Leeds.
A transcript of this episode is here.
Henrik Skaug Sætra contends humans aren’t mere machines, assesses AI through a sustainable development lens and weighs the effect of political imbalances and ESG.
Henrik embraces human complexity. He advises against applying AI to naturally messy problems or to influence populations least able to resist. Henrik outlines how the UN Sustainable Development Goals (SDG) can identify beneficial and marketable avenues for AI. He also describes SDG’s usefulness in ethical impact assessment. Championing affordable and equitable access to technology, Henrik shows how disparate impacts occur between individuals, groups and society. Along the way, Kimberly and Henrik discuss political imbalances, the technocratic nature of emerging regulations and why we shouldn’t expect corporations to be broadly ethical of their own accord. Outlining his AI ESG protocol, Henrik surmises that qualitative rigor can address gaps in quantitative analysis alone. Finally, Henrik encourages the proactive use of SDGs and ESG to drive innovation and opportunity.
Henrik is Head of the Digital Society and an Associate Professor at Østfold University College. He is a political theorist focusing on the political, ethical, and social implications of technology.
A transcript of this episode can be found here.
Dr. Mark Coeckelbergh is a Professor of Philosophy of Media and Technology, a member of the High-Level Expert Group on Artificial Intelligence (EC) and the Austrian Council on Robotics and AI.
In this insightful discussion, Mark explains why AI systems are not merely tools or strictly rational endeavors. He describes the challenges created when AI systems imitate human capabilities and how human sciences help address the messy realities of AI. Mark also demonstrates how political philosophy makes conversations about multidimensional topics such as bias, fairness and freedom more productive. Kimberly and Mark discuss the difficulty with global governance, the role of scientific expertise and technology in society, and the need for political imagination to govern emerging technologies such as AI. Along the way, Mark illustrates the debate about how AI systems could vs. should be used through the lens of gun control and climate change. Finally, Mark sounds a cautionary note about the potential for AI to undermine our fragile democratic institutions.
A transcript of this episode can be found here.
Patrick Hall is the Principal Scientist at bnh.ai.
Patrick artfully illustrates how data science has become divorced from scientific rigor. At least, that is, in popular conceptions of the practice. Kimberly and Patrick discuss the pernicious influence of the McNamara Fallacy, applying the scientific method to algorithmic development and keeping an open mind without sacrificing concept validity. Patrick addresses the recent hubbub around AI sentience, cautions against using AI in social contexts and identifies the problems AI algorithms are best suited to solve. Noting AI is no different than any other mission-critical software, he outlines the investment and oversight required for AI programs to deliver value. Patrick promotes managing AI systems like products and makes the case for why performance in the lab should not be the first priority.
A transcript of this episode can be found here.
Fernando Lucini is the Global Data Science & ML Engineering Lead (aka Chief Data Scientist) at Accenture.
Fernando Lucini outlines common uses for AI-generated synthetic data. He emphasizes that synthetic data is a facsimile – close, but not quite real – and debunks the notion it is inherently private. Kimberly and Fernando discuss the potential pitfalls in synthetic data sets, the emergent need for standard controls, and why ensuring quality – much less fairness – is not simple. Fernando assesses the current state of the synthetic data market and the work still to be done to enable broad-scale adoption. Tipping his hat to fabulous achievements such as GPT-3 and Dall-E, Fernando identifies multiple ways synthetic data can be used for good works and creative endeavors.
A transcript of this episode can be found here.
Roger Spitz is the CEO of Techistential and Chairman of the Disruptive Futures Institute.
In this thought-provoking discussion, Roger discusses why neither humans nor AI systems are great at decision making in complex environments. But why humans should be. Roger unveils the insidious influence of AI systems on human decisions and why uncertainty is a pre-requisite for human choice, freedom, and agency. Kimberly and Roger discuss the implications of complexity, the rising cost of poor assumptions, and the dangerous allure of delegating too many decisions to AI-enabled machines. Outlining the AAA (antifragile, anticipatory, agile) model for decision-making in the face of deep uncertainty, Roger differentiates foresight from strategic planning and anticipatory agility from ‘move fast and break things.’ Last but not least, Roger argues that current educational incentives run counter to nurturing the mindset and skills needed to thrive in our increasingly complex, emergent world.
A transcript of this episode can be found here.
Dr. Dorothea Baur is an ethicist and independent consultant on the topics of ethics, responsibility and sustainability in tech and finance.
Dorothea debunks common ethical misconceptions and explores the novel issues that arise when applying ethics to technology. Kimberly and Dorothea discuss the risks posed by risk management-based approaches to tech ethics. As well as the “unholy collision” between the pursuit of scale and universal generalization. Dorothea reluctantly gives a nod to Milton Friedman when linking ethics to material business outcomes. Along the way, Dorothea illustrates how stakeholder engagement is evolving and the power of the employee. Noting that algorithms do not have agency and will never be ethical, Dorothea persuasively articulates our moral responsibility to retain responsibility for our AI creations.
A transcript of this episode can be found here.
Marisa Tschopp is a Human-AI interaction researcher at scip AG and Co-Chair of the IEEE Agency and Trust in AI Systems Committee.
Marisa answers the question ‘what is trust?’ and compares trust between humans to trust in a machine. Differentiating trust from trustworthiness, Marisa emphasizes the importance of considering the context and motivation behind AI systems. Kimberly and Marisa discuss the pros and cons of endowing AI systems with human characteristics (aka anthropomorphizing) and why ‘do you trust AI?’ is the wrong question. Debunking the concept of ‘The AI’, Marisa outlines practices for calibrating trust in AI systems. A self-described skeptical optimist, Marisa also shares her research into how people perceive their relationships with AI-enabled machines and how these patterns may change over time.
A transcript of this episode can be found here.
Dr Erica Thompson is a Senior Policy Fellow in Ethics of Modelling and Simulation at the LSE Data Science Institute.
Using the trusty-ish weather forecast as a starting point, Erica highlights the gaps to be minded when applying models in real-life. Kimberly and Erica discuss the role of expert judgement and intuition, the orthodoxy of data-driven cultures, models as engines not cameras, and why exposing uncertainty improves decision-making. Erica illustrates why it is so easy to become overconfident in models. She shows how value judgements are embedded in every step of model development (and hidden in math), why chameleons and accountability don’t mix, and considerations for using model outputs to think or decide effectively. Looking forward, Erica foresees a future in which values rather than data drive decision-making.
A transcript of this episode can be found here.
Sheryl Cababa is the Chief Design Officer at Substantial where she conducts research, develops design strategies and advocates for human-centric outcomes.
From the infinite scroll to Twitter edits, Sheryl illustrates how current design practices unwittingly undermine human agency. Often while delivering exactly what a user wants. She refutes the need to categorically eliminate the term ‘users’ while showing how a singular user focus has led us astray. Sheryl then outlines how systems thinking can reorient existing design practices toward human-centric outcomes. Along the way, Kimberly and Sheryl discuss the limits of empathy, the evolving ethos of unintended consequences and embracing nuance. While acknowledging the challenges ahead, Sheryl remains optimistic about our ability to design for human well-being not just expediency or profit.
A transcript of this episode can be found here.
Our next episode explores the limits of model land with Dr Erica Thompson. Subscribe now so you don’t miss it.
Kate O’Neill is an executive strategist, the Founder and CEO of KO Insights, and author dedicated to improving the human experience at scale.
In this paradigm-shifting discussion, Kate traces her roots from a childhood thinking heady thoughts about language and meaning to her current mission as ‘The Tech Humanist’. Following this thread, Kate illustrates why meaning is the core of what makes us human. She urges us to champion meaningful innovation and reject the notion that we are victims of a predetermined future.
Challenging simplistic analysis, Kate advocates for applying multiple lenses to every situation: the individual and the collective, uses and abuses, insight and foresight, wild success and abject failure. Kimberly and Kate acknowledge but emphatically disavow current norms that reject nuanced discourse or conflate it with ‘both-side-ism’. Emphasizing that everything is connected, Kate shows how to close the gap between human-centricity and business goals. She provides a concrete example of how innovation and impact depend on identifying what is going to matter, not just what matters now. Ending on a strategically optimistic note, Kate urges us to anchor on human values and relationships, habituate to change and actively architect our best human experience – now and in the future.
A transcript of this episode can be found here.
Thank you for joining us for Season 2 of Pondering AI. Join us next season as we ponder the ways in which AI continues to elevate and challenge our humanity. Subscribe to Pondering AI now so you don’t miss it.
Giselle Mota is a Principal Consultant for the Future of Work at ADP where she advises organizations on human agency, diversity and learning in the age of AI.
In this energetic discussion, Giselle shares how navigating dyslexia spawned a passion for technology and enabling learning at work. Giselle stresses that human agency and automation are only mutually exclusive when AI is employed with the wrong end in mind. Prioritizing human experience over ‘doing more with less’, Giselle explores the impact – good and bad – of AI systems on humans at work today.
While ruminating on the future happening now, Giselle puts the onus on organizations to ensure no employee is left behind. From the warehouse floor to HR, the importance of diverse perspectives, rigorous due diligence and critical thinking when deploying AI systems is underscored. Along the way, Kimberly and Giselle dissect what AI algorithms can and cannot reasonably predict. Giselle then defines the leadership mindsets and talent needed to bring AI to work appropriately. With infectious optimism, she imposes a reality check on our innate desire to “just do cool things”. Finally, in a rousing call to action, Giselle makes a forceful argument for robust accountability and for making ethics endemic to every human endeavor, including AI.
A transcript of this episode can be found here.
Our final episode of Season 2 features Kate O’Neill. A tech humanist and author of ‘A Future so Bright’ Kate will discuss how we can architect the future of AI with strategic optimism. Subscribe to Pondering AI now so you don’t miss it.
Baroness Beeban Kidron is an award-winning filmmaker, a Crossbench Peer in the UK House of Lords and the Founder and Chair of the 5Rights Foundation.
In this eye-opening discussion, Beeban vividly describes how the seed for 5Rights was planted while getting up close and personal with teenagers navigating the physical and digital realms ‘In Real Life’. Beeban sounds a resounding alarm about why treating all humans as equal on the internet is regressive. As well as how existing business models have created a perfect societal storm, especially for children.
Intertwining the voices of these underserved and underrepresented stakeholders with some shocking facts, Beeban illustrates the true impact of the current digital experiment on young people. In that vein, Kimberly and Beeban examine behaviors we implicitly condone and, in fact, promote in the digital realm that would never pass muster in so-called real life. Speaking to the brilliantly terrifying Twisted Toys campaign, Beeban shows how storytelling can make these critical yet oft sensitive topics accessible. Finally, Beeban speaks about critical breakthroughs such as the Age-Appropriate Design Code, positive action being taken by digital platforms in response and the long road still ahead.
A transcript of this episode can be found here.
Our next episode features Giselle Mota. Giselle is a Principal Consultant for the Future of Work at ADP where she advises organizations on human agency, diversity and learning in the age of AI. Subscribe to Pondering AI now so you don’t miss it.
Vincent de Montalivet is the Global AI Sustainability Leader at Capgemini where he develops strategies to use AI to combat climate change and drive corporate net-zero initiatives.
In this forthright discussion, Vincent charts his path from supply chain engineering to his current position at the crossroads of data, IT and sustainability. Vincent stresses this is the ‘decade of action’ and highlights cutting edge AI applications enabling the turn from simulation to accountability in real-time. Addressing fears about AI, Vincent shows how it enables rather than replaces human expertise.
In that vein, Kimberly and Vincent have a frank discussion about whether AI for environmental good balances AI’s own appetite for energy. Vincent examines different aspects of the argument and shares recent research, facts and figures to shed light on the debate. He describes why AI is not a silver bullet, why AI is not always required and emerging research into making AI itself green. Vincent then provides a 3-step roadmap for corporate sustainability initiatives. Discussing emerging innovations, Vincent pragmatically points out that we are only addressing 3% of the green use cases that can be addressed with AI today. He rightfully suggests focusing there.
A transcript of this episode can be found here.
Our next episode features Baroness Beeban Kidron. She is the Founder and Chair of the 5Rights Foundation which is leading the fight to protect children’s rights and well-being in the digital realm. Subscribe to Pondering AI now so you don’t miss it.
David Ryan Polgar is the Founder of All Tech is Human. He is a leading tech ethicist, an advocate for human-centric technology, and advisor on improving social media and crafting a better digital future.
In this timely discussion, David traces his not-so-unlikely path from practicing law to being a standard bearer for the responsible technology movement. He artfully illustrates the many ways technology is altering the human experience and makes the case for “no application without representation”.
Arguing that many of AI’s misguided foibles stem from a lack of imagination, David shows how all paths to responsible AI start with diversity. Kimberly and David debunk the myth of the ethical superhero but agree there may be a need for ethical unicorns. David expounds on the need for expansive education, why non-traditional career paths will become traditional and the benefits of thinking differently. Acknowledging the complex, nuanced problems ahead, David advocates for space to air constructive, critical, and, yes, contrarian points of view. While disavowing 80s sitcoms, David celebrates youth intuition, bemoans the blame game, prioritizes progress over problem statements, and leans into our inevitable mistakes. Finally, David invokes a future in which responsible tech is so in vogue it becomes altogether unremarkable.
A transcript of this episode can be found here.
Our next episode features Vincent de Montalivet, leader of Capgemini’s global AI Sustainability program. Vincent will help us explore the yin and yang of AI’s relationship with the environment. Subscribe now to Pondering AI so you don’t miss it.
Dr. Valérie Morignat PhD is the CEO of Intelligent Story and a leading advisor on the creative economy. She is a true polymath working at the intersection of art, culture, and technology.
In this perceptive discussion, Valérie illustrates how cultural legacies inform technology and innovation today. Tracing a path from storytelling in caves to modern Sci-Fi she proves that everything new takes (a lot of) time. Far from theoretical, Valérie shows how this philosophical understanding helps business innovators navigate the current AI landscape.
Discussing the evolution of VR/AR, Valérie highlights the existential quandary created by our increasingly fragmented digital identities. Kimberly and Valérie discuss the pillars of responsible innovation and the amplification challenges AI creates. Valérie shares the power of AI to teach us about ourselves and increase human learning, creativity, and autonomy. Assuming, of course, we don’t encode ancient, spurious classification schemes or aggravate negative behaviors. She also describes our quest for authenticity and flipping the script to search for the real in the virtual.
Finally, Valérie sketches a roadmap for success including executive education and incremental adoption to create trust and change our embedded mental models.
A transcript of this episode can be found here.
Our next episode features David Ryan Polgar, founder of All Tech is Human. David is a leading tech ethicist and responsible technology advocate who is well-known for his work on improving social media. Subscribe now so you don’t miss it.
Yonah Welker is a technology innovator, influencer, and advocate for diversity and zero exclusion in AI. They are at the forefront of policies and applications for adaptive, assistive, and social AI.
In this illuminating discussion, Yonah traces their personal journey from isolation to advocacy through technology. They are passionate about the future of AI-enabled education, healthcare, and civics. Yet caution that our current approach to inclusion is not, in fact, inclusive. While evaluating mechanisms for accountability, Yonah shares lessons learned from the European Commission’s diverse approach to technology evaluation.
Yonah has an expansive view of how AI can “change everything” for those who experience life differently – whether they are autistic, neurodiverse, disabled or dyslexic. Kimberly and Yonah discuss how AI is expanding the borders of the classroom and workplace today. And how these solutions can inadvertently reinforce existing barriers if not mindfully applied. This leads naturally to the need for broad community collaboration and human involvement beyond traditional corporate boundaries.
Yonah highlights our responsibilities as digital citizens and the critical debate over digital ownership. Finally, Yonah emphasizes that we are all, at our core, activists who can influence the trajectory of AI.
A transcript of this episode can be found here.
Our next episode features Dr. Valérie Morignat PhD. Valérie is the CEO of Intelligent Story and a leading advisor on the creative economy who works at the intersection of art and AI. Subscribe now so you don’t miss it.
Dr. Eric Perakslis, PhD is the Chief Science and Digital Officer at the Duke Clinical Research Institute.
In this incisive discussion, Eric exposes the curious nature of healthcare data. He proposes treating data like a digital specimen: one that requires clear consent and protection against misuse. Expanding our view beyond the doctor’s office, Eric shows why adverse effects from data misuse can be much harder to cure than a rash. As well as our innate human tendency to focus on technology’s potential while overlooking patient vulnerabilities.
While discussing current data protections, Eric lays the foundation for a shift from privacy toward non-discrimination. Along the way, Kimberly and Eric discuss the many ways anonymous data can compromise patient privacy and the research it underpins. In doing so, a critical loophole in existing institutional review boards (IRB) and regulatory safeguards is exposed. An avid data advocate, Eric adroitly argues that proper patient and data protection will accelerate innovation and life-saving research. Finally, Eric makes a case for doing the hard things first and why the greatest research opportunities are rooted in equity.
A transcript of this episode can be found here.
Our next episode features Yonah Welker. They are a ‘tech explorer’ and leading voice regarding the need for diversity and zero exclusion in AI as well as the role of social AI. Subscribe now so you don’t miss it.
Dr. Ansgar Koene is the Global AI Ethics and Regulatory Leader at Ernst & Young (EY), a Sr. Research Fellow at the University of Nottingham and chair of the IEEE P7003 Standard for Algorithm Bias Considerations working group.
In this visionary discussion, Ansgar traces his path from robotics and computational social science to the ethics of data sharing and AI. Drawing from his wide-ranging research, Ansgar illustrates the need for true stakeholder representation; what diversity looks like in practice; and why context, critical thinking and common sense are required in AI.
Describing some of the subtlest yet most impactful dilemmas in AI, Ansgar highlights the natural tension between developing foresight to avoid harms and reacting to harms that have already occurred. Ansgar and Kimberly discuss emerging regulations and the link between power and accountability in AI. Ansgar advocates for broad AI literacy but cautions against setting citizens and users up with unrealistic expectations. Lastly, Ansgar muses about the future and why the biggest challenges created by AI might not be obvious today.
A full transcript of this episode can be found here.
Thank you for joining us for Season 1 of Pondering AI. Join us next season as we ponder the ways in which AI continues to elevate and challenge our humanity. Subscribe to Pondering AI now so you don’t miss it.
Lama Nachman is an Intel fellow and the director of Intel’s Human & AI Systems Research Lab. She also led Intel’s Responsible AI program. Lama’s team researches how AI can be applied to deliver contextually appropriate experiences that increase accessibility and amplify human potential.
In this inspirational discussion, Lama exposes the need for equity in AI, demonstrates the difficulty in empowering authentic human interaction, and explains why ‘Wizard of Oz’ approaches, as well as a willingness to go back to the drawing board, are critical.
Through the lens of her work in early childhood education to manufacturing and assistive technologies, Lama deftly illustrates the ethical dilemmas that arise with any AI application - no matter how well-meaning. Kimberly and Lama discuss why perfectionism is the enemy of progress and the need to design for uncertainty in AI. Speaking to her quest to give people suffering from ALS back their voice, Lama stresses how designing for authenticity over expediency is critical to unlock the human experience.
While pondering the many ethical conundrums that keep her up at night, Lama shows how an expansive, multi-disciplinary approach is critical to mitigate harm. And why cooperation between humans and AI maximizes the potential of both.
A full transcript of this episode can be found here.
Our final episode this season features Dr. Ansgar Koene. Ansgar is the Global AI Ethics and Regulatory Leader at EY and a Sr. Research Fellow who specializes in social media, data ethics and AI regulation. Subscribe now to Pondering AI so you don’t miss him.
Shalini Kantayya is a storyteller, social activist, and filmmaker who explores challenging social topics with empathy and humor. Shalini’s film Coded Bias debunks the myth that AI algorithms are objective by nature.
In this thought-provoking discussion, Shalini illustrates why film is a powerful medium for social change (hint: it’s about empathy), shares her belief that humans – not machines – must reinvent the future, and shows how inclusion and a focus on the human experience are critical to get AI right.
Shalini artfully traces the inspiration for Coded Bias and the danger in ceding human autonomy to any unintelligent system. Kimberly and Shalini discuss why good intent and a sole focus on fairness and bias are not enough when considering AI’s future. Highlighting the work of researchers such as Dr. Timnit Gebru and Joy Buolamwini, Shalini makes the case for inclusion in AI and shares a proven recipe for moving the dial on ethical AI. Finally, Shalini speaks to the need for empathy in all things – including toward our innate human propensity for bias. And how storytelling keeps the human experience front-and-center, allowing us to cross boundaries and open hearts and minds to a different point of view.
A full transcript of this episode can be found here.
Our next episode features Lama Nachman. Lama leads Intel’s Human & AI Systems Research Lab where she directs some of the most impactful work - such as giving people back their voice - in applied AI today. Subscribe now to Pondering AI so you don’t miss her.
Teemu Roos is the lead instructor of the Elements of AI online course which has a pivotal role in Finland's unique, inclusive AI strategy. Teemu is also a Professor of Computer Science at the University of Helsinki and leader of the AI Education programme at the Finnish Center for AI.
In this encouraging discussion, Teemu shares how an insatiable appetite for discovery led to a career as a ML researcher and educator. His excitement about projects ranging from astrophysics to neonatal brain development highlights AI’s endless potential and the importance of imagination and curiosity.
Teemu deftly explains why homogeneity makes doing good AI hard. He enthusiastically demonstrates how collaboration between data scientists, experts and laypersons exposes otherwise hidden opportunities. Kimberly and Teemu discuss the need for broad citizen engagement in AI and why the target audience for Elements of AI is “everyone who isn’t interested in AI”. And why we must focus on ethics and privacy now. With humor and optimism, Teemu helps us envision a future where everyone is informed, passionate and actively engaged in AI.
A full transcript of this episode can be found here.
Our next episode features Shalini Kantayya. Shalini is a filmmaker, activist, and self-proclaimed sci-fi fanatic. Her documentary Coded Bias exposes the biases and inequalities that can lurk within AI algorithms. Subscribe to Pondering AI now so you don’t miss her.
Beena Ammanath is the Executive Director of Deloitte’s AI Institute and leads their Trustworthy AI practice. She is a seasoned executive with global cross-industry experience and has been a board member and advisor to numerous tech startups. Beena is also the founder of the non-profit Humans For AI.
In this insightful discussion, Beena traces AI ethics from click-bait to operational reality. She explores the interplay between R&D, value creation and ethics and why expecting – and adapting to – the unexpected is key to trustworthy AI.
Using practical examples, Beena illustrates why AI ethics go beyond fairness and bias and why principles do, in fact, matter. Kimberly and Beena discuss how AI challenges traditional views of privacy and how companies can make ethics real. Beena provides guidance on leveraging ethical frameworks and why ethical evaluations are not one-size-fits-all or once-and-done. Finally, Beena shares her hope that lessons learned from AI will inform adoption of technologies such as AR/VR and quantum computing.
A full transcript of this episode can be found here.
Our next episode features Teemu Roos. Teemu is the lead instructor of the Elements of AI online course that has a pivotal role in Finland's unique, inclusive AI strategy, with over 650,000 participants to date. Teemu is also a Professor of Computer Science at the University of Helsinki and leader of the AI Education programme at the Finnish Center for AI. His research focuses on future applications of machine learning. Subscribe to Pondering AI now so you don’t miss him.
Renée Cummings is a criminologist, criminal psychologist, AI ethics evangelist and data activist in residence at the University of Virginia.
In this compelling discussion, Renée shares her journey from journalism to the judiciary and into AI. She articulates the power of perspective, why intersectionality and imagination are key to AI’s future, and the extraordinary good we can accomplish with AI in all domains - including policing. If, that is, we vigilantly guard against creating a future modeled only on the past.
Renée is comfortable being uncomfortable and believes this is vital when developing AI systems. Kimberly and Renée discuss the need for balance in solving the thorniest AI dilemmas. Technology or thinking? Risk- or right-based assessment? Debiasing data or the mind? Social sciences or STEM? Renée broadens our understanding of why diverse tactics produce better AI. And why authenticity and the courage to admit when we get it wrong (because we will) will create an AI legacy we can all be proud of.
A full transcript of this episode can be found here.
Our next episode will feature Beena Ammanath, Executive Director of Deloitte’s Global AI Institute and founder of the non-profit Humans for AI. Subscribe to Pondering AI now so you don’t miss it.
Tess Posner is an educator, social entrepreneur, CEO of AI-4-All and an avid advocate for diversity, inclusion and equity in the tech economy.
In this inspiring and insightful discussion, Tess shares her mission to make technology and education accessible to all, the inspiring work being done by rising student leaders in the AI-4-All Changemaker community, eye-opening statistics on the state of diversity in AI, research on bias in today’s AI systems, and the importance of not letting cynicism rule the day.
Tess’s passion is infectious as she explains why AI literacy and education cultivate future leaders, not just future data scientists. Kimberly and Tess talk about the hard but necessary work of creating diverse, inclusive cultures and why the benefits go far beyond positive optics. As well as why viewing technology as a silver bullet is fraught and the importance of unlocking human potential. Finally, Tess identifies tangible actions individuals, organizations, and communities can take today to ensure everyone benefits from AI tomorrow.
A full transcript of this episode can be found here.
Our next episode features Renée Cummings: a criminologist, criminal psychologist and AI ethics evangelist who is passionate about keeping the human experience at the center of AI. Subscribe to Pondering AI now so you don’t miss it.
Michael Kanaan is the author of the best-selling book T-Minus AI and the former chairperson of AI for the U.S. Air Force, Headquarters Pentagon.
In this far-reaching discussion, Michael provides perspectives on the peril of anthropomorphizing AI and how differentiating between intelligence and consciousness creates clarity. He shares his own reckoning with humility while writing T-Minus AI, popular misconceptions about AI, where we can go awry in addressing – or not addressing – AI’s inherent dualities, pros and cons of the technology’s ready availability, and why unflinching due diligence is critical to deploying AI safely, ethically, and responsibly.
After a brief diversion into the perils of technology that is too responsive to our whims (ahem, social media), Kimberly and Michael discuss the importance of bridging the digital divide so everyone can contribute to and benefit from AI. Michael also makes the case for how AI may have the greatest impact on subject matter experts and decision makers and why explainability is overrated. And, finally, why AI’s future will be determined not by data scientists but by artists, sociologists, teachers and more.
A transcript of this episode can be found here.
Our next episode will feature Tess Posner: an educator, social entrepreneur, and CEO of AI-4-All. Subscribe to Pondering AI now so you don’t miss it.