111 episodes • Length: 35 min • Weekly: Thursday
Welcome to the Regulating AI: Innovate Responsibly podcast with host and AI regulation expert Sanjay Puri. Sanjay is a pivotal leader at the intersection of technology, policy and entrepreneurship and explores the intricate landscape of artificial intelligence governance on this podcast.
You can expect thought-provoking conversations with global leaders as they tackle the challenge of regulating AI without stifling innovation. With diverse perspectives from industry giants, government officials and civil liberty proponents, each episode explores key questions and actionable steps for creating a balanced AI-driven world.
Don’t miss this essential guide to the future of AI governance, with a fresh episode available every week!
The podcast Regulating AI: Innovate Responsibly is created by Sanjay Puri. The podcast and artwork on this page are embedded using the public podcast feed (RSS).
📢 Listen Now: A deep dive into the EU AI Act with Axel Voss, Member of the European Parliament (MEP), one of the key architects behind this groundbreaking legislation.
🔹 Key Topics Covered:
✅ The core principles of the EU AI Act and its enforcement
✅ How this regulation impacts global businesses and AI startups
✅ The EU’s approach to AI liability, ethics, and innovation
✅ Comparing the EU AI Act with US and China’s AI policies
✅ Challenges of implementing AI laws across industries
📌 Don't miss this insightful conversation with one of Europe’s leading voices on AI regulation and digital policy.
🔔 Subscribe to RegulatingAI for more expert conversations on AI governance.
Resources Mentioned:
https://www.linkedin.com/in/axel-voss-a1744969/
📌 In This Episode: In a world where AI is transforming industries, how do we ensure ethical AI governance? In this episode of RegulatingAI, Sanjay Puri sits down with JoAnn Stonier, EVP & Fellow of Data and AI at Mastercard, to discuss:
✔ The evolving role of data governance and privacy in AI
✔ How companies like Mastercard are implementing ethical AI frameworks
✔ The challenges of AI regulation and global compliance
✔ The balance between AI innovation and consumer trust
Why Listen?
JoAnn brings decades of expertise in AI ethics, financial services, and regulatory compliance.
Whether you're an AI researcher, policymaker, or business leader, this conversation provides actionable insights on responsible AI development.
Resources Mentioned: https://www.linkedin.com/in/joann-stonier-5540b86/
In this thought-provoking conversation, Congressman Ted Lieu explores the friction between open innovation and responsible oversight in AI. From open-source models to the rise of sovereign AI, we dive deep into the geopolitics of algorithms.
🔹 Open-source AI: A catalyst for innovation or a national security risk?
🔹 DeepSeek, LLaMA, and China's AI momentum
🔹 Export controls, chip access, and tech diplomacy
🔹 The myth and reality of Artificial General Intelligence (AGI)
🔹 Why U.S. policymakers must balance global collaboration with domestic resilience
Explore the intersection of regulation, innovation, and international power dynamics—straight from Capitol Hill.
Resources Mentioned: https://lieu.house.gov/ https://x.com/RepTedLieu
Artificial intelligence is transforming industries, societies, and daily life—but who ensures it remains ethical? In this episode of the RegulatingAI Podcast, host Sanjay Puri welcomes Dr. Emmanuel R. Goffi, AI ethicist and professor at the Paris Institute of Digital Technology, to discuss:
🔹 The role of AI ethics in shaping policy and regulation 🔹 Key concerns around AI bias, fairness, and transparency 🔹 The global impact of AI on human rights and democratic values 🔹 How policymakers and tech leaders can collaborate for responsible AI development
📢 Join the conversation and shape the future of AI ethics!
Resources Mentioned: https://www.linkedin.com/in/emmanuelgoffi/
Join us on the RegulatingAI Podcast as we welcome Professor Antonio Krüger, CEO of the German Research Center for Artificial Intelligence (DFKI) and Professor of Computer Science at Saarland University, to discuss the evolving relationship between artificial intelligence and human-computer interaction.
🔹 Key Takeaways:
🎧 Watch now and explore AI's next frontier!
Resources Mentioned:
https://www.linkedin.com/in/antonio-kr%C3%BCger-3202b46/?originalSubdomain=de
💡 In this episode of the RegulatingAI Podcast, host Sanjay Puri talks with Dr. Ami B. Bhatt, Chief Innovation Officer at the American College of Cardiology, to discuss how AI is revolutionizing cardiovascular care.
🔍 Key Topics Covered:
👉 Watch now and discover how AI is redefining cardiovascular care!
Resources Mentioned:
AI is reshaping the world, but how do we regulate it without stifling innovation? Congressman Nick Begich joins RegulatingAI Podcast host Sanjay Puri to discuss:
🎧 Watch now to gain insights into the future of AI governance!
📌 Subscribe to RegulatingAI Podcast for more discussions with global AI leaders!
Resources Mentioned:
https://begich.house.gov/about
⏱️ Timestamps:
00:00 - Podcast Episode Highlights
01:45 - Personal Journey into Politics
05:10 - The Role of Technology in Society
09:25 - AI and National Security
13:00 - Energy Independence and Policy
17:20 - Education and Workforce of the Future
21:15 - Federal Overreach and States’ Rights
25:40 - Economic Policy and Small Business
29:50 - U.S. Global Positioning and Competition
34:30 - Audience Q&A and Reflections
38:45 - Closing Remarks
👉 Listen Now: How can AI be regulated to protect civil rights? In this episode, Sanjay Puri sits down with Koustubh "K.J." Bagchi, Vice President of The Leadership Conference's Center for Civil Rights and Technology, to explore the intersection of AI and civil liberties.
🔍 In this episode:
📢 Don't miss this thought-provoking conversation on how to make AI more ethical and equitable.
Resources Mentioned:
Artificial Intelligence is evolving at an unprecedented pace—can policies and regulations keep up? In this episode of the RegulatingAI Podcast, Congressman Mike Kennedy shares his perspective on:
🔴 Watch now for a policymaker’s take on the future of AI governance!
Resources Mentioned:
⏱️ Timestamps:
00:00 – Podcast Episode Highlights
02:00 – Congressman Mike’s Background & Mission
04:30 – Bridging the Urban-Rural Digital Divide
07:15 – AI in Education & Skilling
10:00 – Congressman Mike Kennedy Foundation & Youth Empowerment
13:30 – The Role of AI in Public Service Delivery
16:45 – Policy Recommendations for AI & Innovation
20:00 – Women & AI: Breaking Barriers
23:15 – Global AI Standards & India’s Role
26:00 – Final Message: Youth Are the Future
🌍 How can AI drive Africa’s digital future while maintaining fairness and accountability?
In this insightful episode of RegulatingAI, Dr. Shikoh Gitau, CEO of Qhala, sits down with Sanjay Puri to discuss Africa's growing influence in the AI sector.
Highlights:
✔️ Why African voices must shape global AI regulations
✔️ The challenges of developing ethical AI frameworks
✔️ Qhala’s innovative approach to AI development and deployment
👀 Don't miss this conversation about the future of AI in Africa!
Resources Mentioned:
https://www.linkedin.com/in/Shikohh/
⏱️ Timestamps:
00:00 – Podcast Episode Highlights
02:00 – Dr. Gitau’s Journey in AI & Data Science
05:00 – AI’s Potential for Emerging Markets
08:30 – Barriers to AI Adoption in the Global South
12:00 – Ethical AI & Data Sovereignty
15:30 – The Role of Governments & Policymakers
18:45 – Innovation vs. Regulation: Striking the Balance
22:00 – AI in Healthcare & Public Services
26:30 – AI & Financial Inclusion
30:00 – The Future of AI in Africa
34:00 – Final Thoughts & Takeaways
🎙 How AI Will Shape Public Policy and National Security – A Conversation with Congressman Jake Auchincloss
Join host Sanjay Puri in this insightful episode of the RegulatingAI Podcast as he sits down with Congressman Jake Auchincloss to discuss the intersection of AI, public policy, and national security. Discover why AI regulation matters and how the US can stay ahead in the global AI race.
✅ Key topics discussed:
👉 Watch Now: Don’t miss this important discussion on AI policy and leadership!
Resources Mentioned:
https://auchincloss.house.gov/
⏱️ Timestamps:
00:00 – Podcast Episode Highlights
01:30 – Why Use AI in Congress?
03:10 – Industry-Specific Regulation vs. Comprehensive Laws
06:00 – Critique of the EU AI Act
08:30 – Outcomes-Based Regulation Explained
12:00 – Gaps in Current U.S. Law
14:40 – Democratizing Access to AI
17:00 – Reforming Section 230
20:30 – Deepfake Legislation: The Intimate Privacy Protection Act
23:00 – The Three Pillars of AI Innovation
26:30 – The Rise of ‘Acquihires’ & Antitrust Loopholes
30:00 – National Security & the China Challenge
33:45 – AI & Energy: The Nuclear Opportunity
36:30 – AI + Robotics = Future Defense
40:00 – Export Controls Aren’t Enough
43:00 – Rebuilding Global Trade Leadership
46:00 – AI Policy in the Next Congress
48:20 – The Deepfake Bill & Bipartisan Momentum
50:30 – Keeping Up with AI’s Pace
53:00 – Open Source vs. Proprietary AI
54:00 – Final Advice: Support Local News
🎙 RegulatingAI Podcast | Congressman Gabe Amo on Public Service and AI Regulation
In this episode of the RegulatingAI Podcast, host Sanjay Puri welcomes Congressman Gabe Amo to discuss his journey in public service and how it connects with AI regulation. Congressman Amo shares valuable insights on:
🎧 Watch now to learn how Congressman Amo is working to balance technological growth with ethical guidelines.
Resources Mentioned:
In this episode of the RegulatingAI Podcast, Sanjay Puri sits down with Congressman Ted Lieu to explore how bipartisan efforts are shaping the future of AI regulation. Congressman Lieu shares valuable insights on:
📺 Watch now and discover how AI policy decisions today will impact the future!
Resources Mentioned:
⏱️ Timestamps:
00:00 – Introduction
01:15 – Why AI Regulation is Urgent Now
03:30 – National Security & AI
06:10 – Balancing Innovation with Guardrails
08:30 – Who Should Be Regulated?
10:45 – The Role of Congress in a Fast-Moving Field
13:00 – The Need for a Federal AI Agency
15:20 – AI & Democratic Values
17:45 – Risks of Deepfakes & Misinformation
20:15 – Ensuring Equity & Preventing Bias
22:30 – The Role of Public Engagement in AI Governance
24:30 – Final Thoughts & A Call to Action
🌍 As AI reshapes the global landscape, how should governments respond? In this thought-provoking episode of Regulating AI, Sanjay Puri engages in a deep dive with Congressman David Schweikert on:
🚀 Stay ahead of the curve—hit play and join the conversation!
👍 Like, comment, and subscribe for more AI policy insights!
Resources Mentioned:
https://www.linkedin.com/in/david-schweikert-54ab0218/
In this episode of RegulatingAI, host Sanjay Puri sits down with Jonathan Horowitz, Legal Advisor at the International Committee of the Red Cross, to explore the complex legal landscape surrounding AI regulation in situations of armed conflict. Jonathan shares his deep insights into how AI is intersecting with international law and war.
Key Highlights:
👉 Watch now to uncover expert insights on the future of AI regulation!
Resources Mentioned:
https://www.linkedin.com/in/jonathan-horowitz-b78b6026/
ICRC Position Paper: Artificial intelligence and machine learning in armed conflict: A human-centred approach | International Review of the Red Cross: https://international-review.icrc.org/articles/ai-and-machine-learning-in-armed-conflict-a-human-centred-approach-913
What you need to know about artificial intelligence and armed conflict: https://www.icrc.org/en/document/what-you-need-know-about-artificial-intelligence-armed-conflict
Expert Consultation report – Artificial intelligence and Related Technologies in Military Decision-Making on the Use of Force in Armed conflicts: Current Developments and Potential Implications: https://shop.icrc.org/expert-consultation-report-artificial-intelligence-and-related-technologies-in-military-decision-making-on-the-use-of-force-in-armed-conflicts-current-developments-and-potential-implications-pdf-en.html
Decisions, Decisions, Decisions: computation and Artificial Intelligence in military decision-making: https://shop.icrc.org/decisions-decisions-decisions-computation-and-artificial-intelligence-in-military-decision-making-pdf-en.html
AI is shaping the future, but who ensures it remains responsible and ethical?
In this compelling conversation, Dr. Richard Benjamins shares insights on:
✔️ His work in AI ethics and policy advocacy
✔️ How companies like Telefonica approach responsible AI
✔️ The role of international regulatory bodies in AI governance
✔️ Key challenges in enforcing AI compliance across industries
🌍 As AI adoption accelerates, ensuring ethical oversight is more critical than ever. This episode provides essential insights for business leaders, policymakers, and AI enthusiasts.
📢 Tune in to gain expert perspectives on responsible AI!
Resources Mentioned:
https://www.linkedin.com/in/richard-benjamins/
⏱️ Timestamps:
00:00 - Podcast Episode Highlights
02:00 - Richard’s Journey into AI Ethics
05:15 - Defining Responsible AI
08:30 - Global AI Regulation: Differences & Challenges
12:50 - Need for International Collaboration in AI Regulation
17:20 - AI Ethics Boards & Their Role in Companies
22:10 - Who Should Implement AI Ethics in a Company?
26:40 - Addressing Bias & Privacy in AI Development
31:50 - AI’s Impact on Smaller Languages & Cultures
36:20 - The Monopoly Risk in AI
41:00 - Ethical Principles for AI Development
46:00 - Future-Proofing AI Regulation
50:10 - AI’s Disruptive Impact on Jobs & Workforce
55:20 - Final Advice for Policymakers & Researchers
58:00 - Closing Remarks
Artificial Intelligence is reshaping the landscape of higher education and research. In this episode of the RegulatingAI Podcast, Sanjay Puri sits down with Nicholas Dirks, President of the New York Academy of Sciences, to discuss the profound impact of AI on academia and scientific exploration.
🔍 Topics Covered:
📢 Listen now and discover how AI is influencing the future of learning and research!
Resources Mentioned:
https://www.nicholasbdirks.com/
https://www.linkedin.com/in/nicholas-dirks-84a1ab149/
📢 AI is advancing faster than ever—but can policy keep up?
In this episode of Regulating AI Podcast, US Congresswoman Suzan DelBene discusses the future of AI governance and how policymakers are shaping the landscape.
🔹 Key discussion points:
🎥 Join the conversation and stay ahead of AI regulations! Watch now!
Resources Mentioned:
https://www.linkedin.com/in/suzan-delbene-752a174/
⏱️ Timestamps:
00:00:00 - Podcast Episode Highlights
00:01:13 - Rep. DelBene’s Journey to Congress
00:03:20 - Why Privacy is the Foundation for AI Regulation
00:06:56 - Challenges in Passing U.S. Privacy Laws
00:09:02 - AI Bias & Discrimination Risks
00:11:09 - Congress’s Role in AI Guardrails
00:14:21 - Balancing Innovation & Regulation
00:16:56 - International AI Policy Leadership
00:19:04 - Centering Marginalized Communities
00:20:52 - Preventing AI Monopolies
00:24:30 - Workforce Readiness & Education
00:27:35 - Closing Thoughts
How do enterprises scale AI responsibly? Join us as Sanjay Puri sits down with Emre Kazim, Co-CEO of Holistic AI, to explore the critical role of AI governance in building trustworthy AI systems.
🔹 Key Topics Discussed:
✔️ Why AI governance is more than just compliance
✔️ How trust and accountability impact AI adoption
✔️ The biggest risks enterprises face with AI deployment
✔️ AI policy and governance models across different regions
✔️ How businesses can scale AI while maintaining safety and oversight
💡 Holistic AI’s Vision: Enabling enterprises to adopt AI with confidence through governance frameworks that align with business needs.
🔴 Watch now and gain insights into how governance can shape the future of AI!
🔔 Don’t forget to LIKE, SHARE, and SUBSCRIBE for more expert conversations on AI governance.
Resources Mentioned:
How do we balance AI innovation with responsible regulation? 🤖⚖️
Sanjay Puri, Chairman & Founder of Knowledge Networks, joined an expert panel at the Generative AI Summit in Washington, D.C., hosted by the AI Accelerator Institute, to discuss "Innovation vs. Regulation – Ethically Balancing Rapid Development with a Safety-First Approach."
In this engaging conversation, industry leaders Daniel Fenton, Zorina Alliata, and Zachary Hanif shared insights on:
✅ Why AI innovation and regulation must go hand in hand.
✅ How collaboration between policymakers, industry leaders, and AI practitioners is key to responsible AI.
✅ Why ethical compliance is not a roadblock but a catalyst for sustainable AI development.
A big thank you to the AI Accelerator Institute, our incredible panelists, and everyone who joined the discussion! Let’s keep pushing the conversation forward on responsible AI innovation. 🚀💡
🔔 Subscribe for more expert discussions on AI governance, ethics, and innovation!
🌍 How do you regulate AI without stifling innovation?
Many believe regulation slows down technological progress, but according to Lucilla Sioli, this is a false dilemma. The EU AI Act is designed to support both innovation and governance, ensuring that AI systems remain safe, reliable, and beneficial for all.
📢 Key Takeaways:
✅ Why regulation and innovation are not conflicting forces but complementary.
✅ How AI regulation creates trust, leading to broader adoption and investment.
✅ The role of AI sandboxes in allowing startups to test AI applications in a controlled environment.
✅ The AI Pact’s growing global interest—why even U.S. and Korean companies are voluntarily aligning with EU AI standards.
✅ What businesses can learn from the EU’s risk-based approach to AI regulation.
🔍 Lucilla explains how the EU’s structured, risk-based framework ensures AI development remains competitive while prioritizing safety.
Resources Mentioned:
https://www.linkedin.com/in/lucilla-sioli-b944392/
AI is shaping our future, but who ensures its ethical use? In this episode of Regulating AI, Sanjay Puri sits down with Prof. Emma Ruttkamp-Bloem, Chair of UNESCO’s World Commission on the Ethics of Scientific Knowledge and Technology.
🔹 Key insights:
~ The role of ethics in shaping AI policy and governance
~ How UNESCO’s AI recommendations are influencing global regulations
~ Balancing innovation with responsible AI development
~ The ethical dilemmas of AI in decision-making
Resources Mentioned:
https://www.linkedin.com/in/emma-ruttkamp-bloem-19400248/
⏱️ Timestamps:
00:00 - Podcast Highlights
02:21 - What inspired your journey into AI ethics?
05:13 - What is the significance of UNESCO's AI ethics recommendation?
11:10 - What are the key ethical challenges in AI development?
16:01 - How can the Global South's voice be amplified in AI policy?
19:24 - What are the ethical concerns of AI in military use?
22:06 - Why is data sovereignty important, and what role do data centres play?
27:29 - What insights have you gained on AI governance in Africa?
33:10 - Can online education help address training and connectivity in Africa?
39:27 - What role should international organisations play in AI governance?
42:08 - Will Africa develop its AI regulations or rely on others like the EU?
45:56 - How do you see human-technology relations evolving with AI?
47:07 - (Lightning Round)
50:45 - What gives you hope for AI governance, and what are your main concerns?
In this episode of the RegulatingAI Podcast, we have Prof. Ramayya Krishnan, Dean of Heinz College at Carnegie Mellon University and a distinguished voice in AI policy and governance.
With expertise in technology, public policy, and societal impact, Prof. Krishnan explores the delicate balance between AI innovation and regulation, the role of AI in shaping public policy, and the governance frameworks needed to ensure responsible AI deployment.
🎙️ Key Takeaways:
✅ Regulating AI Without Killing Innovation – How can policymakers create guardrails without slowing down progress?
✅ AI & Public Trust – Why ethical AI frameworks are essential for societal acceptance.
✅ Bridging the AI Policy Gap – How global collaboration can create effective AI governance models.
✅ The Role of Universities in AI Governance – How academia contributes to responsible AI development.
🔔 Watch now to gain exclusive insights from one of AI policy’s leading experts!
Resources Mentioned:
Listen to our latest episode of the RegulatingAI Podcast with Nobel Laureate Brian Schmidt as he discusses AI's regulatory landscape and the lessons science can teach us about responsible innovation.
In this episode, he gives his insights on:
🔹 The current state of AI regulation—why it’s lagging behind innovation.
🔹 The ethical dilemmas of self-regulation in tech.
🔹 Lessons from physics on managing disruptive technologies.
🔹 Australia’s regulatory stance on AI vs. global approaches.
🔹 The balance between innovation, accountability, and legal oversight.
Brian highlights the urgent need for AI governance that is both adaptive and forward-thinking, ensuring that technology serves humanity rather than outpacing our ability to regulate it.
Resources Mentioned:
https://en.wikipedia.org/wiki/Brian_Schmidt
At Big Data Expo Global, industry leaders came together to tackle one of the most pressing topics in AI today—Ethical Considerations in Gen AI and Data Science: Navigating Complex Terrain.
Sanjay Puri, Founder & Chairman of Knowledge Networks, joined an esteemed panel of experts to explore the challenges and responsibilities in building ethical AI systems. The discussion featured:
🔹 Shairil Yahya – Legal Compliance Technology & Solution Director, Philips
🔹 Emily Yang – Head of Human-Centred AI and Innovation, Standard Chartered
🔹 Larry Orimoloye – Principal Architect AI/ML - Field CTO, Snowflake
🔹 Sanjay Puri – Founder and Chairman, Knowledge Networks Group
🔹 Chandrashekhar Kachole – Chief Technology Officer
🔹 Saber Fallah – Professor of Safe AI and Autonomy, University of Surrey
How can we navigate the complexities of ethical AI while fostering innovation? Share your thoughts in the comments!
Watch the latest RegulatingAI Podcast from DeepFest 2025 featuring Mia Dand, Abir Habbal, Aadil Jaleel Choudhry, and Sanjay Puri (Moderator)! 🎙
In this insightful session, the panelists discuss the challenges and urgency of establishing responsible AI governance in a rapidly evolving landscape. Dive into a conversation on AI ethics, accountability, and regulation that highlights the need for thoughtful, global solutions.
🔍 The Panelists Shared Their Thoughts On:
✅ The role of ethical guidelines in shaping AI innovation
✅ How transparency and accountability are critical for AI's future
✅ Addressing biases and maintaining fairness in AI models
✅ Navigating the complex relationship between regulation and innovation
✅ Global alignment for ethical AI practices across industries
A huge thank you to Sanjay Puri (moderator), all our esteemed panelists, and the DeepFest community for organising this essential conversation. The dialogue around AI regulation continues to grow, and these insights are helping shape a more ethical and accountable AI future.
Governments worldwide are navigating the complexities of AI adoption, from policy development to ethical considerations. In this insightful panel discussion, Margarete Schramboeck (Former Minister of Economy of Austria, Board Member & Advisor, Aramco Digital), Lama Arabiat (Director of AI & Advanced Technologies, Ministry of Digital Economy and Entrepreneurship, Jordan), and Abdullah AlThawad (Takamol) share their expertise on how nations can create AI frameworks that align with national priorities while fostering innovation, data governance, and international collaboration.
Moderated by Sanjay Puri, Founder & Chairman, Knowledge Networks, this conversation highlights the key challenges and opportunities in government AI readiness.
🔹 What are the biggest hurdles in AI policy implementation?
🔹 How can governments balance innovation with responsible AI governance?
🔹 What role does international cooperation play in AI adoption?
In this episode of RegulatingAI Podcast, Dr. Dominique J. Monlezun—the world’s first triple-doctorate-trained physician, data scientist, and AI ethicist—shares his extraordinary journey from a small farming town to shaping global AI healthcare policies.
With a background spanning cardio-oncology, public health, and AI ethics, Dr. Monlezun brings a unique, multidisciplinary approach to healthcare innovation. He discusses how AI can bridge health disparities, why AI literacy is essential, and the ethical challenges of AI-driven medicine.
🎙️ Key Takeaways
✅ AI Isn’t Replacing Doctors – Doctors Who Use AI Are – Why upskilling in AI is crucial for healthcare professionals.
✅ Bridging the AI Adoption Gap – The divide between healthcare systems adopting AI and those falling behind.
✅ AI Ethics & Patient Trust – The role of AI in healthcare and ensuring responsible governance.
✅ Global Collaboration in AI – Why AI development must be inclusive, ethical, and internationally cooperative.
Watch the latest RegulatingAI Podcast at AI Big Data Expo, London featuring Shachar Schnap, Co-Founder & CEO at PVML! 🎙
He discusses how AI can navigate global compliance challenges, mitigate bias, and enhance data privacy with cutting-edge techniques. Don't miss this deep dive into the evolving AI landscape!
🔍 Key Topics:
✅ The role of differential privacy in AI compliance
✅ How retrieval-augmented generation (RAG) minimizes AI hallucinations
✅ The bias problem in AI models—can it ever be solved?
✅ The limitations of synthetic data in analysis and decision-making
✅ The rise of open-source AI models and their regulatory challenges
Listen to our latest episode of the RegulatingAI Podcast with Dr. Ahmed Serag, Professor, Founder, and Director of the AI Innovation Lab at Cornell University, as he discusses the future of AI in medicine.
Dr. Serag shares insights on:
✅ The challenges of using AI in healthcare—only 20% of global medical data is AI-ready.
✅ The role of synthetic data in medical AI—how it can protect privacy while accelerating research.
✅ Why AI is reshaping drug discovery and clinical trials, cutting timelines from years to months.
✅ The need for global standards in medical data privacy and AI-driven diagnostics.
✅ How AI is assisting, not replacing, doctors—and why physicians who embrace AI will lead the future.
Dr. Serag highlights how AI is revolutionizing medicine, from radiology to digital twins, and the critical role of collaboration between researchers, policymakers, and clinicians.
Listen to our latest episode of the RegulatingAI Podcast with Amr Metwally, Director of Innovation at Hamad Medical Corporation, as he discusses AI's transformative role in healthcare.
He shares insights on:
✅ How AI is reshaping medical diagnostics, from radiology to oncology.
✅ The role of AI in expanding healthcare access in underserved regions.
✅ The ethical and regulatory challenges of patient data privacy.
✅ Qatar’s approach to AI governance and healthcare innovation.
✅ Why AI won’t replace doctors—but doctors using AI will outperform those who don’t.
Amr highlights how AI is not just an add-on but a fundamental shift in how healthcare is delivered, urging leaders to embrace innovation while ensuring ethical safeguards.
Listen to our latest episode of the RegulatingAI Podcast with Areiel Wolanow, Founder & Managing Director of Finserv Experts, as he breaks down AI adoption, business transformation, and quantum computing.
He shares insights on:
✅ Why 90% of companies investing in AI won’t see ROI—and how to be in the winning 10%.
✅ The critical need for new business and operating models to fully leverage AI.
✅ How organizations can ensure AI governance, compliance, and accountability.
✅ Whether companies should hire a Chief AI Officer and what the role should actually entail.
✅ The intersection of AI and quantum computing—how it can supercharge machine learning and data security.
Areiel highlights why AI-driven transformation isn’t just about tech—it’s about strategic leadership, ethical frameworks, and a deep understanding of business impact.
Welcome to the Regulating AI podcast, coming to you live from the Big Data & TechX4 Conference in London (Feb 6, 2025)!
In this episode, we sit down with Dr. Mandy Crawford-Lee, CEO of the University Vocational Awards Council (UVAC), to discuss how AI is reshaping education, workforce development, and policy frameworks.
✅ How AI is transforming education through personalization
✅ The role of AI in workforce evolution & reskilling initiatives
✅ Policy & regulation challenges in AI adoption
✅ The digital divide—who benefits and who gets left behind?
✅ The future of AI in higher education & beyond
In this episode, Tim Cook, Founder of AIConfident, explores the Adoption of AI Across the Economy. He discusses how AI is revolutionizing various industries and what lies ahead for its integration into the global economy. Watch this insightful discussion on AI’s growing impact and the future of innovation.
🎙️ Key Takeaways:
✅ AI Integration at Scale – How businesses are leveraging AI to drive efficiency, innovation, and competitive advantage.
✅ Bridging the AI Adoption Gap – Addressing challenges in AI implementation across sectors and strategies for overcoming them.
🚨 China’s DeepSeek R1 AI model is raising big questions about global security and AI dominance. 🚨
In this eye-opening conversation on the RegulatingAI Podcast, Anja Manuel—foreign policy expert, advisor, and former diplomat—joins Sanjay Puri to discuss:
AI isn’t just about innovation—it’s about power. Watch the latest episode to stay informed on the future of AI governance and global competition.
👉 Subscribe to RegulatingAI for cutting-edge discussions on AI policy and strategy!
Listen to our latest episode of the RegulatingAI Podcast with Isa Mutlib, Founder of Portland AI, on the role of AI skills in the future of work.
She shares her insights on:
✅ How AI is disrupting the global workforce and what it means for policymakers, businesses, and employees.
✅ The speed of digital transformation and why it's different from previous technological revolutions.
✅ AI’s impact on developing countries—can AI be a great equalizer in global innovation?
✅ The need for open-source AI vs. concerns around security and control.
✅ Practical advice for professionals worried about AI replacing their jobs—how to upskill and stay ahead.
Isa highlights how AI presents both challenges and opportunities, urging leaders and individuals to embrace change rather than resist it.
Join Sanjay Puri, Founder & Chairman of Knowledge Networks, as he talks about the future of AI in education with P. Anand Rao at AI Big Data Global, Olympia, London.
In this episode on The RegulatingAI Podcast, we explore innovative approaches to preparing students for an AI-driven world through debate-based learning and AI pluralism.
Our guest, Prof Anand Rao, Professor of Communication & Digital Studies and Director, Center for AI and the Liberal Arts, University of Mary Washington, shares insights on:
🔹 Augmented Debate-Centered Instruction – A transformative model that uses debate to develop essential skills like critical thinking, collaboration, and research.
🔹 AI Pluralism – A novel approach promoting diversity in AI agents to improve transparency, contestability, and bias mitigation.
Discover how integrating these strategies can enhance education, foster critical thinking, and address key challenges in AI development.
In this episode of the Regulating AI Podcast, recorded LIVE at the AI Big Data Global Expo 2025, we’re joined by Ron Gafni, Co-founder & Chairman of G-Foresight. He discusses the AI revolution in management tools and how AI is evolving in Israel—from its rapid advancements to the crucial need for regulation and governance.
Ron shares insights on:
✅ The AI revolution and its impact on management and decision-making
✅ Israel’s AI leadership in defense, agriculture, and national security
✅ The role of AI regulations, privacy, and ethical considerations
✅ How companies should educate their boards and executives on AI adoption
Join us in this insightful episode of the RegulatingAI Podcast as we sit down with Francesca Rossi, IBM Fellow and Global Leader for AI Ethics. Based at IBM's T.J. Watson Research Lab in New York, Francesca shares her expertise on cutting-edge AI topics, including constraint reasoning, multi-agent systems, neuro-symbolic AI, and value alignment. With over 220 published works and leadership roles in renowned AI organizations like AAAI and EurAI, Francesca provides a thought-provoking perspective on ethical AI, governance, and the future of artificial intelligence. Don't miss this fascinating conversation!
Resources:
https://www.linkedin.com/in/francesca-rossi-34b8b95/
About Regulating AI:
RegulatingAI is a dedicated non-profit organization designed for experts, mentors, and users of artificial intelligence (AI) with a keen interest in exploring the intersection of AI and regulation. We aim to unite individuals with diverse expertise and backgrounds, fostering collaboration to collectively advance the understanding and implementation of AI regulations.
About your host
Sanjay Puri is a recognized authority on US-India relations. He serves as the Chairman of the US-India Political Action Committee (USINPAC), a national, bipartisan political action committee representing Indian-Americans. He is also the founder of the Alliance for US India Business (AUSIB), an organization dedicated to strengthening economic ties between the US and India. He is also a successful technology entrepreneur, mentor, and investor.
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #regulatingaipodcast #innovationinAIRegulation
Streaming On:
Apple Podcast: https://podcasts.apple.com/us/podcast/regulating-ai-innovate-responsibly/id1714410167
Spotify: https://open.spotify.com/show/3ZkXYPINugnegkORcBCrYo?si=a7ad672e8e194bea
YouTube: https://www.youtube.com/@The_Regulating_AI_Podcast
Join our fastest-growing AI Community:
Instagram: https://www.instagram.com/regulating_ai/
Twitter: https://twitter.com/RegulatingAI
LinkedIn: https://www.linkedin.com/company/regulating-ai
Facebook: https://www.facebook.com/RegulatingAI
Read Our Blogs, News & Updates:https://regulatingai.org/
Join the Conversation:
Leave your thoughts and questions in the comments below. We'd love to hear from you!
In this episode on RegulatingAI, Former FCC Chairman Tom Wheeler unpacks the complexities of AI regulation and governance. Drawing from his vast experience in telecommunications, Wheeler emphasizes the critical need for balanced oversight that fosters innovation without compromising fairness or safety.
Resources:
https://www.brookings.edu/people/tom-wheeler/
https://www.amazon.com/dp/B0C4FZ1QT4
In this episode on RegulatingAI, Patrik Gayer, Head of Global Affairs at Silo AI, discusses the challenges and opportunities in regulating artificial intelligence. With his expertise in AI policy, Patrik provides a deep dive into creating fair, practical legislation that fosters innovation while addressing global concerns.
Resources:
https://hir.harvard.edu/the-eus-chance-to-lead-forging-a-global-regulatory-framework-for-artificial-intelligence-amidst-exponential-progress/
https://www.linkedin.com/posts/harvard-ksr_volume-xxiii-activity-7147390411131482113-x16t
https://www.linkedin.com/in/patrikgayer/
In this episode of RegulatingAI Podcast, we’re joined by Congressman Scott Franklin from Florida’s 18th Congressional District, a member of the House AI Task Force, and a strong advocate for responsible AI regulation. Drawing on his unique background in the Navy, insurance, and agriculture, Rep. Franklin provides valuable insights into Congress’s role in the ever-evolving world of AI governance.
Resources:
https://franklin.house.gov/about
https://en.wikipedia.org/wiki/Scott_Franklin_(politician)
https://www.linkedin.com/in/cscottfranklin/
https://www.congress.gov/member/c-franklin/F000472
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Join us for an insightful discussion on the intersection of AI and Green Technology as drivers of global progress and sustainable development. This roundtable features highlights from the Imperial Springs International Forum 2024, hosted by Club de Madrid, where over 130 leaders from 40+ countries gathered to explore the future of international cooperation and multilateralism.
Artificial Intelligence has immense potential, but it also carries risks — particularly when it comes to civil liberties. In this episode, I speak with Faiza Patel, Senior Director of the Liberty and National Security Program at the Brennan Center for Justice at NYU Law. Together, we explore how AI can be regulated to ensure fairness, accountability and civil rights, especially in the context of national security and law enforcement.
Key Takeaways:
(01:53) AI in national security, law enforcement and immigration contexts.
(05:00) The dangers of AI in government decisions, from immigration to surveillance.
(09:09) Long-standing issues with AI, including biased training data in facial recognition.
(12:55) The complexities of regulating AI-generated media, such as deepfakes, while protecting free speech.
(17:00) The need for transparency in AI systems and the importance of scrutinizing outputs.
(20:25) How marginalized communities are disproportionately affected by AI.
(23:30) Companies developing AI must embed civil rights principles into their products.
(26:45) Creating unbiased AI systems is a challenge, but necessary to avoid harm.
(29:58) The need for a dedicated regulatory body to oversee AI, especially in national security.
(34:00) AI’s potential impact on jobs and why policymakers need to prepare for labor disruption.
Resources Mentioned:
https://www.linkedin.com/in/faiza-patel-5a042816/
https://www.brennancenter.org/
President Biden’s Executive Order on AI -
https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
https://www.whitehouse.gov/ostp/ai-bill-of-rights/
Brennan Center - Faiza Patel -
https://www.brennancenter.org/experts/faiza-patel
National Security Carve-Outs Undermine AI Regulations -
https://www.brennancenter.org/our-work/analysis-opinion/national-security-carve-outs-undermine-ai-regulations
Senate AI Hearings Highlight Increased Need for Regulation -
https://www.brennancenter.org/our-work/analysis-opinion/senate-ai-hearings-highlight-increased-need-regulation
The Perils and Promise of AI Regulation -
https://www.brennancenter.org/our-work/analysis-opinion/perils-and-promise-ai-regulation
Advances in AI Increase Risks of Government Social Media Monitoring -
https://www.brennancenter.org/our-work/analysis-opinion/advances-ai-increase-risks-government-social-media-monitoring
In this episode of the RegulatingAI podcast, Sanjay Puri hosts an insightful discussion with Mr. Boris Tadić, former President of Serbia, to explore the profound implications of artificial intelligence (AI) on governance, society, and global relations at Imperial Springs International Forum 2024, Madrid, Spain. From its potential to revolutionise education and development to concerns about its effects on democracy and societal values, this conversation delves deep into the opportunities and challenges AI presents.
Resources:
https://x.com/boristadic58
https://clubmadrid.org/who/members/tadic-boris/
https://en.wikipedia.org/wiki/Boris_Tadi%C4%87
In this Episode, Tunisia’s Former Prime Minister, Mehdi Jomaa, shares his vision for the country’s potential to emerge as a leading technology hub in the Arab world and the Global South. With its strategic location bridging Africa, Europe, and the Middle East, Tunisia is positioned to become a key player in the global technological revolution, particularly in artificial intelligence.
Resources:
https://www.linkedin.com/in/mehdi-jomaa-60a8333b/
https://x.com/Mehdi_Jomaa
https://www.facebook.com/M.mehdi.jomaa
https://clubmadrid.org/who/members/mehdi-jomaa/
The rapid rise of AI brings both extraordinary potential and profound risks, demanding urgent global collaboration to ensure its safe development. In this episode, I’m joined by Professor S. Alex Yang, Professor of Management Science and Operations at the London Business School, to explore the complexities of regulating AI, the challenges of international collaboration, and the potential existential risks posed by AI development. With his extensive experience in AI and risk management, Professor Yang provides unique insights into the future of AI governance.
Key Takeaways:
(02:12) Professor Yang’s early AI experiences and his value chain research.
(06:57) The biggest risks from AI, including existential risk and job displacement.
(11:42) The debate on AI nationalism and the preservation of cultural heritage.
(16:28) How China’s chip-making capacity could reshape AI competition.
(21:13) Open-source versus closed-source AI models and the risks involved.
(25:58) Why monitoring monopolies in AI is crucial for innovation.
(30:44) How content creators can benefit from AI and how copyright law is evolving.
(35:29) The importance of fair use standards for AI-generated content.
(40:14) Data aggregation and its future role in AI development.
(45:00) Professor Yang’s final thoughts on the need for agile, principle-based AI regulation.
Resources Mentioned:
https://www.linkedin.com/in/songayang/
London Business School | LinkedIn -
https://www.linkedin.com/school/london-business-school/
London Business School | Website -
https://www.london.edu/
https://worldcoin.org/
The Case for Regulating Generative AI Through Common Law -
https://www.project-syndicate.org/commentary/european-union-ai-act-could-impede-innovation-by-s-alex-yang-and-angela-huyue-zhang-2024-02
Generative AI and Copyright: A Dynamic Perspective -
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233
AI presents endless opportunities, but its implications for privacy and governance are multifaceted. On this episode, I’m joined by Professor Norman Sadeh, a Computer Science Professor at Carnegie Mellon University, and Co-Founder and Co-Director of the Privacy Engineering Program. With years of experience in AI and privacy, he offers valuable insights into the complexities of AI governance, the evolving landscape of data privacy and why a multidisciplinary approach is vital for creating effective and ethical AI policies.
Key Takeaways:
(02:09) How Professor Sadeh’s work in AI and privacy began.
(05:30) The role of privacy engineers in AI governance.
(08:45) Why AI governance must integrate with existing company structures.
(12:10) The challenges of data ownership and consent in AI applications.
(15:20) Privacy implications of foundational models in AI.
(18:30) The limitations of current regulations like GDPR in addressing AI concerns.
(22:00) How user expectations shape the principles of AI governance.
(26:15) The growing debate around the need for specialized AI regulations.
(30:40) The role of transparency in AI governance for building trust.
(35:50) The potential impact of open-source AI models on security and privacy.
Resources Mentioned:
https://www.linkedin.com/in/normansadeh/
Carnegie Mellon University | LinkedIn -
https://www.linkedin.com/school/carnegie-mellon-university/
Carnegie Mellon University | Website -
https://www.cmu.edu/
https://artificialintelligenceact.eu/
General Data Protection Regulation (GDPR) -
https://gdpr-info.eu/
In this inspiring episode, we explore how AI is not only transforming industries but also reshaping education and the future of work. Learn how diversity, AI skills, and youth empowerment are critical in building an ethical, AI-driven world.
Our guest, Elena Sinel, FRSA and Founder of Teens in AI, shares her mission to champion diversity and equip young people with the skills they need to thrive in the AI era. She discusses the importance of empowering youth to lead the way in creating ethical AI solutions for a better future.
In this thought-provoking episode, we explore the crucial role governments play in democratizing AI, ensuring its benefits reach all sectors of society. We discuss the ethical and governance challenges involved in shaping AI policy, as well as the philosophical underpinnings that drive this evolving landscape.
Our distinguished guest, Ted Lechterman, Holder of the UNESCO Chair in AI Ethics & Governance at IE University, provides critical perspectives on how governments can lead the way in creating inclusive, ethical AI policies that align with democratic values.
In this episode, we dive into the complexities of AI compliance and the challenges organizations face in navigating the evolving regulatory landscape, especially with the European AI Act. Learn how businesses can stay compliant while driving innovation in AI development.
Our guest, Sean Musch, Founder and CEO of AI & Partners, shares his expertise on the European AI Act and other regulatory frameworks shaping the future of AI. Discover practical strategies for navigating compliance while fostering responsible AI practices.
In this episode, we explore how geospatial data is being leveraged to improve crisis response efforts through the integration of AI. Learn about the groundbreaking work of the Humanitarian OpenStreetMap Team in mapping vulnerable areas and using AI to support humanitarian missions in real-time.
Our guest, Paul Uithol, Director of Humanitarian Data at the Humanitarian OpenStreetMap Team, shares his insights into how geospatial data and AI are transforming disaster management and crisis response. Discover the innovative strategies that enable faster, more accurate responses to humanitarian challenges.
In this episode, we explore the complexities of global AI regulation and enforcement, focusing on how governments and organizations can balance the need for compliance while fostering innovation. We dive into the challenges of supervising AI across different legislative frameworks and how these regulations shape the future of AI technologies.
Our featured guest, Huub Janssen, Manager on AI at the Ministry of Economic Affairs and the Dutch Authority for Digital Infrastructure, The Netherlands, shares his insights on navigating the regulatory landscape and driving responsible AI development.
In this insightful episode, we explore the intersection of AI governance and legal innovation. Join us as we discuss the critical challenges and opportunities that arise as organizations strive to implement responsible AI practices in an ever-evolving regulatory landscape.
Our esteemed guest, Hadassah Drukarch, Director of Policy and Delivery at the Responsible AI Institute, shares her expertise on how to navigate the complexities of AI governance, legal frameworks, and the importance of fostering ethical AI practices.
In this compelling episode, we explore how artificial intelligence is transforming disaster response efforts, especially for vulnerable communities impacted by crises. Join us as we discuss innovative strategies that leverage AI to enhance humanitarian action and build more resilient systems.
Our special guest, Katya Klinova, Head of AI and Data Insights for Social and Humanitarian Action at the United Nations Secretary-General's Innovation Lab, shares invaluable insights into the role of AI in disaster management and its potential to bridge critical gaps in support for those most in need.
In the latest episode of the RegulatingAI Podcast at the World Summit AI on October 9, 2024, the discussion dives deep into the critical AI competencies driving organizational transformation. The episode explores how AI revolutionizes the workforce through augmentation, reskilling, and enhancing human-computer interaction, all while promoting ethical AI hiring practices.
Special guest Dr. Kevin J. Jones, Director at the IU Columbus Center for Teaching and Learning and Associate Professor of Management, shares insights on how leaders can leverage AI to enhance their organizations and stay ahead of the curve.
On this episode, I’m joined by Ruslan Salakhutdinov, UPMC Professor of Computer Science at Carnegie Mellon University. Ruslan discusses the pressing need for AI regulation, its potential for societal transformation and the ethical considerations of its future development, including how to safeguard humanity while embracing innovation.
Key Takeaways:
(02:14) The need to regulate AI to prevent monopolization by large corporations.
(06:03) The dangers of AI-driven misinformation and its impact on public opinion.
(10:32) The risks AI poses in job displacement across multiple industries.
(14:22) How deepfake technology is evolving and its potential consequences.
(18:47) The challenge of balancing AI innovation with data privacy concerns.
(22:10) AI’s growing role in military applications and the need for careful oversight.
(26:05) How AI agents could autonomously interact and the risks involved.
(31:30) The potential for AI to surpass human performance in certain professions.
(37:14) Why international collaboration is critical for effective AI regulation.
(42:56) The ethical dilemmas surrounding AI’s influence in healthcare and decision-making.
Resources Mentioned:
https://www.linkedin.com/in/russ-salakhutdinov-53a0b610/
https://openai.com/index/sora/
Geoffrey Hinton and his contributions to AI -
https://www.linkedin.com/pulse/geoffrey-hinton-alan-francis/
https://www.cmu.edu
The race for AI leadership is not just about technology; it’s a battle of values and national security that will shape our future. In this episode, I’m joined by Senator Todd Young, United States Senator (R-Ind.) at the United States Senate. He shares insights into AI policy, national security and the steps needed to maintain US leadership in this critical field.
Key Takeaways:
(01:54) The bipartisan effort behind the Senate AI Working Group.
(03:34) How existing laws adapt to an AI-enabled world.
(05:17) Identifying AI risks and regulatory barriers.
(07:41) The role of government expertise in AI-related areas.
(10:12) Understanding the significance of the $32 billion AI public investment.
(13:17) Applying AI innovations across various industries.
(15:27) The impact of China on AI competition and US strategy.
(17:44) Why semiconductors are vital to AI development.
(20:26) Balancing open-source and closed-source AI models.
(22:51) The need for global AI standards and harmonization.
Resources Mentioned:
https://www.linkedin.com/in/senator-todd-young/
https://www.young.senate.gov/
https://www.linkedin.com/company/ussenate/
National AI Research Resource -
https://nairrpilot.org/
https://www.whitehouse.gov/briefing-room/statements-releases/2022/08/09/fact-sheet-chips-and-science-act-will-lower-costs-create-jobs-strengthen-supply-chains-and-counter-china/
https://www.young.senate.gov/wp-content/uploads/One_Pager_Roadmap.pdf
National Security Commission on Artificial Intelligence -
https://reports.nscai.gov/final-report/introduction
AI and RNA are revolutionizing drug discovery, promising a future where life-saving medications are developed faster and at lower costs.
In this episode, Raphael Townshend, PhD, Founder and CEO of Atomic AI, sits down with me to discuss the intersection of AI and RNA in drug development. We explore how AI technologies can reduce the cost and time required for clinical trials and target previously incurable diseases.
Key Takeaways:
(02:15) Raphael’s background in AI and biology, and the founding of Atomic AI.
(05:59) Reducing time and failure rate in drug discovery with AI.
(07:16) AlphaFold's breakthrough in understanding molecular shapes using AI.
(09:23) Ensuring transparency and accountability in AI-driven drug discovery.
(12:22) Navigating intellectual property concerns in healthcare AI.
(15:34) Integrating AI with wet lab testing for accurate drug discovery results.
(17:31) Balancing intellectual property and open research in biotech.
(20:02) Addressing data privacy and security in AI algorithms.
(22:30) Educating users and healthcare professionals about AI in drug discovery.
(24:48) Collaborating with global regulators for AI-driven drug discovery innovations.
Resources Mentioned:
https://www.linkedin.com/in/raphael-townshend-9154962a/
Atomic AI | LinkedIn -
https://www.linkedin.com/company/atomic-ai-rna/
https://deepmind.google/technologies/alphafold/
https://atomic.ai/
https://www.biospace.com/atomic-ai-creates-first-large-language-model-using-chemical-mapping-data-to-optimize-rna-therapeutic-development
In this episode, I’m joined by Dr. Rashawn Ray, Vice President at the American Institutes for Research (AIR) and Executive Director of AIR Equity Initiative, Professor of Sociology at the University of Maryland and Senior Fellow at The Brookings Institution. Dr. Ray’s innovative work lies at the powerful intersection of policing, technology and social equity, where he explores how AI can be designed and implemented to enhance fairness, reduce inequality and ultimately be a force for positive change in both local communities and the broader world.
Key Takeaways:
(01:00) Regulating AI without stifling innovation is crucial.
(07:06) How virtual reality enhances police training by addressing implicit bias.
(12:22) The impact of diverse teams on equitable AI development.
(19:36) Overcoming challenges in implementing VR training in smaller law enforcement agencies.
(25:50) Tech companies collaborating on socially impactful AI projects is vital.
(31:55) Community involvement is critical in shaping AI and VR technologies.
(36:21) The role of DEI initiatives in improving AI’s fairness and effectiveness.
(42:09) The future of AI legislation and its potential to democratize technology.
Resources Mentioned:
https://www.linkedin.com/in/sociologistray/
AIR | Website - https://www.air.org/
AIR Equity Initiative | LinkedIn -
https://www.linkedin.com/showcase/air-equity-initiative/about/
AIR Equity Initiative Website -
https://www.air.org/air-equity-initiative-bridge-more-equitable-world
Lab for Applied Social Science Research -
https://socy.umd.edu/centers/lab-applied-social-science-research-%28lassr%29
https://www.brookings.edu
https://www.air.org/experts/person/rashawn-ray
https://www.rashawnray.com/
“Extracting Protest Events from Newspaper Articles with ChatGPT” (working paper) - https://uncmap.org/publication/chat-wp/
“5 questions policymakers should ask about facial recognition, law enforcement and algorithmic bias” - https://www.brookings.edu/articles/5-questions-policymakers-should-ask-about-facial-recognition-law-enforcement-and-algorithmic-bias/
“Examining equity in transportation safety enforcement” -
https://www.brookings.edu/articles/examining-equity-in-transportation-safety-enforcement/
On this episode, I’m joined by Senator Mike Rounds, US Senator for South Dakota and Co-Chair of the Senate AI Caucus, to discuss how the US can regulate AI responsibly while fostering innovation. With his extensive experience in both state and federal government, Senator Rounds shares his insights into the Bipartisan Senate AI Working Group and its roadmap for AI policy.
Key Takeaways:
(01:23) The Bipartisan Senate AI Working Group aims to balance AI regulation and innovation.
(05:07) Why intellectual property protections are essential in AI development.
(07:27) National security implications of AI in weapons systems and defense.
(09:19) The potential of AI to revolutionize healthcare through faster drug approvals.
(10:55) How AI can aid in detecting and combating biological threats.
(15:00) The importance of workforce training to mitigate AI-driven job displacement.
(19:05) The role of community colleges in preparing the workforce for an AI-driven future.
(24:00) Insights from international collaboration on AI regulation.
Resources Mentioned:
Senator Mike Rounds Homepage -
https://www.rounds.senate.gov/
https://www.rounds.senate.gov/newsroom/press-releases/rounds-introduces-artificial-intelligence-policy-package
https://www.linkedin.com/company/medshield-llc
In this episode, I’m joined by Charity Rae Clark, Vermont Attorney General, and Monique Priestley, Vermont State Representative. They have been instrumental in shaping Vermont’s legislative approach to data privacy and AI. We dive into the challenges of regulating AI to keep citizens safe, the importance of data minimization and the broader implications for society.
Key Takeaways:
(02:10) “Free” apps and websites take payment with your data.
(08:15) The Data Privacy Act includes stringent provisions to protect children online.
(10:05) Protecting consumer privacy and reducing security risks.
(15:29) Vermont’s legislative journey includes educating lawmakers.
(18:45) Innovation and regulation must be balanced for future AI development.
(23:50) Collaboration and education can overcome intense pressure from lobbyists.
(30:02) AI’s potential to exacerbate discrimination demands regulation.
(36:15) Deepfakes present a growing threat.
(42:40) Consumer trust could be lost due to premature releases of AI products.
(50:10) The necessity of a strong foundation in data privacy.
Resources Mentioned:
https://www.linkedin.com/in/charityrclark/
https://www.linkedin.com/in/mepriestley/
Vermont -
https://www.linkedin.com/company/state-of-vermont/
“The Age of Surveillance Capitalism” by Shoshana Zuboff -
https://www.amazon.com/Age-Surveillance-Capitalism-Future-Frontier/dp/1610395697
“Why Privacy Matters” by Neil Richards -
https://www.amazon.com/Why-Privacy-Matters-Neil-Richards/dp/0190940553
Dive into the tangled web of AI and copyright law with Keith Kupferschmid, CEO of the Copyright Alliance, as he reveals how AI companies navigate legal responsibilities and examines what creators can do to safeguard their intellectual property in an AI-driven world.
Key Takeaways:
(02:00) The Copyright Alliance represents over 15,000 organizations and 2 million individual creators.
(05:12) Two potential copyright infringement settings: during the ingestion process and the output stage.
(06:00) There have been 17 or 18 AI copyright cases filed recently.
(08:00) Fair Use in AI is not categorical and is decided on a case-by-case basis.
(13:32) AI companies often shift liability to prompters, but both can be held liable under existing laws.
(15:00) Creators should clearly state their licensing preferences on their works to protect themselves.
(17:50) Current copyright laws are flexible enough to adapt to AI without needing new legislation.
(20:00) Market-based solutions, such as licensing, are crucial for addressing AI copyright issues.
(27:34) Education and public awareness are vital for understanding copyright issues related to AI.
Resources Mentioned:
https://www.linkedin.com/in/keith-kupferschmid-723b19a/
https://copyrightalliance.org
https://www.copyright.gov
https://www.gettyimages.com
National Association of Realtors -
https://www.nar.realtor
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
The future of AI lies at the intersection of technology and ethics. How do we navigate this complex landscape? Today, I’m joined by Maria Luciana Axente, Head of Public Policy and Ethics at PwC UK and Intellectual Forum Senior Research Associate at Jesus College Cambridge, who offers key insights into the ethical implications of AI.
Key Takeaways:
(03:56) The importance of integrating ethical principles into AI.
(08:22) Preserving humanity in the age of AI.
(12:19) Embedding value alignment in AI systems.
(15:59) Fairness and voluntary commitments in AI.
(21:01) Participatory AI and including diverse voices.
(24:05) Cultural value systems shaping AI policies.
(26:25) The importance of reflecting on AI’s impact before implementation.
(27:48) Learning from other industries to govern AI better.
(28:59) AI as a socio-technical system, not just technology.
Resources Mentioned:
https://www.linkedin.com/in/mariaaxente/
PwC UK -
https://www.linkedin.com/company/pwc-uk/
https://www.linkedin.com/company/jesus-college-cambridge/
https://www.pwc.co.uk/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Can AI spark new creative revolutions? On this episode, I’m joined by Lianne Baron, Strategic Partner Manager for Creative Partnerships at Meta. Lianne unveils how AI is not just a tool but a transformative force in the creative landscape, emphasizing the irreplaceable value of human imagination. We explore the rapid pace of innovation, the challenges of embracing new tech, and the exciting future of idea generation and delivery.
Key Takeaways:
(03:50) Embrace AI's changes; it challenges traditional methods.
(05:13) AI speeds up the journey from imagination to delivery.
(07:15) The move to cinematic quality sparks excitement and fear.
(08:30) Education is key in democratizing AI for all.
(15:00) Risk of bias without diverse voices in AI development.
(17:15) Ideas, not skills, are the new currency in AI.
(26:16) Imagination and human experience are irreplaceable by AI.
(29:11) AI can democratize storytelling, sharing diverse narratives.
(33:00) AI breaks down barriers, fostering new creative opportunities.
(36:20) Understanding authenticity is crucial in an AI-driven world.
Resources Mentioned:
https://www.linkedin.com/in/liannebaron/
Meta -
https://www.meta.com/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
The potential of AI is transforming industries, but how do we regulate this rapidly evolving technology without stifling innovation?
On this episode, I’m joined by Professor Zico Kolter, Professor and Director of the Machine Learning Department at Carnegie Mellon University and Chief Expert at Bosch USA, who shares his insights on AI regulation and its challenges.
Key Takeaways:
(02:41) AI innovation outpaces legislation.
(04:00) The distinction between regulating the technology itself and regulating its use is crucial.
(06:36) AI is advancing faster than ever.
(11:14) Companies must prevent AI misuse.
(15:30) Bias-free algorithms are not feasible.
(21:34) Human interaction in AI decisions is essential.
(27:49) The competitive environment benefits AI development.
(32:26) Regulations that everyone accepts without objection have likely gotten something wrong.
(37:52) Regulations should adapt to technological changes.
(42:49) AI developers aim to benefit people.
(45:16) Human-in-the-loop AI is crucial for reliability.
(46:30) Addressing gaps in AI systems is critical.
Resources Mentioned:
Zico Kolter - https://www.linkedin.com/in/zico-kolter-560382a4/
Carnegie Mellon University - https://www.linkedin.com/school/carnegie-mellon-university/
Bosch USA - https://www.linkedin.com/company/boschusa/
EU AI Act - https://ec.europa.eu/digital-strategy/our-policies/eu-regulatory-framework-artificial-intelligence_en
OpenAI - https://www.openai.com/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I’m joined by Professor Paul Rainey to discuss the evolutionary principles applicable to AI development and the potential risks of self-replicating AI systems. Paul is Director of the Department of Microbial Population Biology at the Max Planck Institute for Evolutionary Biology in Plön; Professor at ESPCI in Paris; Fellow of the Royal Society of New Zealand; a Member of EMBO & European Academy of Microbiology; and Honorary Professor at Christian Albrechts University in Kiel.
Key Takeaways:
(00:04) Evolutionary transitions form higher-level structures.
(00:06) Eukaryotic cells parallel future AI-human interactions.
(00:08) Major evolutionary transitions inform AI-human interactions.
(00:11) Algorithms can evolve with variation, replication and heredity.
(00:13) Natural selection drives complexity.
(00:18) AI adapts to selective pressures unpredictably.
(00:21) Humans risk losing autonomy to AI.
(00:25) Societal engagement is needed before developing self-replicating AIs.
(00:30) The challenge of controlling self-replicating systems.
(00:33) Interdisciplinary collaboration is crucial for AI challenges.
Resources Mentioned:
Max Planck Institute for Evolutionary Biology
Professor Paul Rainey - Max Planck Institute
Max Planck Research Magazine - Issue 3/2023
Paul Rainey’s article in The Royal Society Publishing
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
In this episode, I’m joined by Jaap van Etten, CEO and Co-Founder of Datenna, the leading provider of techno-economic intelligence in China. Jaap’s unique background as a diplomat turned entrepreneur provides invaluable insights into the intersection of AI, innovation and policy.
Key Takeaways:
(01:30) Transitioning from diplomat to tech entrepreneur.
(05:23) Key differences in AI approaches between China, Europe and the US.
(07:20) The Chinese entrepreneurial mindset and its impact on innovation.
(10:03) China’s strategy in AI and the importance of being a technological leader.
(17:05) Challenges and misconceptions about China’s technological capabilities.
(23:17) Recommendations for AI regulation and international cooperation.
(30:19) Jaap’s perspective on the future of AI legislation.
(35:12) The role of AI in policymaking and decision-making.
(40:54) Policymakers need scenario planning and foresight exercises to keep up with rapid technological advancements.
Resources Mentioned:
Jaap van Etten - https://www.linkedin.com/in/jaapvanetten/
Datenna - https://www.linkedin.com/company/datenna/
https://www.nytimes.com/2006/05/15/technology/15fraud.htm
http://www.china.org.cn/english/scitech/168482.htm
https://en.wikipedia.org/wiki/Hanxin
https://www.linkedin.com/pulse/china-marching-forward-artificial-intelligence-jaap-van-etten/
https://github.com/Kkevsterrr/geneva
https://www.grc.com/sn/sn-779.pdf
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I’m joined by Dr. Abhinav Valada, Professor and Director of the Robot Learning Lab at the University of Freiburg, to explore the future of robotics and the essential regulations needed for their integration into society.
Key Takeaways:
(00:00) The potential economic impact of AI.
(03:37) The distinction between perceived and actual AI capabilities.
(04:24) Challenges in training robots with real-world data.
(08:51) Limitations of current AI reasoning capabilities.
(13:16) The importance of conveying robot intent for collaboration.
(17:33) The need for specific guidelines for robotic systems.
(21:00) Mandating AI ethics courses in Germany.
(25:10) Collaborative robots and workforce implications.
(30:00) Privacy issues in human-robot interaction.
(35:02) The importance of pilot programs for autonomous vehicles.
(39:00) International collaboration in AI legislation.
(40:38) Inclusion of diverse voices in robotics research.
Resources Mentioned:
Dr. Abhinav Valada - https://www.linkedin.com/in/avalada/
University of Freiburg - https://www.linkedin.com/company/university-of-freiburg/
EU AI Act - https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
Robot Learning Lab, University of Freiburg - https://www.researchgate.net/lab/Robot-Learning-Lab-Abhinav-Valada
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Striking a balance between artificial intelligence innovation and regulation is crucial for leveraging its benefits while safeguarding against risks. On this episode, I’m joined by Congressman Buddy Carter, U.S. Representative for Georgia's 1st District, to explore the complexities of AI regulation and its impact on healthcare and other sectors.
Key Takeaways:
(01:48) President Biden's Executive Order on AI aims to set new standards.
(04:34) AI's potential in healthcare, including telehealth and drug development.
(05:47) Legal implications for doctors not using available AI technologies.
(07:55) AI could speed up the drug development process.
(10:52) The need for constantly updated AI standards.
(11:56) Debate on creating a separate regulatory body for AI.
(14:03) Importance of including diverse voices in AI regulation.
(16:57) Federal preemption of state and local AI laws to avoid regulatory patchwork.
Resources Mentioned:
Buddy Carter - https://www.linkedin.com/in/buddycarterga/
President Biden's Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
EU AI Act - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Section 230 of the Communications Decency Act - https://www.eff.org/issues/cda230
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I am joined by Daniel Colson, Executive Director of the AI Policy Institute, to consider some pressing issues. Daniel shares his insights into the risks, opportunities and future directions of AI policy.
Key Takeaways:
(02:15) Daniel analyzes President Biden's recent executive order on AI.
(04:13) Differentiating risks in AI technologies and their applications.
(08:52) Concerns about the open-sourcing of AI models and abuse potential.
(16:45) The importance of inclusive discussions in AI policymaking.
(19:25) Challenges and risks of regulatory capture in the AI sector.
(26:45) Balancing innovation with regulation.
(33:14) The potential for AI to transform employment and the economy.
(37:52) How AI's rapid evolution challenges our role as the dominant thinkers and prompts careful deliberation on its impact.
Resources Mentioned:
Daniel Colson - https://www.linkedin.com/in/danieljcolson/
AI Policy Institute - https://www.linkedin.com/company/aipolicyinstitute/
AI Policy Institute | Website - https://www.theaipi.org/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode of Regulating AI, I sit down with Professor Effy Vayena, Chair of Bioethics and Associate Vice President of Digital Transformation and Governance of the Swiss Federal Institute of Technology (ETH) and Co-Director of Stavros Niarchos Foundation Bioethics Academy. Together we delve deep into the world of AI, its ethical challenges, and how thoughtful regulation can ensure equitable benefits.
Key Takeaways:
(03:45) The importance of developing and using technology in ways that meet ethical standards.
(10:31) The necessity of agile regulation and continuous dialogue with tech developers.
(13:19) The concept of regulatory sandboxes for testing policies in a controlled environment.
(17:07) Balancing AI innovation with patient privacy and data security.
(24:14) Strategies to ensure AI benefits reach marginalized communities and promote health equity.
(35:10) Considering the global impact of AI and the digital divide.
(41:06) Including and educating the public in AI regulatory processes.
(44:04) The importance of international collaboration in AI regulation.
Resources Mentioned:
Professor Effy Vayena - https://www.linkedin.com/in/effy-vayena-467b1353/
Swiss Federal Institute of Technology (ETH) - https://www.linkedin.com/school/eth-zurich/
ETH Zurich - https://ethz.ch/en.html
European Union’s AI Act - https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
U.S. FDA guidelines on AI in medical devices - https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
The integration of AI into healthcare is not only transforming the way we diagnose, treat and manage patient care but is also redefining the roles of doctors. Join me as I sit down with Dr. Brennan Spiegel to explore how AI is revolutionizing the medical field. Brennan is a Professor of Medicine and Public Health; George and Dorothy Gourrich Chair in Digital Health Ethics; Director of Health Services Research; Director, Graduate Program in Health Delivery Science; Cedars-Sinai Site Director, Clinical and Translational Science Institute; and Editor-in-Chief, Journal of Medical Extended Reality.
Key Takeaways:
(03:00) Balancing AI benefits with concerns about algorithmic bias and fairness.
(05:47) Evaluating AI for implicit bias in mental health applications.
(08:03) The need for standardized guidance and rigorous oversight in AI applications.
(10:03) Ensuring data transmitted between AI providers and health systems is HIPAA compliant.
(16:42) The evolving role of doctors in the context of AI integration.
(21:22) The importance of traditional knowledge alongside AI in medical practice.
(24:44) International collaboration and standardized approaches to AI in healthcare.
Resources Mentioned:
Dr. Brennan Spiegel - https://www.linkedin.com/in/brennan-spiegel-md-mshs-2938a4142/
Cedars-Sinai - https://www.linkedin.com/company/cedars-sinai-medical-center/
Brennan Spiegel on X - https://x.com/BrennanSpiegel
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
In this episode, I welcome Carmel Shachar, Faculty Director of the Health Law and Policy Clinic and Assistant Clinical Professor of Law at Harvard Law School Center for Health Law and Policy Innovation. We delve into how AI is shaping the future of healthcare, its profound impacts and the vital importance of thoughtful regulation. The interplay between AI and healthcare is increasingly critical, pushing the boundaries of medicine while challenging our regulatory frameworks.
Key Takeaways:
(00:00) AI’s challenges in balancing patient data needs.
(03:09) The revolutionary potential of AI in healthcare innovation.
(04:30) How AI is driving precision and personalized medicine.
(06:19) The urgent need for healthcare system evolution.
(09:00) Potential negative impacts of poorly implemented AI.
(12:00) The unique challenges posed by AI as a medical device.
(15:10) Minimizing regulatory handoffs to enhance AI efficacy.
(18:00) How AI can reduce healthcare disparities.
(20:00) Ethical considerations and biases in AI deployment.
(25:00) AI’s growing impact on healthcare operations and management.
(30:00) Enhancing patient-physician communication with AI tools.
(39:00) Future directions in AI and healthcare policy.
Resources Mentioned:
Carmel Shachar - https://www.linkedin.com/in/carmel-shachar-7b3a8525/
Harvard Law School Center for Health Law and Policy Innovation - https://www.linkedin.com/company/harvardchlpi/
Carmel Shachar's Faculty Profile at Harvard Law School - https://hls.harvard.edu/faculty/carmel-shachar/
Precision Medicine, Artificial Intelligence and the Law Project - https://petrieflom.law.harvard.edu/research/precision-medicine-artificial-intelligence-and-law
Petrie-Flom Center Blog - https://blog.petrieflom.law.harvard.edu/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I welcome Ari Kaplan, Head Evangelist of Databricks, a leading data and AI company. We discuss the intricacies of AI regulation, how different regions, like the US and EU, are addressing AI’s rapid development, and the importance of industry perspectives in shaping effective legislation.
Key Takeaways:
(04:42) Insights on the rapid advancements in AI technology and legislative responses.
(10:32) The role of tech leaders in shaping AI policy and bridging knowledge gaps.
(13:57) Open-source versus closed-source AI — Ari Kaplan advocates for transparency.
(16:56) Ethical concerns in AI across different countries.
(21:21) The necessity for both industry-specific and overarching AI regulations.
(25:09) Automation’s potential to improve efficiency also raises employment risk.
(29:17) A balanced, educational approach in the age of AI is crucial.
(32:45) Risks associated with generative AI and the importance of intellectual property rights.
Resources Mentioned:
Ari Kaplan - https://www.linkedin.com/in/arikaplan/
Databricks - https://www.linkedin.com/company/databricks/
Unity Catalog Governance Value Levers - https://www.databricks.com/blog/unity-catalog-governance-value-levers
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
EU AI Act Information - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
In this episode, I welcome Nicolas Kourtellis, Co-Director of Telefónica Research and Head of Systems AI Lab at Telefónica Innovación Digital, a company of the Telefónica Group. Nicolas shares his expert insights on the pivotal role of AI in revolutionizing telecommunications, the challenges of AI regulation and the innovative strides Telefónica is making toward sustainable and ethical AI deployment.
Imagine a world where every device you own not only connects seamlessly but also intelligently adapts to your needs. This isn’t just a vision for the future; it’s the reality AI is creating today in telecommunications.
Key Takeaways:
(00:00) AI research focuses and applications in telecommunications.
(03:24) AI’s role in optimizing network systems and enhancing user privacy is critical.
(06:00) How Telefónica uses AI to improve customer service through AI chatbots.
(12:03) The ethical considerations and sustainability of AI models.
(16:08) Democratizing AI to make it accessible and beneficial for all users.
(18:09) Designing AI systems with privacy and security from the start.
(27:00) The challenges and opportunities AI presents for the workforce.
(30:25) The potential of 6G and its reliance on AI technologies.
(32:16) The integral role of AI in future technological advancements and network optimizations.
(39:35) The societal impacts of AI in telecommunications.
Resources Mentioned:
Nicolas Kourtellis - https://www.linkedin.com/in/nicolas-kourtellis-3a154511/
Telefónica Innovación Digital - https://www.linkedin.com/company/telefonica-innovacion-digital/
Telefónica Group - https://www.linkedin.com/company/telefonica/
You can find all of Nicolas’ publications on his Google Scholar page: http://scholar.google.com/citations?user=Q5oWwiQAAAAJ
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode of the Regulating AI Podcast, I'm joined by Dr. Irina Mirkina, Innovation Manager and AI Lead at UNICEF's Office of Innovation. An AI strategist, speaker, and expert for the European Commission, Dr. Mirkina brings a wealth of experience from academia, the private sector, and now, the humanitarian sector. Today’s discussion focuses on AI for social good.
Key Takeaways:
(03:31) The role of international organizations like UNICEF in shaping global AI regulations.
(07:06) Challenges of democratizing AI across different regions to overcome the digital divide.
(10:28) The importance of developing AI systems that cater to local contexts.
(13:23) The transformative potential and limitations of AI in personalized education.
(16:37) Engaging vulnerable populations directly in AI policy discussions.
(20:47) UNICEF's use of AI in addressing humanitarian challenges.
(25:10) The role of civil society in AI regulation and policymaking.
(33:50) AI's risks and limitations, including issues of open-source management and societal impact.
(38:57) The critical need for international collaboration and standardization in AI regulations.
Resources Mentioned:
Dr. Irina Mirkina - https://www.linkedin.com/in/irinamirkina/
UNICEF Office of Innovation - https://www.unicef.org/innovation/
Policy Guidance on AI for Children by UNICEF - https://www.unicef.org/globalinsight/media/2356/file/UNICEF-Global-Insight-policy-guidance-AI-children-2.0-2021.pdf
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I’m joined by Professor Angela Zhang, Associate Professor of Law at the University of Hong Kong and Director of the Philip K. H. Wong Center for Chinese Law. We delve into the complexities of AI regulation in China, exploring how the government’s strategies impact both the global market and internal policies.
Key Takeaways:
(02:14) The introduction of China’s approach to AI regulation.
(06:40) Discussion on the volatile nature of Chinese regulatory processes.
(10:26) How China’s AI strategy impacts international relations and global standards.
(13:32) Angela explains the strategic use of law as an enabler in China’s AI development.
(18:53) High-level talks between the US and China on AI risk have not led to substantive actions.
(22:04) The US’s short-term gains from AI chip restrictions on China may lead to long-term disadvantages as China becomes self-sufficient and less cooperative.
(24:13) Unintended consequences of the Chinese regulatory system.
(29:19) Angela advocates for a slower development of AI technology to better assess and manage risks before they become unmanageable.
Resources Mentioned:
Professor Angela Zhang - http://www.angelazhang.net
High Wire by Angela Zhang - https://global.oup.com/academic/product/high-wire-9780197682258
Article: The Promise and Perils of China’s Regulation - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676
Research: Generative AI and Copyright: A Dynamic Perspective - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233
Research: The Promise and Perils of China's Regulation of Artificial Intelligence - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676
Angela Zhang’s Website - https://www.angelazhang.net/
High Wire Book Trailer - https://www.youtube.com/watch?v=u6OPSit6k6s
Purchase High Wire by Angela Zhang - https://www.amazon.com/High-Wire-Regulates-Governs-Economy/dp/0197682251/ref=sr_1_1?crid=2A7D070KIAGT&keywords=high+wire+angela+zhang&qid=1706441967&sprefix=high+wire+angela+zha,aps,333&sr=8-1
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I am thrilled to sit down with Congressman Joseph Morelle, who represents New York's 25th Congressional District and serves on the House Appropriations Committee. As an influential voice in the dialogue on artificial intelligence, Congressman Morelle shares his deep insights into AI's potential and challenges, particularly concerning legislation and societal impacts.
Key Takeaways:
(02:13) Congressman Morelle's extensive experience in AI legislation and its implications.
(04:27) Deepfakes and their growing threat to privacy and integrity.
(07:13) Introducing federal legislation against non-consensual deepfakes.
(14:00) Urgent need for social media platforms to enforce their guidelines rigorously.
(19:46) The No AI FRAUD Act and protecting individual likeness in AI use.
(23:06) The importance of adaptable and 'living' statutes in technology regulation.
(32:59) The critical role of continuous education and skill adaptation in the AI era.
(37:47) Exploring the use of AI in Congress to ensure unbiased, culturally appropriate policymaking and data privacy.
Resources Mentioned:
Congressman Joseph Morelle - https://www.linkedin.com/in/joe-morelle-8246099/
No AI FRAUD Act - https://www.congress.gov/bill/118th-congress/house-bill/6943/text?s=1&r=9
Preventing Deepfakes of Intimate Images Act - https://www.congress.gov/bill/118th-congress/house-bill/3106
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I welcome Dr. Sethuraman Panchanathan, Director of the U.S. National Science Foundation and a professor at Arizona State University. Sethuraman shares personal insights on the transformative power of artificial intelligence and the importance of democratizing this technology to be sure it benefits humanity as a whole.
Key Takeaways:
(00:21) AI’s pivotal role in enhancing speech-language services.
(01:28) Introduction to Sethuraman’s visionary leadership at NSF.
(02:36) NSF’s significant AI investment totaled over $820 million.
(06:19) The shift toward interdisciplinary AI research at NSF.
(10:26) NSF’s initiative of launching 25 AI institutes for innovation.
(18:26) Emphasis on AI democratization through education and training.
(25:11) The NSF ExpandAI program boosts AI in minority-serving institutions.
(30:21) Focus on ethical AI development to build public trust.
(40:10) AI’s transformative applications in healthcare, agriculture and more.
(42:45) The importance of ethical guardrails in AI’s development.
(43:08) Advancing AI through international collaborations.
(44:53) Lessons from a career in AI and advice for the next generation.
(50:19) Motivating young researchers and entrepreneurs in AI.
(52:24) Advocating for AI innovation and accessibility for everyone.
Resources Mentioned:
https://www.linkedin.com/in/drpanch/
U.S. National Science Foundation | LinkedIn -
https://www.linkedin.com/company/national-science-foundation/
U.S. National Science Foundation | Website -
https://www.nsf.gov/
https://www.linkedin.com/school/arizona-state-university/
https://new.nsf.gov/funding/opportunities/expanding-ai-innovation-through-capacity-building
Dr. Sethuraman Panchanathan’s NSF Profile -
https://www.nsf.gov/staff/staff_bio.jsp?lan=spanchan
NSF Regional Innovation Engines -
https://new.nsf.gov/funding/initiatives/regional-innovation-engines
National AI Research Resource (NAIRR) -
https://new.nsf.gov/focus-areas/artificial-intelligence/nairr
NSF Focus on Artificial Intelligence -
https://new.nsf.gov/focus-areas/artificial-intelligence
https://new.nsf.gov/funding/opportunities/national-artificial-intelligence-research
GRANTED Initiative for Broadening Participation in STEM -
https://new.nsf.gov/funding/initiatives/broadening-participation/granted
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
The rapid evolution of artificial intelligence in cybersecurity presents both significant opportunities and daunting challenges. On this episode, I'm joined by Bruce Schneier, who is renowned globally for his expertise in cybersecurity and is dubbed a “security guru” by the Economist. Bruce, a best-selling author and lecturer at Harvard Kennedy School, discusses the fast-paced world of AI and cybersecurity, exploring how these technologies intersect with national security and what that means for future regulations.
Key Takeaways:
(00:00) I discuss with Bruce the challenges of regulating AI in the US.
(02:28) Bruce explains the role and future potential of AI in cybersecurity.
(05:05) The benefits of AI in defense, enhancing capabilities at computer speeds.
(07:22) The need for robust regulations akin to those in the EU.
(12:56) Bruce draws analogies between AI regulation and pharmaceutical controls.
(19:56) The critical role of knowledgeable staff in supporting legislators.
(22:24) The challenges of effectively regulating AI.
(26:15) The potential of AI to transform enforcement across various sectors.
(30:58) Reflections on the future of AI governance and ethical considerations.
Resources Mentioned:
Bruce Schneier Website - https://www.schneier.com/
EU AI Strategy - https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I’m joined by Trooper Sanders, CEO of Benefits Data Trust and a member of the White House National Artificial Intelligence Advisory Committee. Trooper’s expertise in leveraging AI to enhance the efficiency and humanity of America’s social safety net offers unique insights into the potential and challenges of AI in public services.
Key Takeaways:
(02:27) The role of Benefits Data Trust in connecting people to essential benefits using AI.
(04:54) The components of trustworthy AI: reliability, public interest alignment, security, transparency, explainability, privacy and harm mitigation.
(09:38) The ‘tortoise and hare’ challenge in aligning AI advancements with legislative processes.
(16:17) The significance of voluntary industry commitments in shaping AI’s ethical use.
(20:32) Ethical considerations in deploying AI, focusing on its societal impact and the readiness of systems for AI integration.
(22:53) Addressing biases in AI to ensure fairness and equitable benefits across all socioeconomic groups.
(27:52) Amplifying diverse voices in the AI discussion to encompass a wide range of societal perspectives.
(34:22) The potential workforce disruption by AI and the necessity of supportive measures for affected individuals.
(37:26) Considering the potentially massive impact of AI-driven career changes across various professions.
Resources Mentioned:
https://www.linkedin.com/in/troopersanders/
Benefits Data Trust | LinkedIn -
https://www.linkedin.com/company/benefits-data-trust/
Benefits Data Trust | Website -
https://bdtrust.org/
White House National Artificial Intelligence Advisory Committee -
https://www.whitehouse.gov/ostp/ostps-teams/nstc/select-committee-on-artificial-intelligence/
BDT Launches AI and Human Services Learning Hub -
https://bdtrust.org/bdt-launches-ai-learning-lab/
Our Vision for an Intelligent Human Services and Benefits Access System -
https://bdtrust.org/our-vision-for-an-intelligent-human-services-and-benefits-access-system
Humans Must Control Human-Serving AI -
https://bdtrust.org/media-coverage-humans-must-control-human-serving-ai/
https://bdtrust.org/trooper-sanders/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
I'm thrilled to be joined by Dr. Paul Lushenko, a Lieutenant Colonel in the U.S. Army and Director of Special Operations at the U.S. Army War College. Dr. Lushenko brings a wealth of knowledge from the front line of AI implementation in military strategy. He joins me to share his insights into the delicate balance between innovation and regulation.
Key Takeaways:
(02:28) The necessity of addressing AI’s impact on warfare and crisis escalation.
(06:37) The gaps in global governance regarding AI and autonomous weapon systems.
(08:30) U.S. policies on the responsible use of AI in military operations.
(16:29) The importance of cutting-edge research in informing legislative actions on AI.
(18:49) The risk of biases in AI systems used in national security.
(20:09) Discussion on automation bias and its consequences in military operations.
(24:44) Dr. Lushenko argues for the adoption of a strategic framework to guide AI development in military contexts.
(32:49) Emphasis on the importance of careful management and extensive testing to build trust in AI systems within the military.
(39:51) The critical need for data-driven decision-making in high-stakes environments, advocating for leveraging expert insights.
Resources Mentioned:
https://www.linkedin.com/in/paul-lushenko-phd-5b805113/
https://www.linkedin.com/school/united-states-army-war-college/
Political Declaration on Responsible Use of AI in Military Technologies -
https://www.state.gov/wp-content/uploads/2023/10/Latest-Version-Political-Declaration-on-Responsible-Military-Use-of-AI-and-Autonomy.pdf
Memorandum on Ethical Use of AI - White House 2023 -
https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I welcome Randi Weingarten, President of the American Federation of Teachers (AFT). She discusses why implementing AI in education requires a collaborative effort. Join us as we explore the challenges and opportunities of AI in shaping equitable and effective educational environments.
Key Takeaways:
(01:08) Introduction of Randi Weingarten and her role in the AFT.
(05:00) The critical issue of ensuring equitable access to AI technologies in education.
(08:06) Addressing bias and discrimination within AI-driven educational systems.
(11:53) The importance of inclusive participation in the implementation of educational technologies.
(13:09) The evolving necessity for educators to acquire new skills in response to AI advancements.
(17:26) The role of personalized teaching as a complement, not a replacement, for traditional educational methods.
(18:08) Concerns surrounding data privacy and security within AI-driven platforms.
(20:25) The need for regulation and oversight in the application of AI in educational settings.
(25:22) The potential for productive industry collaboration in developing AI tools for education.
(30:28) Advocating for a just transition fund to support workers displaced by AI and technological advancements.
Resources Mentioned:
Randi Weingarten - https://www.linkedin.com/in/randi-weingarten-05896224/
American Federation of Teachers - https://www.aft.org/
Testimony to Senator Schumer by Randi Weingarten on equity in AI - https://www.aft.org/press-release/afts-weingarten-calls-ai-guardrails-smart-regulation-ensure-new-technology-benefits
Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
AI regulation is not a simple field, particularly in the realm of national security, and it requires a nuanced approach. In this episode, I welcome Anja Manuel, the Executive Director of the Aspen Strategy Group and the Aspen Security Forum, as well as Co-Founder and Partner at Rice, Hadley, Gates & Manuel, LLC. Anja’s insights make the path forward clearer, framing effective AI legislation and emphasizing the need for global cooperation and ethical considerations. Her perspective, deeply rooted in national security expertise, underscores the critical balance between innovation and safeguarding against misuse.
Key Takeaways:
(00:17) The functionality of intelligence committees across party lines.
(00:59) AI in warfare reflects a shift from World War I tactics to modern tech battles.
(03:10) The rapid innovation in military technology and the US’s efforts to adapt.
(03:53) Risks of unregulated AI, including in cyber, autonomous weapons and bio-tech.
(07:09) AI regulation is needed both globally and nationally.
(11:21) International collaboration plays a vital role in AI regulation.
(13:39) Ethical considerations unique to AI applications in national security.
(14:31) National security agencies’ openness to regulatory frameworks.
(15:35) Public-private collaboration in addressing national security considerations.
(17:08) Establishing standards in AI technology for national security is necessary.
(18:28) Regulation of autonomous weapons and international agreements.
(19:32) Balancing secrecy in national security operations with public scrutiny of AI use.
(20:17) AI’s role and risks in intelligence and privacy.
(21:13) Regulating AI in cybersecurity and other areas is a challenge.
Resources Mentioned:
Anja Manuel - https://www.linkedin.com/in/anja-manuel-26805023/
Aspen Strategy Group - https://www.aspeninstitute.org/programs/aspen-strategy-group/
Aspen Security Forum - https://www.aspensecurityforum.org/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I’m joined by Dr. Gunter Beitinger, Senior Vice President of Manufacturing and Head of Factory Digitalization and Product Carbon Footprint at Siemens. Dr. Beitinger lends a comprehensive view on AI’s role in transforming manufacturing, emphasizing its potential to enhance productivity, ensure workforce well-being and drive sustainable practices without displacing human labor.
Key Takeaways:
(02:17) Dr. Beitinger’s extensive background and role at Siemens.
(05:13) Specific examples of AI-driven improvements in Siemens’ operations.
(07:52) The measurable productivity gains attributed to AI in manufacturing.
(10:02) The impact of AI on employment and the importance of re-skilling.
(13:06) The necessity for a collaborative approach between governments and the private sector in workforce development.
(16:24) The role of AI in improving the working conditions of industrial workers.
(26:53) The potential for smaller companies to leverage AI and compete with industry giants.
(36:49) AI’s future role in creating digital twins and the industrial metaverse.
Resources Mentioned:
https://www.linkedin.com/in/gunter-dr-beitinger/
Siemens | LinkedIn -
https://www.linkedin.com/showcase/siemens-industry-/?trk=public_post-text
Siemens | Website -
https://www.siemens.com/
https://blog.siemens.com/space/artificial-intelligence-in-industry/
https://blog.siemens.com/2023/07/the-need-to-rethink-production/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I’m joined by Sarah Kreps, the John L. Wetherill Professor in the Department of Government, Adjunct Professor of Law, and Director of the Tech Policy Institute at the Cornell Brooks School of Public Policy. Her expertise in international politics, technology and national security offers a valuable perspective on shaping AI legislation.
Key Takeaways:
(00:20) The significant impact of industry and NGOs on AI regulation and congressional awareness.
(03:27) AI's multifaceted applications and its national security implications.
(05:07) Advanced efficiency of AI in misinformation campaigns and the importance of legislative responses.
(10:58) Proactive measures by AI firms like OpenAI for electoral fidelity and misinformation control.
(14:23) The challenge of balancing AI innovation with security and economic considerations in legislation.
(20:30) Concerns about potential AI monopolies and the economic consequences.
(28:16) Ethical and practical aspects of AI assistance in legislative processes.
(30:13) The critical need for human involvement in AI-augmented military decisions.
(35:32) National security agencies' approach to AI regulatory frameworks.
(39:13) The imperative of Congress's engagement with diverse sectors for comprehensive AI legislation.
Resources Mentioned:
Sarah Kreps - https://www.linkedin.com/in/sarah-kreps-51a3b7257/
Cornell - https://www.linkedin.com/school/cornell-university/
Sarah Kreps’ paper for the Brookings Institution - https://www.brookings.edu/articles/democratizing-harm-artificial-intelligence-in-the-hands-of-non-state-actors/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
Discussions on AI Global Governance - https://www.american.edu/sis/news/20230523-four-questions-on-ai-global-governance-following-the-g7-hiroshima-summit.cfm
Sarah Kreps - Cornell University -
https://government.cornell.edu/sarah-kreps
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I’m joined by Professor Ronald Arkin, a renowned expert in robotics and roboethics from the Georgia Institute of Technology. Our discussion focuses on AI and robotics. We explore the ethical implications and the necessity for regulatory frameworks that ensure responsible development and deployment.
Key Takeaways:
(02:40) Ethical guidelines for AI and robotics.
(03:19) IEEE’s role in creating soft law guidelines.
(06:56) How robotics has been overshadowed by large language models.
(10:13) The necessity of oversight and compliance in AI development.
(15:30) Ethical considerations for emotionally expressive robots.
(23:41) Liability frameworks for ethical lapses in robotics.
(27:43) The debate on open-sourcing robotics software.
(29:52) The impact of robotics on workforce and employment.
(33:37) Human rights implications in robotic deployment.
(42:55) Final insights on cautious advancement in AI regulation.
Resources Mentioned:
Ronald Arkin - https://sites.cc.gatech.edu/aimosaic/faculty/arkin/
Ronald Arkin | LinkedIn - https://www.linkedin.com/in/ronald-arkin-a3a9206/
Georgia Tech Mobile Robot Lab - https://sites.cc.gatech.edu/ai/robot-lab/
Georgia Institute of Technology - https://www.linkedin.com/school/georgia-institute-of-technology/
IEEE Standards Association - https://standards.ieee.org/
United Nations Convention on Certain Conventional Weapons - https://treaties.un.org/pages/ViewDetails.aspx?chapter=26&clang=_en&mtdsg_no=XXVI-2&src=TREATY
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I welcome Steve Mills, Global Chief AI Ethics Officer for Boston Consulting Group and Global AI Lead for the Public Sector. Steve shares insights into the intersection of AI innovation and ethical responsibility, guiding us through the often-confusing topic of AI regulation and ethics.
Key Takeaways:
(00:26) The role clear regulations play in fostering innovation.
(02:43) The importance of consultation with industry to set achievable regulations.
(04:07) Addressing the uncertainty surrounding AI regulation.
(06:19) The necessity of sector-specific AI regulations.
(07:33) The debate over establishing a separate AI regulatory body.
(09:22) Adapting AI policy to keep pace with technological advancements.
(11:40) Enhancing AI literacy and upskilling the workforce.
(13:06) Ethical considerations in AI deployment, focusing on trustworthiness and harmlessness.
(15:01) Strategies for ensuring AI systems are fair and equitable.
(20:10) The discussion on open-source AI and combating monopolies.
(22:00) The importance of transparency in AI usage by companies.
Resources Mentioned:
Steve Mills - https://www.linkedin.com/in/stevndmills/
Boston Consulting Group - https://www.linkedin.com/company/boston-consulting-group/
Responsible AI Ethics - https://www.bcg.com/capabilities/artificial-intelligence/responsible-ai
Study on the impact of AI in the workforce - https://www.bcg.com/publications/2022/a-responsible-ai-leader-does-more-than-just-avoiding-risk
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I welcome Kai Zenner, Head of Office and Digital Policy Advisor at the European Parliament. We discuss the complexities and challenges of artificial intelligence, focusing especially on the legislative efforts within the EU to regulate AI technologies.
Key Takeaways:
(01:36) Diverse perspectives in AI legislation play a significant role.
(02:34) The EU AI Act’s status and its risk-based, innovation-friendly approach.
(07:11) The recommendation for a vertical, industry-specific approach to AI legislation.
(08:32) Measures in the AI Act to prevent AI power concentration and ensure transparency.
(11:50) The global approach of the EU AI Act and its focus on international alignment.
(14:28) Ethical considerations in AI development addressed by the AI Act.
(16:21) Implementation and enforcement mechanisms of the EU AI Act.
(23:31) The involvement of industry experts, researchers and civil society in developing the AI Act.
(29:51) The importance of educating the public on AI issues.
(33:12) Concerns about deepfake technology and election interference.
Resources Mentioned:
Kai Zenner - https://www.linkedin.com/in/kzenner/?originalSubdomain=be
European Parliament - https://www.linkedin.com/company/european-parliament/
EU AI Act - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I’m joined by Lexy Kassan, Lead Data and AI Strategist at Databricks and Founder and Host of the Data Science Ethics Podcast. Lexy brings a wealth of knowledge from her dual role as an AI ethicist and industry insider, providing an in-depth perspective on how legislation can shape the future of AI without curbing its potential.
Key Takeaways:
(02:44) The global impact of the EU AI Act.
(03:46) The necessity for risk-based AI model assessments.
(08:20) Ethical challenges hidden within AI applications.
(11:45) Strategies for inclusive AI benefiting marginalized communities.
(13:29) Core ethical principles for AI systems.
(19:50) The complexity of creating unbiased AI data sets.
(21:58) Categories of unacceptable risks in AI according to the EU Act.
(27:18) Accountability in AI deployment.
(30:53) The role of open-source models in AI development.
(36:24) Businesses seek clear regulatory guidelines.
Resources Mentioned:
Lexy Kassan - https://www.linkedin.com/in/lexykassan/?originalSubdomain=uk
Data Science Ethics Podcast - https://www.linkedin.com/company/dsethics/
EU AI Act - https://artificialintelligenceact.eu/
Databricks - https://www.databricks.com/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
In a world racing toward the development of Artificial General Intelligence (AGI), the balance between innovation and existential risk becomes a pivotal conversation. In this episode, I’m joined by Otto Barten, Founder of the Existential Risk Observatory. We focus on the critical issue of AGI and its potential to pose existential risks to humanity. Otto shares valuable insights into the necessity of global policy innovation and raising public awareness to navigate these uncharted waters responsibly.
Key Takeaways:
(00:18) Public awareness of AI risks is rising rapidly.
(01:39) The Existential Risk Observatory’s mission is to mitigate human extinction risks.
(02:51) The European Union’s political consensus on the EU AI Act.
(04:11) Otto explains multiple AI threat models leading to existential risks.
(07:01) Why distinguish between AGI and current AI capabilities?
(09:18) Sam Altman and Mark Zuckerberg made recent statements on AGI.
(12:15) The potential dangers of open-sourcing AGI.
(14:17) The current regulatory landscapes and potential improvements.
(17:01) The concept of a “pause button” for AI development is introduced.
(20:13) Balancing AI development with ethical considerations and existential risks.
(23:51) Increasing public and legislative awareness of AI risks.
(29:01) The significance of transparency and accountability in AI development.
Resources Mentioned:
Otto Barten - https://www.linkedin.com/in/ottobarten/?originalSubdomain=nl
Existential Risk Observatory - https://www.linkedin.com/company/existential-risk-observatory/
European Union AI Act -
The Bletchley Process for global AI safety summits -
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I'm joined by Daniel Jeffries, Managing Director of the AI Infrastructure Alliance and CEO of Kentauros, to explore the complexities of AI's potential and the critical need for balanced, forward-thinking legislation.
Key Takeaways:
(02:05) Recent executive orders on AI, watermarking and model size regulation.
(03:54) Autonomous weapons and the need for regulation in areas exempted by governments.
(07:01) Liability in AI-induced harm and the challenge of assigning responsibility.
(07:52) The rapid evolution of AI and the legislative challenge to keep pace.
(10:37) The risk of regulatory capture and the importance of preventing AI monopolies.
(13:29) The role of open source in fostering innovation.
(16:32) Skepticism towards the feasibility of a global consensus on AI regulation.
(18:21) Advocacy for industry-specific regulations, emphasizing use-case and industry nuances.
(22:33) Recommendations for policymakers to focus on real-world problems.
Resources Mentioned:
Daniel Jeffries - https://www.linkedin.com/in/danjeffries/
AI Infrastructure Alliance - https://www.linkedin.com/company/ai-infrastructure-alliance/
Kentauros - https://www.linkedin.com/company/kentauros-ai/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I welcome Alex Swartsel, Managing Director of Insights at JFFLabs. We discuss AI’s role in transforming the employment landscape, highlighting the delicate balance between leveraging AI for growth and mitigating its potential disruptions.
Key Takeaways:
(00:16) AI’s transformative impact on employment.
(02:35) The role AI plays in job transformation and skill enhancement.
(04:30) The automation and augmentation of tasks by AI.
(06:10) Rethinking education and skill development in the age of AI.
(09:22) The significance of soft skills in conjunction with technical knowledge.
(11:00) AI’s potential to customize learning experiences.
(17:20) The pivotal role of community colleges in workforce training.
(21:33) The imperative of reskilling and the government’s role.
(29:51) Using AI for personalized education and career guidance.
(35:09) Promoting AI as a tool for human advancement.
Resources Mentioned:
Alex Swartsel - https://www.linkedin.com/in/alexswartsel/
JFFLabs’ New Center for Artificial Intelligence and the Future of Work - https://www.jff.org/
The AI-Ready Workforce report - https://info.jff.org/ai-ready
IMF Report on AI’s Impact on Jobs - https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2024/01/14/Gen-AI-Artificial-Intelligence-and-the-Future-of-Work-542379
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I'm joined by Professor Avi Loeb, Professor of Science at Harvard University, Director of the Institute for Theory and Computation within the Harvard-Smithsonian Center for Astrophysics, Head of the Galileo Project, Chair of Harvard's Department of Astronomy and best-selling author. Avi provides an astrophysicist's perspective on the ethical and regulatory frameworks necessary to ensure the responsible use of artificial intelligence.
Key Takeaways:
(00:36) The essential role of academia in fostering dialogue across differing viewpoints.
(06:58) Professor Loeb's concerns about AI's unpredictability.
(09:18) The importance of training AI systems with value-aligned datasets to moderate societal risks.
(10:59) Assigning responsibility for AI's actions.
(14:29) The need for international treaties to regulate AI's use in national security and warfare.
(17:58) Addressing internal disinformation and the role of AI in amplifying societal divisions.
(22:40) Engaging the public in AI regulation discussions to ensure diverse perspectives.
(26:37) The potential for AI to revolutionize space exploration and decision-making in remote environments.
Resources Mentioned:
Harvard University's Galileo Project - https://projects.iq.harvard.edu/galileo/home
Rubin Observatory - https://rubinobservatory.org/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
In this latest episode, I'm joined by Timothy Bean, President and COO of Fortem Technologies, to explore the intricate interplay between artificial intelligence, national security and the legislative landscape that surrounds it.
Key Takeaways:
(02:42) The evolution of national security tools and the advent of AI.
(03:49) The importance of data privacy in AI legislation and national security.
(05:07) The challenges of regulating AI in a rapidly advancing technological landscape.
(10:13) How legislative bodies should adapt and embrace AI to keep pace with technological advancements.
(12:13) The impending impact of quantum computing on AI and national security.
(15:38) The US faces an arms race in AI and quantum computing against global competitors like China and Russia.
(17:25) Public-private partnerships in enhancing national security through AI.
(18:39) The role of transparency and accountability in AI applications for national security.
(22:16) Debating the merits of open-sourcing AI models in the context of national security.
(24:55) The significance of educating the public on data privacy and the potential of AI.
Resources Mentioned:
https://www.linkedin.com/in/meghalred/
https://www.linkedin.com/company/fortem-technologies/
President Biden’s Executive Order on AI -
https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
Department of Defense AI Ethics Principles -
https://www.ai.mil/blog_02_26_21-ai_ethics_principles-highlighting_the_progress_and_future_of_responsible_ai.html
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I'm thrilled to chat with Nathan Grant, Policy Fellow of TeachAI, an initiative championed by notable organizations including Code.org, ETS, ISTE, Khan Academy and the World Economic Forum. Nathan shares invaluable insights on integrating AI education within K-12, emphasizing the importance of a balanced approach to harness AI's potential while mitigating its risks.
Key Takeaways:
(01:16) Introduction of Nathan Grant and the TeachAI initiative.
(02:14) TeachAI's broad coalition, including tech giants and educational stakeholders.
(03:45) Perspectives on President Biden's Executive Order on AI.
(06:27) AI literacy's critical role across all subjects in K-12 education.
(07:30) Addressing the digital and AI divide for equitable education.
(09:03) Engaging students in the AI legislation dialogue.
(12:44) Concerns over banning AI tools like ChatGPT in schools.
(14:33) The risk of AI tool monopolization by a few large tech companies.
(16:00) The importance of education in demonstrating AI's potential and ensuring its responsible use.
(18:59) The potential for standardized AI education guidelines.
Resources Mentioned:
Nathan Grant - https://www.linkedin.com/in/nathan-grant-t/
Code.org - https://www.linkedin.com/company/code-org/
President Biden's Executive Order on AI - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
TeachAI initiative - https://www.teachai.org/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
In a world where AI shapes our daily lives, ethical considerations are paramount. In this episode, I have the pleasure of speaking with Beth Rudden, CEO of Bast AI and a trailblazer in AI ethics. Her journey from IBM to leading Bast AI offers a unique lens on the intricate relationship between AI, ethics and technology.
Key Takeaways:
(01:25) Insights into diverse perspectives on AI regulation.
(02:24) Beth discusses the ethical risks in AI development.
(03:38) The importance of education in AI ethics and technology.
(05:05) Emphasizing explainable AI in regulation.
(06:35) Discussing the role of data privacy and dignity.
(09:01) The necessity of transparency in AI systems.
(12:16) The impact of AI on social media and communication.
(15:33) Core ethical principles in AI development.
(19:25) The role of accountability in AI systems.
(22:09) The concept of AI as a community utility.
(26:39) Beth's views on creating unbiased AI systems.
(30:17) The importance of human rights and privacy in AI.
(34:27) Addressing AI's role in societal issues.
Resources Mentioned:
Beth Rudden - https://www.linkedin.com/in/brudden/
Joy Buolamwini's "Unmasking AI" - https://www.penguinrandomhouse.com/books/670356/unmasking-ai-by-joy-buolamwini/
EU AI Act - https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
Bast AI Website - https://bast.ai/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Creating a safe and ethical AI system starts at its conception. On this episode, I have the pleasure of speaking with Haniyeh Mahmoudian, Ph.D., distinguished Global AI Ethicist at DataRobot and Advisor to NAIAC (National AI Advisory Committee). We discuss AI regulation, ethical considerations and the importance of education around responsible use of AI.
Key Takeaways:
(02:09) Insights into President Biden’s AI Executive Order.
(04:32) The importance of private-public partnerships in AI education and workforce upskilling.
(06:35) The need for realistic job qualifications in AI-related fields.
(08:23) The EU AI Act, its risk framework for AI use cases and the need for flexible and adaptable legislative frameworks in AI regulation.
(11:42) The US's approach to AI regulation compared to the EU.
(15:59) Ethical risks in AI development, particularly the lack of education in AI literacy.
(18:55) Ensuring historically marginalized communities can participate in and benefit from AI advancements.
(21:04) The need for robust governance processes and accountability at every stage of AI development and deployment.
(23:53) Challenges and benefits of democratizing AI technology access.
(25:50) The necessity of companies disclosing their use of AI systems to end-users.
(27:12) Concerns about the impact of AI, particularly deepfakes, on democracy.
Resources Mentioned:
Haniyeh Mahmoudian - https://www.linkedin.com/in/haniyeh-mahmoudian-ph-d-78a18072
DataRobot - https://www.linkedin.com/company/datarobot
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
EU AI Act - https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
National AI Advisory Committee Recommendations - https://ai.gov/naiac/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
This era of rapid technological advancement can make finding the equilibrium between innovation and responsible governance difficult. On this episode, I’m joined by Dr. Ravit Dotan, Founder and CEO of TechBetter, Responsible AI Advocate of Bria and AI Ethicist. We discuss the complexities of AI regulation in our modern world. We also focus on the pivotal role policies and ethics play in steering the course of AI toward a future that benefits all.
Key Takeaways:
(01:18) Discussing President Biden’s Executive Order on AI and its implications for a new era of regulation.
(03:02) Contrasting the divergent paths of the US and UK in AI regulation.
(07:18) Investigating AI regulation’s influence on innovation.
(08:22) Assessing the ethical risks of misinformation within AI systems.
(12:13) Addressing the amplification of biases in AI decision-making.
(16:42) The challenge of achieving fairness in AI.
(17:40) The necessity of banning harmful AI applications.
(19:52) The role of AI ethics officers in organizations.
(21:30) Analyzing responsibility in AI-related incidents.
(24:26) The influence of major tech companies on AI’s direction.
(30:50) Discussing strategies against AI deepfakes in political campaigns.
Resources Mentioned:
Dr. Ravit Dotan - https://www.linkedin.com/in/ravit-dotan/
TechBetter - https://www.linkedin.com/company/techbetter/
Bria - https://www.linkedin.com/company/briaai/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
EU AI Act - https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode of Regulating AI: Innovate Responsibly, I am thrilled to host Esha Bhandari, Deputy Project Director at the ACLU (American Civil Liberties Union), who shares her expertise in AI and civil liberties. Esha is also a Member of the Law Enforcement Subcommittee of the National AI Advisory Committee and an Adjunct Professor of Clinical Law at the New York University School of Law.
We explore the complex relationship between artificial intelligence and civil liberties, discussing the implications of AI regulation, the challenges posed by algorithmic bias and the potential impact of AI on various sectors, including law enforcement, housing and employment.
Key Takeaways:
(01:59) Esha’s perspective on President Biden’s Executive Order on AI, emphasizing the inclusion of civil liberties and civil rights.
(04:01) Challenges in law enforcement and national security contexts regarding AI.
(07:56) A discussion on the potential of a separate government agency for AI regulation.
(10:41) The balancing act between preventing AI from replicating societal biases and fostering innovation.
(12:53) The question of liability in AI systems: developer, deployer, or user?
(14:21) Keeping pace with rapid AI advancements in policy and legislation.
(18:51) The ACLU’s stance on open-source technology and AI.
(25:01) The role AI regulation plays on a global scale.
(26:44) Addressing the potential impacts of AI on upcoming elections and protecting civil liberties.
Resources Mentioned:
Esha Bhandari - https://www.linkedin.com/in/eshabhandari/
ACLU (American Civil Liberties Union) - https://www.linkedin.com/company/aclu/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
Discussions on AI Regulation in the EU - https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I'm delighted to be joined by a leading mind in AI, Stuart Russell, Professor of Computer Science at UC Berkeley; Former Chair of the Electrical Engineering and Computer Science Program at UC Berkeley; Holder of the Smith-Zadeh Chair in Engineering; Director of the Center for Human-Compatible AI; Author of Artificial Intelligence: A Modern Approach, which is currently part of the curriculum in 1,500 universities in 135 countries and translated into 20 languages.
Our conversation ventures into the depths of AI's potential, its impact on society and the critical role of legislation in shaping a safe and prosperous AI-powered future.
Key Takeaways:
(00:56) Introduction of Professor Stuart Russell and his significant contributions to AI.
(02:22) Analysis of the Biden Executive Order on AI and its limitations.
(03:49) Evolution and current status of the EU AI Act.
(07:31) The paradox of open-source AI in regulatory contexts.
(08:31) The challenge of controlling AI systems that are more powerful than humans.
(13:08) The necessity of proactive safety measures in AI development.
(15:12) The potential risks and concerns around AI agents.
(17:02) Balancing innovation and regulation in AI.
(19:20) Adapting AI legislation to technological advancements.
(21:49) The need for a dedicated regulatory agency for AI.
(26:08) Global collaboration on AI safety and national security.
(30:33) Public perception and education on AI safety.
(34:23) The role of AI in national security and ethical concerns.
(37:04) The impact of AI and deepfakes on the 2024 elections.
Resources Mentioned:
Stuart Russell - https://www.linkedin.com/in/stuartjonathanrussell/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
EU AI Act - https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I'm joined by Congresswoman Anna Eshoo, Co-Chair of the Congressional AI Caucus. Time magazine has selected Anna as one of the 100 most influential people in AI, and I’m delighted to hear her invaluable insights into the legislative challenges and opportunities in the world of AI.
Key Takeaways:
(01:23) The role of the National AI Research Resource in President Biden’s executive order.
(03:20) The urgency for Congress to enact durable AI statutes.
(05:31) Objectives of the Create AI Act in making AI accessible to diverse sectors.
(08:03) The dynamic nature of AI policy and state-level legislation's role.
(10:43) The security implications of open-source AI models.
(12:18) Addressing the threat of deepfakes in elections.
(14:29) Strategies for workforce reskilling and attracting global AI talent.
(18:15) Democratizing AI to avert monopolistic trends.
(20:38) US Rep. Eshoo's predictions on the AI legislative timeline.
Resources Mentioned:
Anna Eshoo - https://www.linkedin.com/in/anna-eshoo-b0392095/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
National AI Research Resource - https://www.whitehouse.gov/ostp/news-updates/2023/01/24/national-artificial-intelligence-research-resource-task-force-releases-final-report/
Keep STEM Talent Act 2021 - https://www.congress.gov/bill/117th-congress/house-bill/5924?q=%7B%22search%22%3A%5B%22h.r.+5924%22%2C%22h.r.%22%2C%225924%22%5D%7D&s=1&r=2
Create AI Act - https://eshoo.house.gov/sites/evo-subsites/eshoo.house.gov/files/evo-media-document/eshoo_043_xml.pdf
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Navigating the labyrinth of AI policy is a daunting task, especially for startups. In this episode, I explore this complex world with Nathan Lindfors, who brings unique insights from his role as Policy Director of Engine, an organization at the forefront of advocating for startup interests in the AI realm.
Key Takeaways:
(01:40) The mission and goals of Engine in advocating for startups.
(02:40) How startups differ from companies like OpenAI and Anthropic in the AI space.
(04:22) The role of Engine in educating startups on AI policy developments.
(05:33) Nathan’s take on President Biden’s Executive Order on AI.
(09:12) Concerns over regulatory capture impacting startup innovation.
(10:28) The debate around open-sourcing AI models.
(13:17) Addressing the risks of AI tools falling into the hands of bad actors.
(16:46) Liability issues in AI and their impact on startups.
(19:50) Preparing the workforce for the future of AI.
(23:25) The need for transparent AI usage disclosures by companies.
(25:28) Discussion on the complexities of global versus regional AI regulations.
Resources Mentioned:
Nathan Lindfors - https://www.linkedin.com/in/nathan-lindfors-24032b150/
Engine - https://www.linkedin.com/company/engine-advocacy/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
As artificial intelligence continues to revolutionize our society, the need for thoughtful regulation becomes increasingly crucial. In this episode, I have the honor of discussing these challenges with Senator Pete Ricketts from Nebraska. With his background in governance and entrepreneurship, Senator Ricketts offers invaluable insights into the legislative aspects of AI. Together, we delve into how to harness AI responsibly for the benefit of all.
Key Takeaways:
(01:45) Introduction of a bill for watermarking AI-generated materials.
(03:15) Addressing the concerns of deepfakes and intellectual property in the AI sphere.
(04:01) AI’s transformative potential and the critical need for careful regulation.
(05:19) The impact of AI on national security and election processes.
(05:44) The importance of including small businesses and educational institutions in AI legislation.
(07:00) The need for federal preemption over state laws to avoid a patchwork of AI regulations.
(08:08) The role of workforce reskilling and talent attraction in AI development.
(10:03) Predictions for the timeline of comprehensive AI legislation in Congress.
Resources Mentioned:
Senator Ricketts’ AI Watermarking Bill - https://www.ricketts.senate.gov/press-releases/ricketts-introduces-bill-to-combat-deepfakes-require-watermarks-on-a-i-generated-content/
National Security Implications of AI - https://www.csis.org/analysis/addressing-national-security-implications-ai
AI’s Role in Elections - https://www.brookings.edu/articles/how-ai-will-transform-the-2024-elections/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Navigating the complexities of AI isn’t just about technology. It’s about sculpting our future. In this episode, I’m joined by Congressman Jay Obernolte, representing California’s 23rd District and serving as Vice Chair of the Congressional AI Caucus. With a rich background in AI and a keen eye for policy, Congressman Obernolte offers invaluable insights into the intricate dance of AI innovation and regulation.
Key Takeaways:
(02:06) Assessing President Biden’s Executive Order on AI and concerns of regulatory overreach.
(04:54) Exploring the Create AI Act’s goal to democratize AI research across academia.
(06:41) Addressing the risk of regulatory capture in the AI industry.
(08:57) Evaluating the role of AI in hiring and the inherent challenges of bias.
(11:05) Debating the need for a new AI regulatory structure.
(14:25) Delving into the implications of open-source AI.
(16:08) Highlighting the role of AI in spreading misinformation and the importance of transparency.
(18:19) Emphasizing the need for diverse perspectives in shaping AI regulation.
(19:44) Advocating for federal over regional or global AI regulation models.
(21:42) Offering predictions on the timeline and direction of comprehensive AI legislation in Congress.
Resources Mentioned:
Congressman Jay Obernolte - https://www.linkedin.com/in/jayobernolte/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/
Create AI Act - https://www.congress.gov/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Are we ready for the AI revolution? How do we balance innovation with regulation? On this episode, I’m joined by Demetrios Brinkmann, Founder and CEO of the MLOps Community, to explore AI's impact on global economies, security and workforce, and the challenges in creating effective regulatory frameworks.
Key Takeaways:
(00:51) The dual role of AI in boosting GDP and posing threats to the workforce and national security.
(01:10) The US Congress' efforts to create a legislative framework for AI.
(02:14) The significance of the MLOps community in AI production.
(03:05) The impact of global AI regulations on the MLOps community.
(03:40) President Biden's Executive Order on AI and the challenges in regulating large language models.
(08:01) The EU's AI Act focusing on risk management and post-market monitoring.
(14:41) Identifying key risks from AI that require regulation.
(21:24) The debate over open-sourcing LLMs.
(26:15) Concerns about regulatory capture by big tech companies.
(30:38) The importance of global or regional AI regulations.
Resources Mentioned:
Demetrios Brinkmann - https://www.linkedin.com/in/dpbrinkm/
MLOps Community - https://ai-infrastructure.org/mlops-community-now/
President Biden's Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
EU AI Act - https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
In this episode, I’m joined by former Governor Terry McAuliffe, who shares his insights on the future of AI and its impact on job creation, national security and global technological dominance. With his extensive experience in both politics and entrepreneurship, Governor McAuliffe provides a unique perspective on the steps the United States must take to lead in AI innovation and regulation.
Key Takeaways:
(02:08) The significance of President Biden’s Executive Order on AI.
(03:46) The need for long-term, consistent AI standards and legislation.
(04:25) Addressing public concerns about AI and job displacement.
(06:16) The importance of establishing a regulatory agency for AI.
(07:37) Promoting AI education starting from kindergarten.
(09:18) Proposing a scholarship program for AI studies.
(10:19) AI’s role in maintaining global leadership and job growth.
(12:34) AI as a crucial aspect of national security.
Resources Mentioned:
President Biden’s Executive Order on AI
National Science Foundation (NSF)
National Institute of Standards and Technology (NIST)
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Individual progress in technology isn’t just about personal achievement; it’s about shaping the future for society. On this episode, I’m joined by Congressman Don Beyer, US Representative for Virginia’s 8th District and Vice Chair of the AI Caucus in the House of Representatives, who brings a unique perspective to the table with his dedication to understanding and shaping AI legislation.
Key Takeaways:
(01:29) Congressman Beyer’s unique approach to learning about AI.
(02:55) The significance of President Biden’s Executive Order on AI.
(03:46) The debate on creating a separate regulatory agency for AI.
(06:36) The importance of democratizing AI through legislation like the Create AI Act.
(08:46) The pros and cons of open-sourcing AI models.
(12:10) AI’s role in political advertising and the need for ethical considerations.
(16:22) How AI will impact workforce and immigration policies.
(20:12) The priorities for AI legislation in Congress.
Resources Mentioned:
Congressman Don Beyer - https://www.linkedin.com/in/don-beyer-6b444b4/
House of Representatives - https://www.linkedin.com/company/u.s.-house-of-representatives/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/
Create AI Act - https://www.congress.gov/
Discussions on AI with EU Parliamentarians - https://www.europarl.europa.eu/
National AI Research Resource - https://www.nsf.gov/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
The potential of AI is limitless, yet its implications are complex and multifaceted. Striking a balance between innovation and regulation is crucial for harnessing its benefits while safeguarding against risks.
In this episode, I sit down with US Congressman Raja Krishnamoorthi, representing Illinois’ 8th District, to delve deep into the world of AI, its possibilities, its dangers and how the US is positioning itself in this global race.
Key Takeaways:
(02:36) The necessity of AI regulation.
(03:06) Debating a potential AI regulatory agency.
(04:09) Concerns about global competitiveness, especially China’s AI advances.
(04:52) Introduction of the P.A.S.T. model for AI legislation: Privacy, Accountability, Security and Transparency.
(07:00) Concerns about regulatory capture by corporations and the need for diverse perspectives.
(08:35) Thoughts on open-sourcing large AI language models and implications.
(13:10) The geopolitical impact of AI development, especially in China’s context.
(15:48) Worries about deepfake technology and its election impact.
(21:34) Congressional challenges and ambitious goals for AI regulations, with potential timing considerations.
Resources Mentioned:
Raja Krishnamoorthi - https://www.linkedin.com/in/rajakrishnamoorthi/
US Congressman - https://www.linkedin.com/company/u.s.-house-of-representatives/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
A small service by I'm With Friends. Also available in English.