68 episodes • Length: 35 min • Weekly: Thursday
Welcome to the Regulating AI: Innovate Responsibly podcast with host and AI regulation expert Sanjay Puri. A pivotal leader at the intersection of technology, policy and entrepreneurship, Sanjay explores the intricate landscape of artificial intelligence governance on this podcast.
You can expect thought-provoking conversations with global leaders as they tackle the challenge of regulating AI without stifling innovation. With diverse perspectives from industry giants, government officials and civil liberty proponents, each episode explores key questions and actionable steps for creating a balanced AI-driven world.
Don’t miss this essential guide to the future of AI governance, with a fresh episode available every week!
The podcast Regulating AI: Innovate Responsibly is created by Sanjay Puri. The podcast and artwork on this page are embedded using the public podcast feed (RSS).
Join us for an insightful discussion on the intersection of AI and Green Technology as drivers of global progress and sustainable development. This roundtable features highlights from the Imperial Springs International Forum 2024, hosted by Club de Madrid, where over 130 leaders from 40+ countries gathered to explore the future of international cooperation and multilateralism.
Artificial Intelligence has immense potential, but it also carries risks — particularly when it comes to civil liberties. In this episode, I speak with Faiza Patel, Senior Director of the Liberty and National Security Program at the Brennan Center for Justice at NYU Law. Together, we explore how AI can be regulated to ensure fairness, accountability and civil rights, especially in the context of national security and law enforcement.
Key Takeaways:
(01:53) AI in national security, law enforcement and immigration contexts.
(05:00) The dangers of AI in government decisions, from immigration to surveillance.
(09:09) Long-standing issues with AI, including biased training data in facial recognition.
(12:55) The complexities of regulating AI-generated media, such as deepfakes, while protecting free speech.
(17:00) The need for transparency in AI systems and the importance of scrutinizing outputs.
(20:25) How marginalized communities are disproportionately affected by AI.
(23:30) Companies developing AI must embed civil rights principles into their products.
(26:45) Creating unbiased AI systems is a challenge, but necessary to avoid harm.
(29:58) The need for a dedicated regulatory body to oversee AI, especially in national security.
(34:00) AI’s potential impact on jobs and why policymakers need to prepare for labor disruption.
Resources Mentioned:
https://www.linkedin.com/in/faiza-patel-5a042816/
https://www.brennancenter.org/
President Biden’s Executive Order on AI -
https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
https://www.whitehouse.gov/ostp/ai-bill-of-rights/
Brennan Center - Faiza Patel -
https://www.brennancenter.org/experts/faiza-patel
National Security Carve-Outs Undermine AI Regulations -
https://www.brennancenter.org/our-work/analysis-opinion/national-security-carve-outs-undermine-ai-regulations
Senate AI Hearings Highlight Increased Need for Regulation -
https://www.brennancenter.org/our-work/analysis-opinion/senate-ai-hearings-highlight-increased-need-regulation
The Perils and Promise of AI Regulation -
https://www.brennancenter.org/our-work/analysis-opinion/perils-and-promise-ai-regulation
Advances in AI Increase Risks of Government Social Media Monitoring -
https://www.brennancenter.org/our-work/analysis-opinion/advances-ai-increase-risks-government-social-media-monitoring
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
In this episode of the RegulatingAI podcast, Sanjay Puri hosts an insightful discussion with Mr. Boris Tadić, former President of Serbia, to explore the profound implications of artificial intelligence (AI) on governance, society and global relations at the Imperial Springs International Forum 2024 in Madrid, Spain. From its potential to revolutionize education and development to concerns about its effects on democracy and societal values, this conversation delves deep into the opportunities and challenges AI presents.
Resources:
https://x.com/boristadic58
https://clubmadrid.org/who/members/tadic-boris/
https://en.wikipedia.org/wiki/Boris_Tadi%C4%87
In this episode, Tunisia’s Former Prime Minister, Mehdi Jomaa, shares his vision for the country’s potential to emerge as a leading technology hub in the Arab world and the Global South. With its strategic location bridging Africa, Europe and the Middle East, Tunisia is positioned to become a key player in the global technological revolution, particularly in artificial intelligence.
Resources:
https://www.linkedin.com/in/mehdi-jomaa-60a8333b/
https://x.com/Mehdi_Jomaa
https://www.facebook.com/M.mehdi.jomaa
https://clubmadrid.org/who/members/mehdi-jomaa/
The rapid rise of AI brings both extraordinary potential and profound risks, demanding urgent global collaboration to ensure its safe development. In this episode, I’m joined by Professor S. Alex Yang, Professor of Management Science and Operations at the London Business School, to explore the complexities of regulating AI, the challenges of international collaboration, and the potential existential risks posed by AI development. With his extensive experience in AI and risk management, Professor Yang provides unique insights into the future of AI governance.
Key Takeaways:
(02:12) Professor Yang’s early AI experiences and his value chain research.
(06:57) The biggest risks from AI, including existential risk and job displacement.
(11:42) The debate on AI nationalism and the preservation of cultural heritage.
(16:28) How China’s chip-making capacity could reshape AI competition.
(21:13) Open-source versus closed-source AI models and the risks involved.
(25:58) Why monitoring monopolies in AI is crucial for innovation.
(30:44) How content creators can benefit from AI and how copyright law is evolving.
(35:29) The importance of fair use standards for AI-generated content.
(40:14) Data aggregation and its future role in AI development.
(45:00) Professor Yang’s final thoughts on the need for agile, principle-based AI regulation.
Resources Mentioned:
https://www.linkedin.com/in/songayang/
London Business School | LinkedIn -
https://www.linkedin.com/school/london-business-school/
London Business School | Website -
https://www.london.edu/
https://worldcoin.org/
The Case for Regulating Generative AI Through Common Law -
https://www.project-syndicate.org/commentary/european-union-ai-act-could-impede-innovation-by-s-alex-yang-and-angela-huyue-zhang-2024-02
Generative AI and Copyright: A Dynamic Perspective -
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
AI presents endless opportunities, but its implications for privacy and governance are multifaceted. On this episode, I’m joined by Professor Norman Sadeh, a Computer Science Professor at Carnegie Mellon University, and Co-Founder and Co-Director of the Privacy Engineering Program. With years of experience in AI and privacy, he offers valuable insights into the complexities of AI governance, the evolving landscape of data privacy and why a multidisciplinary approach is vital for creating effective and ethical AI policies.
Key Takeaways:
(02:09) How Professor Sadeh’s work in AI and privacy began.
(05:30) The role privacy engineers play in AI governance.
(08:45) Why AI governance must integrate with existing company structures.
(12:10) The challenges of data ownership and consent in AI applications.
(15:20) Privacy implications of foundational models in AI.
(18:30) The limitations of current regulations like GDPR in addressing AI concerns.
(22:00) How user expectations shape the principles of AI governance.
(26:15) The growing debate around the need for specialized AI regulations.
(30:40) The role of transparency in AI governance for building trust.
(35:50) The potential impact of open-source AI models on security and privacy.
Resources Mentioned:
https://www.linkedin.com/in/normansadeh/
Carnegie Mellon University | LinkedIn -
https://www.linkedin.com/school/carnegie-mellon-university/
Carnegie Mellon University | Website -
https://www.cmu.edu/
https://artificialintelligenceact.eu/
General Data Protection Regulation (GDPR) -
https://gdpr-info.eu/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
In this inspiring episode, we explore how AI is not only transforming industries but also reshaping education and the future of work. Learn how diversity, AI skills, and youth empowerment are critical in building an ethical, AI-driven world.
Our guest, Elena Sinel, FRSA and Founder of Teens in AI, shares her mission to champion diversity and equip young people with the skills they need to thrive in the AI era. She discusses the importance of empowering youth to lead the way in creating ethical AI solutions for a better future.
In this thought-provoking episode, we explore the crucial role governments play in democratizing AI, ensuring its benefits reach all sectors of society. We discuss the ethical and governance challenges involved in shaping AI policy, as well as the philosophical underpinnings that drive this evolving landscape.
Our distinguished guest, Ted Lechterman, Holder of the UNESCO Chair in AI Ethics & Governance at IE University, provides critical perspectives on how governments can lead the way in creating inclusive, ethical AI policies that align with democratic values.
In this episode, we dive into the complexities of AI compliance and the challenges organizations face in navigating the evolving regulatory landscape, especially with the European AI Act. Learn how businesses can stay compliant while driving innovation in AI development.
Our guest, Sean Musch, Founder and CEO of AI & Partners, shares his expertise on the European AI Act and other regulatory frameworks shaping the future of AI. Discover practical strategies for navigating compliance while fostering responsible AI practices.
In this episode, we explore how geospatial data is being leveraged to improve crisis response efforts through the integration of AI. Learn about the groundbreaking work of the Humanitarian OpenStreetMap Team in mapping vulnerable areas and using AI to support humanitarian missions in real-time.
Our guest, Paul Uithol, Director of Humanitarian Data at the Humanitarian OpenStreetMap Team, shares his insights into how geospatial data and AI are transforming disaster management and crisis response. Discover the innovative strategies that enable faster, more accurate responses to humanitarian challenges.
In this episode, we explore the complexities of global AI regulation and enforcement, focusing on how governments and organizations can balance the need for compliance while fostering innovation. We dive into the challenges of supervising AI across different legislative frameworks and how these regulations shape the future of AI technologies.
Our featured guest, Huub Janssen, Manager on AI at the Ministry of Economic Affairs and the Dutch Authority for Digital Infrastructure, The Netherlands, shares his insights on navigating the regulatory landscape and driving responsible AI development.
In this insightful episode, we explore the intersection of AI governance and legal innovation. Join us as we discuss the critical challenges and opportunities that arise as organizations strive to implement responsible AI practices in an ever-evolving regulatory landscape.
Our esteemed guest, Hadassah Drukarch, Director of Policy and Delivery at the Responsible AI Institute, shares her expertise on how to navigate the complexities of AI governance, legal frameworks, and the importance of fostering ethical AI practices.
In this compelling episode, we explore how artificial intelligence is transforming disaster response efforts, especially for vulnerable communities impacted by crises. Join us as we discuss innovative strategies that leverage AI to enhance humanitarian action and build more resilient systems.
Our special guest, Katya Klinova, Head of AI and Data Insights for Social and Humanitarian Action at the United Nations Secretary-General's Innovation Lab, shares invaluable insights into the role of AI in disaster management and its potential to bridge critical gaps in support for those most in need.
In the latest episode of the RegulatingAI Podcast at the World Summit AI on October 9, 2024, the discussion dives deep into the critical AI competencies driving organizational transformation. The episode explores how AI revolutionizes the workforce through augmentation, reskilling, and enhancing human-computer interaction, all while promoting ethical AI hiring practices.
Special guest Dr. Kevin J. Jones, Director at the IU Columbus Center for Teaching and Learning and Associate Professor of Management, shares insights on how leaders can leverage AI to enhance their organizations and stay ahead of the curve.
On this episode, I’m joined by Ruslan Salakhutdinov, UPMC Professor of Computer Science at Carnegie Mellon University. Ruslan discusses the pressing need for AI regulation, its potential for societal transformation and the ethical considerations of its future development, including how to safeguard humanity while embracing innovation.
Key Takeaways:
(02:14) The need to regulate AI to prevent monopolization by large corporations.
(06:03) The dangers of AI-driven misinformation and its impact on public opinion.
(10:32) The risks AI poses in job displacement across multiple industries.
(14:22) How deepfake technology is evolving and its potential consequences.
(18:47) The challenge of balancing AI innovation with data privacy concerns.
(22:10) AI’s growing role in military applications and the need for careful oversight.
(26:05) How AI agents could autonomously interact and the risks involved.
(31:30) The potential for AI to surpass human performance in certain professions.
(37:14) Why international collaboration is critical for effective AI regulation.
(42:56) The ethical dilemmas surrounding AI’s influence in healthcare and decision-making.
Resources Mentioned:
https://www.linkedin.com/in/russ-salakhutdinov-53a0b610/
https://openai.com/index/sora/
Geoffrey Hinton and his contributions to AI -
https://www.linkedin.com/pulse/geoffrey-hinton-alan-francis/
https://www.cmu.edu
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
The race for AI leadership is not just about technology; it’s a battle of values and national security that will shape our future. In this episode, I’m joined by Senator Todd Young, United States Senator (R-Ind.) at the United States Senate. He shares insights into AI policy, national security and the steps needed to maintain US leadership in this critical field.
Key Takeaways:
(01:54) The bipartisan effort behind the Senate AI Working Group.
(03:34) How existing laws adapt to an AI-enabled world.
(05:17) Identifying AI risks and regulatory barriers.
(07:41) The role of government expertise in AI-related areas.
(10:12) Understanding the significance of the $32 billion AI public investment.
(13:17) Applying AI innovations across various industries.
(15:27) The impact of China on AI competition and US strategy.
(17:44) Why semiconductors are vital to AI development.
(20:26) Balancing open-source and closed-source AI models.
(22:51) The need for global AI standards and harmonization.
Resources Mentioned:
https://www.linkedin.com/in/senator-todd-young/
https://www.young.senate.gov/
https://www.linkedin.com/company/ussenate/
National AI Research Resource -
https://nairrpilot.org/
https://www.whitehouse.gov/briefing-room/statements-releases/2022/08/09/fact-sheet-chips-and-science-act-will-lower-costs-create-jobs-strengthen-supply-chains-and-counter-china/
https://www.young.senate.gov/wp-content/uploads/One_Pager_Roadmap.pdf
National Security Commission on Artificial Intelligence -
https://reports.nscai.gov/final-report/introduction
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
AI and RNA are revolutionizing drug discovery, promising a future where life-saving medications are developed faster and at lower costs.
In this episode, Raphael Townshend, PhD, Founder and CEO of Atomic AI, sits down with me to discuss the intersection of AI and RNA in drug development. We explore how AI technologies can reduce the cost and time required for clinical trials and target previously incurable diseases.
Key Takeaways:
(02:15) Raphael's background in AI and biology, and founding of Atomic AI.
(05:59) Reducing time and failure rate in drug discovery with AI.
(07:16) AlphaFold's breakthrough in understanding molecular shapes using AI.
(09:23) Ensuring transparency and accountability in AI-driven drug discovery.
(12:22) Navigating intellectual property concerns in healthcare AI.
(15:34) Integrating AI with wet lab testing for accurate drug discovery results.
(17:31) Balancing intellectual property and open research in biotech.
(20:02) Addressing data privacy and security in AI algorithms.
(22:30) Educating users and healthcare professionals about AI in drug discovery.
(24:48) Collaborating with global regulators for AI-driven drug discovery innovations.
Resources Mentioned:
https://www.linkedin.com/in/raphael-townshend-9154962a/
Atomic AI | LinkedIn -
https://www.linkedin.com/company/atomic-ai-rna/
https://deepmind.google/technologies/alphafold/
https://atomic.ai/
https://www.biospace.com/atomic-ai-creates-first-large-language-model-using-chemical-mapping-data-to-optimize-rna-therapeutic-development
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
In this episode, I’m joined by Dr. Rashawn Ray, Vice President at the American Institutes for Research (AIR) and Executive Director of AIR Equity Initiative, Professor of Sociology at the University of Maryland and Senior Fellow at The Brookings Institution. Dr. Ray’s innovative work lies at the powerful intersection of policing, technology and social equity, where he explores how AI can be designed and implemented to enhance fairness, reduce inequality and ultimately be a force for positive change in both local communities and the broader world.
Key Takeaways:
(01:00) Regulating AI without stifling innovation is crucial.
(07:06) How virtual reality enhances police training by addressing implicit bias.
(12:22) The impact of diverse teams on equitable AI development.
(19:36) Overcoming challenges in implementing VR training in smaller law enforcement agencies.
(25:50) Tech companies collaborating on socially impactful AI projects is vital.
(31:55) Community involvement is critical in shaping AI and VR technologies.
(36:21) The role of DEI initiatives in improving AI’s fairness and effectiveness.
(42:09) The future of AI legislation and its potential to democratize technology.
Resources Mentioned:
https://www.linkedin.com/in/sociologistray/
AIR | Website - https://www.air.org/
AIR Equity Initiative | LinkedIn -
https://www.linkedin.com/showcase/air-equity-initiative/about/
AIR Equity Initiative Website -
https://www.air.org/air-equity-initiative-bridge-more-equitable-world
Lab for Applied Social Science Research -
https://socy.umd.edu/centers/lab-applied-social-science-research-%28lassr%29
https://www.brookings.edu
https://www.air.org/experts/person/rashawn-ray
https://www.rashawnray.com/
“Extracting Protest Events from Newspaper Articles with ChatGPT” (working paper) - https://uncmap.org/publication/chat-wp/
“5 questions policymakers should ask about facial recognition, law enforcement and algorithmic bias” - https://www.brookings.edu/articles/5-questions-policymakers-should-ask-about-facial-recognition-law-enforcement-and-algorithmic-bias/
“Examining equity in transportation safety enforcement” -
https://www.brookings.edu/articles/examining-equity-in-transportation-safety-enforcement/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I’m joined by Senator Mike Rounds, US Senator for South Dakota and Co-Chair of the Senate AI Caucus, to discuss how the US can regulate AI responsibly while fostering innovation. With his extensive experience in both state and federal government, Senator Rounds shares his insights into the Bipartisan Senate AI Working Group and its roadmap for AI policy.
Key Takeaways:
(01:23) The Bipartisan Senate AI Working Group aims to balance AI regulation and innovation.
(05:07) Why intellectual property protections are essential in AI development.
(07:27) National security implications of AI in weapons systems and defense.
(09:19) The potential of AI to revolutionize healthcare through faster drug approvals.
(10:55) How AI can aid in detecting and combating biological threats.
(15:00) The importance of workforce training to mitigate AI-driven job displacement.
(19:05) The role of community colleges in preparing the workforce for an AI-driven future.
(24:00) Insights from international collaboration on AI regulation.
Resources Mentioned:
Senator Mike Rounds Homepage -
https://www.rounds.senate.gov/
https://www.rounds.senate.gov/newsroom/press-releases/rounds-introduces-artificial-intelligence-policy-package
https://www.linkedin.com/company/medshield-llc
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
In this episode, I’m joined by Charity Rae Clark, Vermont Attorney General, and Monique Priestley, Vermont State Representative. They have been instrumental in shaping Vermont’s legislative approach to data privacy and AI. We dive into the challenges of regulating AI to keep citizens safe, the importance of data minimization and the broader implications for society.
Key Takeaways:
(02:10) “Free” apps and websites take payment with your data.
(08:15) The Data Privacy Act includes stringent provisions to protect children online.
(10:05) Protecting consumer privacy and reducing security risks.
(15:29) Vermont’s legislative journey includes educating lawmakers.
(18:45) Innovation and regulation must be balanced for future AI development.
(23:50) Collaboration and education can overcome intense pressure from lobbyists.
(30:02) AI’s potential to exacerbate discrimination demands regulation.
(36:15) Deepfakes present a growing threat.
(42:40) Consumer trust could be lost due to premature releases of AI products.
(50:10) The necessity of a strong foundation in data privacy.
Resources Mentioned:
https://www.linkedin.com/in/charityrclark/
https://www.linkedin.com/in/mepriestley/
Vermont -
https://www.linkedin.com/company/state-of-vermont/
“The Age of Surveillance Capitalism” by Shoshana Zuboff -
https://www.amazon.com/Age-Surveillance-Capitalism-Future-Frontier/dp/1610395697
“Why Privacy Matters” by Neil Richards -
https://www.amazon.com/Why-Privacy-Matters-Neil-Richards/dp/0190940553
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Dive into the tangled web of AI and copyright law with Keith Kupferschmid, CEO of the Copyright Alliance, as he reveals how AI companies navigate legal responsibilities and examines what creators can do to safeguard their intellectual property in an AI-driven world.
Key Takeaways:
(02:00) The Copyright Alliance represents over 15,000 organizations and 2 million individual creators.
(05:12) Two potential copyright infringement settings: during the ingestion process and the output stage.
(06:00) There have been 17 or 18 AI copyright cases filed recently.
(08:00) Fair Use in AI is not categorical and is decided on a case-by-case basis.
(13:32) AI companies often shift liability to prompters, but both can be held liable under existing laws.
(15:00) Creators should clearly state their licensing preferences on their works to protect themselves.
(17:50) Current copyright laws are flexible enough to adapt to AI without needing new legislation.
(20:00) Market-based solutions, such as licensing, are crucial for addressing AI copyright issues.
(27:34) Education and public awareness are vital for understanding copyright issues related to AI.
Resources Mentioned:
https://www.linkedin.com/in/keith-kupferschmid-723b19a/
https://copyrightalliance.org
https://www.copyright.gov
https://www.gettyimages.com
National Association of Realtors -
https://www.nar.realtor
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
The future of AI lies at the intersection of technology and ethics. How do we navigate this complex landscape? Today, I’m joined by Maria Luciana Axente, Head of Public Policy and Ethics at PwC UK and Intellectual Forum Senior Research Associate at Jesus College Cambridge, who offers key insights into the ethical implications of AI.
Key Takeaways:
(03:56) The importance of integrating ethical principles into AI.
(08:22) Preserving humanity in the age of AI.
(12:19) Embedding value alignment in AI systems.
(15:59) Fairness and voluntary commitments in AI.
(21:01) Participatory AI and including diverse voices.
(24:05) Cultural value systems shaping AI policies.
(26:25) The importance of reflecting on AI’s impact before implementation.
(27:48) Learning from other industries to govern AI better.
(28:59) AI as a socio-technical system, not just technology.
Resources Mentioned:
https://www.linkedin.com/in/mariaaxente/
PwC UK -
https://www.linkedin.com/company/pwc-uk/
https://www.linkedin.com/company/jesus-college-cambridge/
https://www.pwc.co.uk/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Can AI spark new creative revolutions? On this episode, I’m joined by Lianne Baron, Strategic Partner Manager for Creative Partnerships at Meta. Lianne unveils how AI is not just a tool but a transformative force in the creative landscape, emphasizing the irreplaceable value of human imagination. We explore the rapid pace of innovation, the challenges of embracing new tech, and the exciting future of idea generation and delivery.
Key Takeaways:
(03:50) Embrace AI's changes; it challenges traditional methods.
(05:13) AI speeds up the journey from imagination to delivery.
(07:15) The move to cinematic quality sparks excitement and fear.
(08:30) Education is key in democratizing AI for all.
(15:00) Risk of bias without diverse voices in AI development.
(17:15) Ideas, not skills, are the new currency in AI.
(26:16) Imagination and human experience are irreplaceable by AI.
(29:11) AI can democratize storytelling, sharing diverse narratives.
(33:00) AI breaks down barriers, fostering new creative opportunities.
(36:20) Understanding authenticity is crucial in an AI-driven world.
Resources Mentioned:
https://www.linkedin.com/in/liannebaron/
Meta -
https://www.meta.com/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
The potential of AI is transforming industries, but how do we regulate this rapidly evolving technology without stifling innovation?
On this episode, I’m joined by Professor Zico Kolter, Professor and Director of the Machine Learning Department at Carnegie Mellon University and Chief Expert at Bosch USA, who shares his insights on AI regulation and its challenges.
Key Takeaways:
(02:41) AI innovation outpaces legislation.
(04:00) Regulating technology vs. its usage is crucial.
(06:36) AI is advancing faster than ever.
(11:14) Companies must prevent AI misuse.
(15:30) Bias-free algorithms are not feasible.
(21:34) Human interaction in AI decisions is essential.
(27:49) The competitive environment benefits AI development.
(32:26) Perfectly accepted regulations indicate mistakes.
(37:52) Regulations should adapt to technological changes.
(42:49) AI developers aim to benefit people.
(45:16) Human-in-the-loop AI is crucial for reliability.
(46:30) Addressing gaps in AI systems is critical.
Resources Mentioned:
Zico Kolter - https://www.linkedin.com/in/zico-kolter-560382a4/
Carnegie Mellon University - https://www.linkedin.com/school/carnegie-mellon-university/
Bosch USA - https://www.linkedin.com/company/boschusa/
EU AI Act - https://ec.europa.eu/digital-strategy/our-policies/eu-regulatory-framework-artificial-intelligence_en
OpenAI - https://www.openai.com/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I’m joined by Professor Paul Rainey to discuss the evolutionary principles applicable to AI development and the potential risks of self-replicating AI systems. Paul is Director of the Department of Microbial Population Biology at the Max Planck Institute for Evolutionary Biology in Plön; Professor at ESPCI in Paris; Fellow of the Royal Society of New Zealand; a Member of EMBO & European Academy of Microbiology; and Honorary Professor at Christian Albrechts University in Kiel.
Key Takeaways:
(00:04) Evolutionary transitions form higher-level structures.
(00:06) Eukaryotic cells parallel future AI-human interactions.
(00:08) Major evolutionary transitions inform AI-human interactions.
(00:11) Algorithms can evolve with variation, replication and heredity.
(00:13) Natural selection drives complexity.
(00:18) AI adapts to selective pressures unpredictably.
(00:21) Humans risk losing autonomy to AI.
(00:25) Societal engagement is needed before developing self-replicating AIs.
(00:30) The challenge of controlling self-replicating systems.
(00:33) Interdisciplinary collaboration is crucial for AI challenges.
Resources Mentioned:
Max Planck Institute for Evolutionary Biology
Professor Paul Rainey - Max Planck Institute
Max Planck Research Magazine - Issue 3/2023
Paul Rainey’s article in The Royal Society Publishing
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
In this episode, I’m joined by Jaap van Etten, CEO and Co-Founder of Datenna, the leading provider of techno-economic intelligence on China. Jaap’s unique background as a diplomat turned entrepreneur provides invaluable insights into the intersection of AI, innovation and policy.
Key Takeaways:
(01:30) Transitioning from diplomat to tech entrepreneur.
(05:23) Key differences in AI approaches between China, Europe and the US.
(07:20) The Chinese entrepreneurial mindset and its impact on innovation.
(10:03) China’s strategy in AI and the importance of being a technological leader.
(17:05) Challenges and misconceptions about China’s technological capabilities.
(23:17) Recommendations for AI regulation and international cooperation.
(30:19) Jaap’s perspective on the future of AI legislation.
(35:12) The role of AI in policymaking and decision-making.
(40:54) Policymakers need scenario planning and foresight exercises to keep up with rapid technological advancements.
Resources Mentioned:
Jaap van Etten - https://www.linkedin.com/in/jaapvanetten/
Datenna - https://www.linkedin.com/company/datenna/
https://www.nytimes.com/2006/05/15/technology/15fraud.htm
http://www.china.org.cn/english/scitech/168482.htm
https://en.wikipedia.org/wiki/Hanxin
https://www.linkedin.com/pulse/china-marching-forward-artificial-intelligence-jaap-van-etten/
https://github.com/Kkevsterrr/geneva
https://www.grc.com/sn/sn-779.pdf
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I’m joined by Dr. Abhinav Valada, Professor and Director of the Robot Learning Lab at the University of Freiburg, to explore the future of robotics and the essential regulations needed for their integration into society.
Key Takeaways:
(00:00) The potential economic impact of AI.
(03:37) The distinction between perceived and actual AI capabilities.
(04:24) Challenges in training robots with real-world data.
(08:51) Limitations of current AI reasoning capabilities.
(13:16) The importance of conveying robot intent for collaboration.
(17:33) The need for specific guidelines for robotic systems.
(21:00) Mandating AI ethics courses in Germany.
(25:10) Collaborative robots and workforce implications.
(30:00) Privacy issues in human-robot interaction.
(35:02) The importance of pilot programs for autonomous vehicles.
(39:00) International collaboration in AI legislation.
(40:38) Inclusion of diverse voices in robotics research.
Resources Mentioned:
Dr. Abhinav Valada - https://www.linkedin.com/in/avalada/
University of Freiburg - https://www.linkedin.com/company/university-of-freiburg/
EU AI Act - https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
Robot Learning Lab, University of Freiburg - https://www.researchgate.net/lab/Robot-Learning-Lab-Abhinav-Valada
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Striking a balance between artificial intelligence innovation and regulation is crucial for leveraging its benefits while safeguarding against risks. On this episode, I’m joined by Congressman Buddy Carter, U.S. Representative for Georgia's 1st District, to explore the complexities of AI regulation and its impact on healthcare and other sectors.
Key Takeaways:
(01:48) President Biden's Executive Order on AI aims to set new standards.
(04:34) AI's potential in healthcare, including telehealth and drug development.
(05:47) Legal implications for doctors not using available AI technologies.
(07:55) AI could speed up the drug development process.
(10:52) The need for constantly updated AI standards.
(11:56) Debate on creating a separate regulatory body for AI.
(14:03) Importance of including diverse voices in AI regulation.
(16:57) Federal preemption of state and local AI laws to avoid regulatory patchwork.
Resources Mentioned:
Buddy Carter - https://www.linkedin.com/in/buddycarterga/
President Biden's Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
EU AI Act - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Section 230 of the Communications Decency Act - https://www.eff.org/issues/cda230
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I am joined by Daniel Colson, Executive Director of the AI Policy Institute, who shares his insights into the risks, opportunities and future directions of AI policy.
Key Takeaways:
(02:15) Daniel analyzes President Biden's recent executive order on AI.
(04:13) Differentiating risks in AI technologies and their applications.
(08:52) Concerns about the open-sourcing of AI models and abuse potential.
(16:45) The importance of inclusive discussions in AI policymaking.
(19:25) Challenges and risks of regulatory capture in the AI sector.
(26:45) Balancing innovation with regulation.
(33:14) The potential for AI to transform employment and the economy.
(37:52) How AI's rapid evolution challenges our role as the dominant thinkers and prompts careful deliberation on its impact.
Resources Mentioned:
Daniel Colson - https://www.linkedin.com/in/danieljcolson/
AI Policy Institute - https://www.linkedin.com/company/aipolicyinstitute/
AI Policy Institute | Website - https://www.theaipi.org/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode of Regulating AI, I sit down with Professor Effy Vayena, Chair of Bioethics and Associate Vice President for Digital Transformation and Governance at the Swiss Federal Institute of Technology (ETH Zurich), and Co-Director of the Stavros Niarchos Foundation Bioethics Academy. Together, we delve into the world of AI, its ethical challenges and how thoughtful regulation can ensure equitable benefits.
Key Takeaways:
(03:45) The importance of developing and using technology in ways that meet ethical standards.
(10:31) The necessity of agile regulation and continuous dialogue with tech developers.
(13:19) The concept of regulatory sandboxes for testing policies in a controlled environment.
(17:07) Balancing AI innovation with patient privacy and data security.
(24:14) Strategies to ensure AI benefits reach marginalized communities and promote health equity.
(35:10) Considering the global impact of AI and the digital divide.
(41:06) Including and educating the public in AI regulatory processes.
(44:04) The importance of international collaboration in AI regulation.
Resources Mentioned:
Professor Effy Vayena - https://www.linkedin.com/in/effy-vayena-467b1353/
Swiss Federal Institute of Technology (ETH) - https://www.linkedin.com/school/eth-zurich/
ETH Zurich - https://ethz.ch/en.html
European Union’s AI Act - https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
U.S. FDA guidelines on AI in medical devices - https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
The integration of AI into healthcare is not only transforming the way we diagnose, treat and manage patient care but is also redefining the roles of doctors. Join me as I sit down with Dr. Brennan Spiegel to explore how AI is revolutionizing the medical field. Brennan is a Professor of Medicine and Public Health; George and Dorothy Gourrich Chair in Digital Health Ethics; Director of Health Services Research; Director, Graduate Program in Health Delivery Science; Cedars-Sinai Site Director, Clinical and Translational Science Institute; and Editor-in-Chief, Journal of Medical Extended Reality.
Key Takeaways:
(03:00) Balancing AI benefits with concerns about algorithmic bias and fairness.
(05:47) Evaluating AI for implicit bias in mental health applications.
(08:03) The need for standardized guidance and rigorous oversight in AI applications.
(10:03) Ensuring data transmitted between AI providers and health systems is HIPAA compliant.
(16:42) The evolving role of doctors in the context of AI integration.
(21:22) The importance of traditional knowledge alongside AI in medical practice.
(24:44) International collaboration and standardized approaches to AI in healthcare.
Resources Mentioned:
Dr. Brennan Spiegel - https://www.linkedin.com/in/brennan-spiegel-md-mshs-2938a4142/
Cedars-Sinai - https://www.linkedin.com/company/cedars-sinai-medical-center/
Brennan Spiegel on X - https://x.com/BrennanSpiegel
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
In this episode, I welcome Carmel Shachar, Faculty Director of the Health Law and Policy Clinic and Assistant Clinical Professor of Law at Harvard Law School Center for Health Law and Policy Innovation. We delve into how AI is shaping the future of healthcare, its profound impacts and the vital importance of thoughtful regulation. The interplay between AI and healthcare is increasingly critical, pushing the boundaries of medicine while challenging our regulatory frameworks.
Key Takeaways:
(00:00) AI’s challenges in balancing patient data needs.
(03:09) The revolutionary potential of AI in healthcare innovation.
(04:30) How AI is driving precision and personalized medicine.
(06:19) The urgent need for healthcare system evolution.
(09:00) Potential negative impacts of poorly implemented AI.
(12:00) The unique challenges posed by AI as a medical device.
(15:10) Minimizing regulatory handoffs to enhance AI efficacy.
(18:00) How AI can reduce healthcare disparities.
(20:00) Ethical considerations and biases in AI deployment.
(25:00) AI’s growing impact on healthcare operations and management.
(30:00) Enhancing patient-physician communication with AI tools.
(39:00) Future directions in AI and healthcare policy.
Resources Mentioned:
Carmel Shachar - https://www.linkedin.com/in/carmel-shachar-7b3a8525/
Harvard Law School Center for Health Law and Policy Innovation - https://www.linkedin.com/company/harvardchlpi/
Carmel Shachar's Faculty Profile at Harvard Law School - https://hls.harvard.edu/faculty/carmel-shachar/
Precision Medicine, Artificial Intelligence and the Law Project - https://petrieflom.law.harvard.edu/research/precision-medicine-artificial-intelligence-and-law
Petrie-Flom Center Blog - https://blog.petrieflom.law.harvard.edu/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I welcome Ari Kaplan, Head Evangelist at Databricks, a leading data and AI company. We discuss the intricacies of AI regulation, how different regions, like the US and EU, are addressing AI’s rapid development, and the importance of industry perspectives in shaping effective legislation.
Key Takeaways:
(04:42) Insights on the rapid advancements in AI technology and legislative responses.
(10:32) The role of tech leaders in shaping AI policy and bridging knowledge gaps.
(13:57) Open-source versus closed-source AI — Ari Kaplan advocates for transparency.
(16:56) Ethical concerns in AI across different countries.
(21:21) The necessity for both industry-specific and overarching AI regulations.
(25:09) Automation’s potential to improve efficiency also raises employment risk.
(29:17) A balanced, educational approach in the age of AI is crucial.
(32:45) Risks associated with generative AI and the importance of intellectual property rights.
Resources Mentioned:
Ari Kaplan - https://www.linkedin.com/in/arikaplan/
Databricks - https://www.linkedin.com/company/databricks/
Unity Catalog Governance Value Levers - https://www.databricks.com/blog/unity-catalog-governance-value-levers
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
EU AI Act Information - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
In this episode, I welcome Nicolas Kourtellis, Co-Director of Telefónica Research and Head of the Systems AI Lab at Telefónica Innovación Digital, a company of the Telefónica Group. Nicolas shares his expert insights on the pivotal role of AI in revolutionizing telecommunications, the challenges of AI regulation and the innovative strides Telefónica is making toward sustainable and ethical AI deployment.
Imagine a world where every device you own not only connects seamlessly but also intelligently adapts to your needs. This isn’t just a vision for the future; it’s the reality AI is creating today in telecommunications.
Key Takeaways:
(00:00) AI research focuses and applications in telecommunications.
(03:24) AI’s role in optimizing network systems and enhancing user privacy is critical.
(06:00) How Telefónica uses AI to improve customer service through AI chatbots.
(12:03) The ethical considerations and sustainability of AI models.
(16:08) Democratizing AI to make it accessible and beneficial for all users.
(18:09) Designing AI systems with privacy and security from the start.
(27:00) The challenges and opportunities AI presents for the workforce.
(30:25) The potential of 6G and its reliance on AI technologies.
(32:16) The integral role of AI in future technological advancements and network optimizations.
(39:35) The societal impacts of AI in telecommunications.
Resources Mentioned:
Nicolas Kourtellis - https://www.linkedin.com/in/nicolas-kourtellis-3a154511/
Telefónica Innovación Digital - https://www.linkedin.com/company/telefonica-innovacion-digital/
Telefónica Group - https://www.linkedin.com/company/telefonica/
You can find all of Nicolas’ publications on his Google Scholar page: http://scholar.google.com/citations?user=Q5oWwiQAAAAJ
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode of the Regulating AI Podcast, I'm joined by Dr. Irina Mirkina, Innovation Manager and AI Lead at UNICEF's Office of Innovation. An AI strategist, speaker, and expert for the European Commission, Dr. Mirkina brings a wealth of experience from academia, the private sector, and now, the humanitarian sector. Today’s discussion focuses on AI for social good.
Key Takeaways:
(03:31) The role of international organizations like UNICEF in shaping global AI regulations.
(07:06) Challenges of democratizing AI across different regions to overcome the digital divide.
(10:28) The importance of developing AI systems that cater to local contexts.
(13:23) The transformative potential and limitations of AI in personalized education.
(16:37) Engaging vulnerable populations directly in AI policy discussions.
(20:47) UNICEF's use of AI in addressing humanitarian challenges.
(25:10) The role of civil society in AI regulation and policymaking.
(33:50) AI's risks and limitations, including issues of open-source management and societal impact.
(38:57) The critical need for international collaboration and standardization in AI regulations.
Resources Mentioned:
Dr. Irina Mirkina - https://www.linkedin.com/in/irinamirkina/
UNICEF Office of Innovation - https://www.unicef.org/innovation/
Policy Guidance on AI for Children by UNICEF - https://www.unicef.org/globalinsight/media/2356/file/UNICEF-Global-Insight-policy-guidance-AI-children-2.0-2021.pdf
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I’m joined by Angela Zhang, Associate Professor of Law at the University of Hong Kong and Director of the Philip K. H. Wong Center for Chinese Law. We delve into the complexities of AI regulation in China, exploring how the government’s strategies impact both the global market and internal policies.
Key Takeaways:
(02:14) The introduction of China’s approach to AI regulation.
(06:40) Discussion on the volatile nature of Chinese regulatory processes.
(10:26) How China’s AI strategy impacts international relations and global standards.
(13:32) Angela explains the strategic use of law as an enabler in China’s AI development.
(18:53) High-level talks between the US and China on AI risk have not led to substantive actions.
(22:04) The US’s short-term gains from AI chip restrictions on China may lead to long-term disadvantages as China becomes self-sufficient and less cooperative.
(24:13) Unintended consequences of the Chinese regulatory system.
(29:19) Angela advocates for a slower development of AI technology to better assess and manage risks before they become unmanageable.
Resources Mentioned:
Professor Angela Zhang - http://www.angelazhang.net
High Wire by Angela Zhang - https://global.oup.com/academic/product/high-wire-9780197682258
Article: The Promise and Perils of China’s Regulation - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676
Research: Generative AI and Copyright: A Dynamic Perspective - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233
High Wire Book Trailer - https://www.youtube.com/watch?v=u6OPSit6k6s
Purchase High Wire by Angela Zhang - https://www.amazon.com/High-Wire-Regulates-Governs-Economy/dp/0197682251/ref=sr_1_1?crid=2A7D070KIAGT&keywords=high+wire+angela+zhang&qid=1706441967&sprefix=high+wire+angela+zha,aps,333&sr=8-1
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I am thrilled to sit down with Congressman Joseph Morelle, who represents New York's 25th Congressional District and serves on the House Appropriations Committee. As an influential voice in the dialogue on artificial intelligence, Congressman Morelle shares his deep insights into AI's potential and challenges, particularly concerning legislation and societal impacts.
Key Takeaways:
(02:13) Congressman Morelle's extensive experience in AI legislation and its implications.
(04:27) Deep fakes and their growing threat to privacy and integrity.
(07:13) Introducing federal legislation against non-consensual deep fakes.
(14:00) Urgent need for social media platforms to enforce their guidelines rigorously.
(19:46) The No AI Fraud Act and protecting individual likeness in AI use.
(23:06) The importance of adaptable and 'living' statutes in technology regulation.
(32:59) The critical role of continuous education and skill adaptation in the AI era.
(37:47) Exploring the use of AI in Congress to ensure unbiased, culturally appropriate policymaking and data privacy.
Resources Mentioned:
Congressman Joseph Morelle - https://www.linkedin.com/in/joe-morelle-8246099/
No AI Fraud Act - https://www.congress.gov/bill/118th-congress/house-bill/6943/text?s=1&r=9
Preventing Deep Fakes of Intimate Images Act - https://www.congress.gov/bill/118th-congress/house-bill/3106
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I welcome Dr. Sethuraman Panchanathan, Director of the U.S. National Science Foundation and a professor at Arizona State University. Sethuraman shares personal insights on the transformative power of artificial intelligence and the importance of democratizing this technology to ensure it benefits humanity as a whole.
Key Takeaways:
(00:21) AI’s pivotal role in enhancing speech-language services.
(01:28) Introduction to Sethuraman’s visionary leadership at NSF.
(02:36) NSF’s significant AI investment totaled over $820 million.
(06:19) The shift toward interdisciplinary AI research at NSF.
(10:26) NSF’s initiative of launching 25 AI institutes for innovation.
(18:26) Emphasis on AI democratization through education and training.
(25:11) The NSF ExpandAI program boosts AI in minority-serving institutions.
(30:21) Focus on ethical AI development to build public trust.
(40:10) AI’s transformative applications in healthcare, agriculture and more.
(42:45) The importance of ethical guardrails in AI’s development.
(43:08) Advancing AI through international collaborations.
(44:53) Lessons from a career in AI and advice for the next generation.
(50:19) Motivating young researchers and entrepreneurs in AI.
(52:24) Advocating for AI innovation and accessibility for everyone.
Resources Mentioned:
Dr. Sethuraman Panchanathan - https://www.linkedin.com/in/drpanch/
U.S. National Science Foundation | LinkedIn - https://www.linkedin.com/company/national-science-foundation/
U.S. National Science Foundation | Website - https://www.nsf.gov/
Arizona State University - https://www.linkedin.com/school/arizona-state-university/
NSF ExpandAI Program - https://new.nsf.gov/funding/opportunities/expanding-ai-innovation-through-capacity-building
Dr. Sethuraman Panchanathan’s NSF Profile - https://www.nsf.gov/staff/staff_bio.jsp?lan=spanchan
NSF Regional Innovation Engines - https://new.nsf.gov/funding/initiatives/regional-innovation-engines
National AI Research Resource (NAIRR) - https://new.nsf.gov/focus-areas/artificial-intelligence/nairr
NSF Focus on Artificial Intelligence - https://new.nsf.gov/focus-areas/artificial-intelligence
National Artificial Intelligence Research Institutes - https://new.nsf.gov/funding/opportunities/national-artificial-intelligence-research
GRANTED Initiative for Broadening Participation in STEM - https://new.nsf.gov/funding/initiatives/broadening-participation/granted
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
The rapid evolution of artificial intelligence in cybersecurity presents both significant opportunities and daunting challenges. On this episode, I’m joined by Bruce Schneier, renowned globally for his expertise in cybersecurity and dubbed a “security guru” by The Economist. Bruce, a best-selling author and lecturer at Harvard Kennedy School, discusses the fast-paced world of AI and cybersecurity, exploring how these technologies intersect with national security and what that means for future regulations.
Key Takeaways:
(00:00) I discuss with Bruce the challenges of regulating AI in the US.
(02:28) Bruce explains the role and future potential of AI in cybersecurity.
(05:05) The benefits of AI in defense, enhancing capabilities at computer speeds.
(07:22) The need for robust regulations akin to those in the EU.
(12:56) Bruce draws analogies between AI regulation and pharmaceutical controls.
(19:56) The critical role of knowledgeable staff in supporting legislators.
(22:24) The challenges of effectively regulating AI.
(26:15) The potential of AI to transform enforcement across various sectors.
(30:58) Reflections on the future of AI governance and ethical considerations.
Resources Mentioned:
Bruce Schneier Website - https://www.schneier.com/
EU AI Strategy - https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I’m joined by Trooper Sanders, CEO of Benefits Data Trust and a member of the White House National Artificial Intelligence Advisory Committee. Trooper’s expertise in leveraging AI to enhance the efficiency and humanity of America’s social safety net offers unique insights into the potential and challenges of AI in public services.
Key Takeaways:
(02:27) The role of Benefits Data Trust in connecting people to essential benefits using AI.
(04:54) The components of trustworthy AI: reliability, public interest alignment, security, transparency, explainability, privacy and harm mitigation.
(09:38) The ‘tortoise and hare’ challenge in aligning AI advancements with legislative processes.
(16:17) The significance of voluntary industry commitments in shaping AI’s ethical use.
(20:32) Ethical considerations in deploying AI, focusing on its societal impact and the readiness of systems for AI integration.
(22:53) Addressing biases in AI to ensure fairness and equitable benefits across all socioeconomic groups.
(27:52) Amplifying diverse voices in the AI discussion to encompass a wide range of societal perspectives.
(34:22) The potential workforce disruption by AI and the necessity of supportive measures for affected individuals.
(37:26) Considering the potentially massive impact of AI-driven career changes across various professions.
Resources Mentioned:
Trooper Sanders - https://www.linkedin.com/in/troopersanders/
Benefits Data Trust | LinkedIn - https://www.linkedin.com/company/benefits-data-trust/
Benefits Data Trust | Website - https://bdtrust.org/
White House National Artificial Intelligence Advisory Committee - https://www.whitehouse.gov/ostp/ostps-teams/nstc/select-committee-on-artificial-intelligence/
BDT Launches AI and Human Services Learning Hub - https://bdtrust.org/bdt-launches-ai-learning-lab/
Our Vision for an Intelligent Human Services and Benefits Access System - https://bdtrust.org/our-vision-for-an-intelligent-human-services-and-benefits-access-system
Humans Must Control Human-Serving AI - https://bdtrust.org/media-coverage-humans-must-control-human-serving-ai/
Trooper Sanders | Benefits Data Trust - https://bdtrust.org/trooper-sanders/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
I'm thrilled to be joined by Dr. Paul Lushenko, a Lieutenant Colonel in the U.S. Army and Director of Special Operations at the U.S. Army War College. Dr. Lushenko brings a wealth of knowledge from the front line of AI implementation in military strategy. He joins me to share his insights into the delicate balance between innovation and regulation.
Key Takeaways:
(02:28) The necessity of addressing AI’s impact on warfare and crisis escalation.
(06:37) The gaps in global governance regarding AI and autonomous weapon systems.
(08:30) U.S. policies on the responsible use of AI in military operations.
(16:29) The importance of cutting-edge research in informing legislative actions on AI.
(18:49) The risk of biases in AI systems used in national security.
(20:09) Discussion on automation bias and its consequences in military operations.
(24:44) Dr. Lushenko argues for the adoption of a strategic framework to guide AI development in military contexts.
(32:49) Emphasis on the importance of careful management and extensive testing to build trust in AI systems within the military.
(39:51) The critical need for data-driven decision-making in high-stakes environments, advocating for leveraging expert insights.
Resources Mentioned:
Dr. Paul Lushenko - https://www.linkedin.com/in/paul-lushenko-phd-5b805113/
U.S. Army War College - https://www.linkedin.com/school/united-states-army-war-college/
Political Declaration on Responsible Use of AI in Military Technologies - https://www.state.gov/wp-content/uploads/2023/10/Latest-Version-Political-Declaration-on-Responsible-Military-Use-of-AI-and-Autonomy.pdf
Memorandum on Ethical Use of AI (White House, 2023) - https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I welcome Randi Weingarten, President of the American Federation of Teachers (AFT). She discusses why implementing AI in education requires a collaborative effort. Join us as we explore the challenges and opportunities of AI in shaping equitable and effective educational environments.
Key Takeaways:
(01:08) Introduction of Randi Weingarten and her role in the AFT.
(05:00) The critical issue of ensuring equitable access to AI technologies in education.
(08:06) Addressing bias and discrimination within AI-driven educational systems.
(11:53) The importance of inclusive participation in the implementation of educational technologies.
(13:09) The evolving necessity for educators to acquire new skills in response to AI advancements.
(17:26) The role of personalized teaching as a complement, not a replacement, for traditional educational methods.
(18:08) Concerns surrounding data privacy and security within AI-driven platforms.
(20:25) The need for regulation and oversight in the application of AI in educational settings.
(25:22) The potential for productive industry collaboration in developing AI tools for education.
(30:28) Advocating for a just transition fund to support workers displaced by AI and technological advancements.
Resources Mentioned:
Randi Weingarten - https://www.linkedin.com/in/randi-weingarten-05896224/
American Federation of Teachers - https://www.aft.org/
Testimony to Senator Schumer by Randi Weingarten on equity in AI - https://www.aft.org/press-release/afts-weingarten-calls-ai-guardrails-smart-regulation-ensure-new-technology-benefits
Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
AI regulation is not a simple field, particularly in the realm of national security, and it requires a nuanced approach. In this episode, I welcome Anja Manuel, the Executive Director of the Aspen Strategy Group and the Aspen Security Forum, as well as Co-Founder and Partner at Rice, Hadley, Gates & Manuel, LLC. Anja’s insights make the path forward clearer, framing effective AI legislation and emphasizing the need for global cooperation and ethical considerations. Her perspective, deeply rooted in national security expertise, underscores the critical balance between innovation and safeguarding against misuse.
Key Takeaways:
(00:17) The functionality of intelligence committees across party lines.
(00:59) AI in warfare reflects a shift from World War I tactics to modern tech battles.
(03:10) The rapid innovation in military technology and the US’s efforts to adapt.
(03:53) Risks of unregulated AI, including in cyber, autonomous weapons and bio-tech.
(07:09) AI regulation is needed both globally and nationally.
(11:21) International collaboration plays a vital role in AI regulation.
(13:39) Ethical considerations unique to AI applications in national security.
(14:31) National security agencies’ openness to regulatory frameworks.
(15:35) Public-private collaboration in addressing national security considerations.
(17:08) Establishing standards in AI technology for national security is necessary.
(18:28) Regulation of autonomous weapons and international agreements.
(19:32) Balancing secrecy in national security operations with public scrutiny of AI use.
(20:17) AI’s role and risks in intelligence and privacy.
(21:13) Regulating AI in cybersecurity and other areas is a challenge.
Resources Mentioned:
Anja Manuel - https://www.linkedin.com/in/anja-manuel-26805023/
Aspen Strategy Group - https://www.aspeninstitute.org/programs/aspen-strategy-group/
Aspen Security Forum - https://www.aspensecurityforum.org/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I’m joined by Dr. Gunter Beitinger, Senior Vice President of Manufacturing and Head of Factory Digitalization and Product Carbon Footprint at Siemens. Dr. Beitinger lends a comprehensive view on AI’s role in transforming manufacturing, emphasizing its potential to enhance productivity, ensure workforce well-being and drive sustainable practices without displacing human labor.
Key Takeaways:
(02:17) Dr. Beitinger’s extensive background and role at Siemens.
(05:13) Specific examples of AI-driven improvements in Siemens’ operations.
(07:52) The measurable productivity gains attributed to AI in manufacturing.
(10:02) The impact of AI on employment and the importance of re-skilling.
(13:06) The necessity for a collaborative approach between governments and the private sector in workforce development.
(16:24) The role of AI in improving the working conditions of industrial workers.
(26:53) The potential for smaller companies to leverage AI and compete with industry giants.
(36:49) AI’s future role in creating digital twins and the industrial metaverse.
Resources Mentioned:
https://www.linkedin.com/in/gunter-dr-beitinger/
Siemens | LinkedIn - https://www.linkedin.com/showcase/siemens-industry-/?trk=public_post-text
Siemens | Website - https://www.siemens.com/
Siemens Blog | AI in Industry - https://blog.siemens.com/space/artificial-intelligence-in-industry/
Siemens Blog | The Need To Rethink Production - https://blog.siemens.com/2023/07/the-need-to-rethink-production/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I’m joined by Sarah Kreps, the John L. Wetherill Professor in the Department of Government, Adjunct Professor of Law, and Director of the Tech Policy Institute at the Cornell Brooks School of Public Policy. Her expertise in international politics, technology and national security offers a valuable perspective on shaping AI legislation.
Key Takeaways:
(00:20) The significant impact of industry and NGOs on AI regulation and congressional awareness.
(03:27) AI's multifaceted applications and its national security implications.
(05:07) Advanced efficiency of AI in misinformation campaigns and the importance of legislative responses.
(10:58) Proactive measures by AI firms like OpenAI for electoral fidelity and misinformation control.
(14:23) The challenge of balancing AI innovation with security and economic considerations in legislation.
(20:30) Concerns about potential AI monopolies and the economic consequences.
(28:16) Ethical and practical aspects of AI assistance in legislative processes.
(30:13) The critical need for human involvement in AI-augmented military decisions.
(35:32) National security agencies' approach to AI regulatory frameworks.
(39:13) The imperative of Congress's engagement with diverse sectors for comprehensive AI legislation.
Resources Mentioned:
Sarah Kreps - https://www.linkedin.com/in/sarah-kreps-51a3b7257/
Cornell - https://www.linkedin.com/school/cornell-university/
Sarah Kreps’ paper for the Brookings Institution - https://www.brookings.edu/articles/democratizing-harm-artificial-intelligence-in-the-hands-of-non-state-actors/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
Discussions on AI Global Governance - https://www.american.edu/sis/news/20230523-four-questions-on-ai-global-governance-following-the-g7-hiroshima-summit.cfm
Sarah Kreps - Cornell University - https://government.cornell.edu/sarah-kreps
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I’m joined by Professor Ronald Arkin, a renowned expert in robotics and roboethics from the Georgia Institute of Technology. Our discussion focuses on AI and robotics. We explore the ethical implications and the necessity for regulatory frameworks that ensure responsible development and deployment.
Key Takeaways:
(02:40) Ethical guidelines for AI and robotics.
(03:19) IEEE’s role in creating soft law guidelines.
(06:56) How robotics has been overshadowed by large language models.
(10:13) The necessity of oversight and compliance in AI development.
(15:30) Ethical considerations for emotionally expressive robots.
(23:41) Liability frameworks for ethical lapses in robotics.
(27:43) The debate on open-sourcing robotics software.
(29:52) The impact of robotics on workforce and employment.
(33:37) Human rights implications in robotic deployment.
(42:55) Final insights on cautious advancement in AI regulation.
Resources Mentioned:
Ronald Arkin - https://sites.cc.gatech.edu/aimosaic/faculty/arkin/
Ronald Arkin | LinkedIn - https://www.linkedin.com/in/ronald-arkin-a3a9206/
Georgia Tech Mobile Robot Lab - https://sites.cc.gatech.edu/ai/robot-lab/
Georgia Institute of Technology - https://www.linkedin.com/school/georgia-institute-of-technology/
IEEE Standards Association - https://standards.ieee.org/
United Nations Convention on Certain Conventional Weapons - https://treaties.un.org/pages/ViewDetails.aspx?chapter=26&clang=_en&mtdsg_no=XXVI-2&src=TREATY
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I welcome Steve Mills, Global Chief AI Ethics Officer for Boston Consulting Group and Global AI Lead for the Public Sector. Steve shares insights into the intersection of AI innovation and ethical responsibility, guiding us through the often-confusing topic of AI regulation and ethics.
Key Takeaways:
(00:26) The role clear regulations play in fostering innovation.
(02:43) The importance of consultation with industry to set achievable regulations.
(04:07) Addressing the uncertainty surrounding AI regulation.
(06:19) The necessity of sector-specific AI regulations.
(07:33) The debate over establishing a separate AI regulatory body.
(09:22) Adapting AI policy to keep pace with technological advancements.
(11:40) Enhancing AI literacy and upskilling the workforce.
(13:06) Ethical considerations in AI deployment, focusing on trustworthiness and harmlessness.
(15:01) Strategies for ensuring AI systems are fair and equitable.
(20:10) The discussion on open-source AI and combating monopolies.
(22:00) The importance of transparency in AI usage by companies.
Resources Mentioned:
Steve Mills - https://www.linkedin.com/in/stevndmills/
Boston Consulting Group - https://www.linkedin.com/company/boston-consulting-group/
Responsible AI Ethics - https://www.bcg.com/capabilities/artificial-intelligence/responsible-ai
Study on the impact of AI in the workforce - https://www.bcg.com/publications/2022/a-responsible-ai-leader-does-more-than-just-avoiding-risk
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I welcome Kai Zenner, Head of Office and Digital Policy Advisor at the European Parliament. We discuss the complexities and challenges of Artificial Intelligence, especially focusing on the legislative efforts within the EU to regulate AI technologies.
Key Takeaways:
(01:36) Diverse perspectives in AI legislation play a significant role.
(02:34) The EU AI Act’s status and its risk-based, innovation-friendly approach.
(07:11) The recommendation for a vertical, industry-specific approach to AI legislation.
(08:32) Measures in the AI Act to prevent AI power concentration and ensure transparency.
(11:50) The global approach of the EU AI Act and its focus on international alignment.
(14:28) Ethical considerations in AI development addressed by the AI Act.
(16:21) Implementation and enforcement mechanisms of the EU AI Act.
(23:31) The involvement of industry experts, researchers and civil society in developing the AI Act.
(29:51) The importance of educating the public on AI issues.
(33:12) Concerns about deepfake technology and election interference.
Resources Mentioned:
Kai Zenner - https://www.linkedin.com/in/kzenner/?originalSubdomain=be
European Parliament - https://www.linkedin.com/company/european-parliament/
EU AI Act - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I’m joined by Lexi Kassan, Lead Data and AI Strategist of Databricks and Founder and Host of the Data Science Ethics Podcast. Lexi brings a wealth of knowledge from her dual role as an AI ethicist and industry insider, providing an in-depth perspective on how legislation can shape the future of AI without curbing its potential.
Key Takeaways:
(02:44) The global impact of the EU AI Act.
(03:46) The necessity for risk-based AI model assessments.
(08:20) Ethical challenges hidden within AI applications.
(11:45) Strategies for inclusive AI benefiting marginalized communities.
(13:29) Core ethical principles for AI systems.
(19:50) The complexity of creating unbiased AI data sets.
(21:58) Categories of unacceptable risks in AI according to the EU Act.
(27:18) Accountability in AI deployment.
(30:53) The role of open-source models in AI development.
(36:24) Businesses seek clear regulatory guidelines.
Resources Mentioned:
Lexi Kassan - https://www.linkedin.com/in/lexykassan/?originalSubdomain=uk
Data Science Ethics Podcast - https://www.linkedin.com/company/dsethics/
EU AI Act - https://artificialintelligenceact.eu/
Databricks - https://www.databricks.com/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
In a world racing toward the development of Artificial General Intelligence (AGI), the balance between innovation and existential risk becomes a pivotal conversation. In this episode, I’m joined by Otto Barten, Founder of the Existential Risk Observatory. We focus on the critical issue of AGI and its potential to pose existential risks to humanity. Otto shares valuable insights into the necessity of global policy innovation and raising public awareness to navigate these uncharted waters responsibly.
Key Takeaways:
(00:18) Public awareness of AI risks is rising rapidly.
(01:39) The Existential Risk Observatory’s mission is to mitigate human extinction risks.
(02:51) The European Union’s political consensus on the EU AI Act.
(04:11) Otto explains multiple AI threat models leading to existential risks.
(07:01) Why distinguish between AGI and current AI capabilities?
(09:18) Recent statements by Sam Altman and Mark Zuckerberg on AGI.
(12:15) The potential dangers of open-sourcing AGI.
(14:17) The current regulatory landscapes and potential improvements.
(17:01) The concept of a “pause button” for AI development is introduced.
(20:13) Balancing AI development with ethical considerations and existential risks.
(23:51) Increasing public and legislative awareness of AI risks.
(29:01) The significance of transparency and accountability in AI development.
Resources Mentioned:
Otto Barten - https://www.linkedin.com/in/ottobarten/?originalSubdomain=nl
Existential Risk Observatory - https://www.linkedin.com/company/existential-risk-observatory/
European Union AI Act -
The Bletchley Process for global AI safety summits -
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I'm joined by Daniel Jeffries, Managing Director of the AI Infrastructure Alliance and CEO of Kentauros, to explore the complexities of AI's potential and the critical need for balanced, forward-thinking legislation.
Key Takeaways:
(02:05) Recent executive orders on AI, watermarking and model size regulation.
(03:54) Autonomous weapons and the need for regulation in areas exempted by governments.
(07:01) Liability in AI-induced harm and the challenge of assigning responsibility.
(07:52) The rapid evolution of AI and the legislative challenge to keep pace.
(10:37) The risk of regulatory capture and the importance of preventing AI monopolies.
(13:29) The role of open source in fostering innovation.
(16:32) Skepticism towards the feasibility of a global consensus on AI regulation.
(18:21) Advocacy for industry-specific regulations, emphasizing use-case and industry nuances.
(22:33) Recommendations for policymakers to focus on real-world problems.
Resources Mentioned:
Daniel Jeffries - https://www.linkedin.com/in/danjeffries/
AI Infrastructure Alliance - https://www.linkedin.com/company/ai-infrastructure-alliance/
Kentauros - https://www.linkedin.com/company/kentauros-ai/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I welcome Alex Swartsel, Managing Director of Insights at JFFLabs. We discuss AI’s role in the employment landscape’s transformation, highlighting the delicate balance between leveraging AI for growth and mitigating its potential disruptions.
Key Takeaways:
(00:16) AI’s transformative impact on employment.
(02:35) The role AI plays in job transformation and skill enhancement.
(04:30) The automation and augmentation of tasks by AI.
(06:10) Rethinking education and skill development in the age of AI.
(09:22) The significance of soft skills in conjunction with technical knowledge.
(11:00) AI’s potential to customize learning experiences.
(17:20) The pivotal role of community colleges in workforce training.
(21:33) The imperative of reskilling and the government’s role.
(29:51) Using AI for personalized education and career guidance.
(35:09) Promoting AI as a tool for human advancement.
Resources Mentioned:
Alex Swartsel - https://www.linkedin.com/in/alexswartsel/
JFFLabs’ New Center for Artificial Intelligence and the Future of Work - https://www.jff.org/
The AI-Ready Workforce report - https://info.jff.org/ai-ready
IMF Report on AI’s Impact on Jobs - https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2024/01/14/Gen-AI-Artificial-Intelligence-and-the-Future-of-Work-542379
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I'm joined by Professor Avi Loeb, Professor of Science at Harvard University, Director of the Institute for Theory and Computation within the Harvard-Smithsonian Center for Astrophysics, Head of the Galileo Project, Chair of Harvard's Department of Astronomy and best-selling author. Avi provides an astrophysicist's perspective on the ethical and regulatory frameworks necessary to ensure the responsible use of artificial intelligence.
Key Takeaways:
(00:36) The essential role of academia in fostering dialogue across differing viewpoints.
(06:58) Professor Loeb's concerns about AI's unpredictability.
(09:18) The importance of training AI systems with value-aligned datasets to moderate societal risks.
(10:59) Assigning responsibility for AI's actions.
(14:29) The need for international treaties to regulate AI's use in national security and warfare.
(17:58) Addressing internal disinformation and the role of AI in amplifying societal divisions.
(22:40) Engaging the public in AI regulation discussions to ensure diverse perspectives.
(26:37) The potential for AI to revolutionize space exploration and decision-making in remote environments.
Resources Mentioned:
Harvard University's Galileo Project - https://projects.iq.harvard.edu/galileo/home
Rubin Observatory - https://rubinobservatory.org/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
In this latest episode, I'm joined by Timothy Bean, President and COO of Fortem Technologies, to explore the intricate interplay between artificial intelligence, national security and the legislative landscape that surrounds it.
Key Takeaways:
(02:42) The evolution of national security tools and the advent of AI.
(03:49) The importance of data privacy in AI legislation and national security.
(05:07) The challenges of regulating AI in a rapidly advancing technological landscape.
(10:13) How legislative bodies should adapt and embrace AI to keep pace with technological advancements.
(12:13) The impending impact of quantum computing on AI and national security.
(15:38) The US faces an arms race in AI and quantum computing against global competitors like China and Russia.
(17:25) Public-private partnerships in enhancing national security through AI.
(18:39) The role of transparency and accountability in AI applications for national security.
(22:16) Debating the merits of open-sourcing AI models in the context of national security.
(24:55) The significance of educating the public on data privacy and the potential of AI.
Resources Mentioned:
https://www.linkedin.com/in/meghalred/
Fortem Technologies - https://www.linkedin.com/company/fortem-technologies/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
Department of Defense AI Ethics Principles - https://www.ai.mil/blog_02_26_21-ai_ethics_principles-highlighting_the_progress_and_future_of_responsible_ai.html
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode, I'm thrilled to chat with Nathan Grant, Policy Fellow of TeachAI, an initiative championed by notable organizations including Code.org, ETS, ISTE, Khan Academy and the World Economic Forum. Nathan shares invaluable insights on integrating AI education within K-12, emphasizing the importance of a balanced approach to harness AI's potential while mitigating its risks.
Key Takeaways:
(01:16) Introduction of Nathan Grant and the TeachAI initiative.
(02:14) TeachAI's broad coalition, including tech giants and educational stakeholders.
(03:45) Perspectives on President Biden's Executive Order on AI.
(06:27) AI literacy's critical role across all subjects in K-12 education.
(07:30) Addressing the digital and AI divide for equitable education.
(09:03) Engaging students in the AI legislation dialogue.
(12:44) Concerns over banning AI tools like ChatGPT in schools.
(14:33) The risk of AI tool monopolization by a few large tech companies.
(16:00) The importance of education in demonstrating AI's potential and ensuring its responsible use.
(18:59) The potential for standardized AI education guidelines.
Resources Mentioned:
Nathan Grant - https://www.linkedin.com/in/nathan-grant-t/
Code.org - https://www.linkedin.com/company/code-org/
President Biden's Executive Order on AI - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
TeachAI initiative - https://www.teachai.org/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
In a world where AI shapes our daily lives, ethical considerations are paramount. In this episode, I have the pleasure of speaking with Beth Rudden, CEO of Bast AI and a trailblazer in AI ethics. Her journey from IBM to leading Bast AI offers a unique lens on the intricate relationship between AI, ethics and technology.
Key Takeaways:
(01:25) Insights into diverse perspectives on AI regulation.
(02:24) Beth discusses the ethical risks in AI development.
(03:38) The importance of education in AI ethics and technology.
(05:05) Emphasizing explainable AI in regulation.
(06:35) Discussing the role of data privacy and dignity.
(09:01) The necessity of transparency in AI systems.
(12:16) The impact of AI on social media and communication.
(15:33) Core ethical principles in AI development.
(19:25) The role of accountability in AI systems.
(22:09) The concept of AI as a community utility.
(26:39) Beth's views on creating unbiased AI systems.
(30:17) The importance of human rights and privacy in AI.
(34:27) Addressing AI's role in societal issues.
Resources Mentioned:
Beth Rudden - https://www.linkedin.com/in/brudden/
Joy Buolamwini's "Unmasking AI" - https://www.penguinrandomhouse.com/books/670356/unmasking-ai-by-joy-buolamwini/
EU AI Act - https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
Bast AI Website - https://bast.ai/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Creating a safe and ethical AI system starts at its conception. On this episode, I have the pleasure of speaking with Haniyeh Mahmoudian, Ph.D., distinguished Global AI Ethicist at DataRobot and Advisor to NAIAC (National AI Advisory Committee). We discuss AI regulation, ethical considerations and the importance of education around responsible use of AI.
Key Takeaways:
(02:09) Insights into President Biden’s AI Executive Order.
(04:32) The importance of private-public partnerships in AI education and workforce upskilling.
(06:35) The need for realistic job qualifications in AI-related fields.
(08:23) The EU AI Act, its risk framework for AI use cases and the need for flexible and adaptable legislative frameworks in AI regulation.
(11:42) The US's approach to AI regulation compared to the EU.
(15:59) Ethical risks in AI development, particularly the lack of education in AI literacy.
(18:55) Ensuring historically marginalized communities can participate in and benefit from AI advancements.
(21:04) The need for robust governance processes and accountability at every stage of AI development and deployment.
(23:53) Challenges and benefits of democratizing AI technology access.
(25:50) The necessity of companies disclosing their use of AI systems to end-users.
(27:12) Concerns about the impact of AI, particularly deepfakes, on democracy.
Resources Mentioned:
Haniyeh Mahmoudian - https://www.linkedin.com/in/haniyeh-mahmoudian-ph-d-78a18072
DataRobot - https://www.linkedin.com/company/datarobot
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
EU AI Act - https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
National AI Advisory Committee Recommendations - https://ai.gov/naiac/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
This era of rapid technological advancement can make finding the equilibrium between innovation and responsible governance difficult. On this episode, I’m joined by Dr. Ravit Dotan, Founder and CEO of TechBetter, Responsible AI Advocate of Bria and AI Ethicist. We discuss the complexities of AI regulation in our modern world. We also focus on the pivotal role policies and ethics play in steering the course of AI toward a future that benefits all.
Key Takeaways:
(01:18) Discussing President Biden’s Executive Order on AI and its implications for a new era of regulation.
(03:02) Contrasting the divergent paths of the US and UK in AI regulation.
(07:18) Investigating AI regulation’s influence on innovation.
(08:22) Assessing the ethical risks of misinformation within AI systems.
(12:13) Addressing the amplification of biases in AI decision-making.
(16:42) The challenge of achieving fairness in AI.
(17:40) The necessity of banning harmful AI applications.
(19:52) The role of AI ethics officers in organizations.
(21:30) Analyzing responsibility in AI-related incidents.
(24:26) The influence of major tech companies on AI’s direction.
(30:50) Discussing strategies against AI deepfakes in political campaigns.
Resources Mentioned:
Dr. Ravit Dotan - https://www.linkedin.com/in/ravit-dotan/
TechBetter - https://www.linkedin.com/company/techbetter/
Bria - https://www.linkedin.com/company/briaai/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
EU AI Act - https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
On this episode of Regulating AI: Innovate Responsibly, I am thrilled to host Esha Bhandari, Deputy Project Director at the ACLU (American Civil Liberties Union), who shares her expertise in AI and civil liberties. Esha is also a Member of the Law Enforcement Subcommittee of the National AI Advisory Committee and Adjunct Professor of Clinical Law at the New York University School of Law.
We explore the complex relationship between artificial intelligence and civil liberties, discussing the implications of AI regulation, the challenges posed by algorithmic bias and the potential impact of AI on various sectors, including law enforcement, housing and employment.
Key Takeaways:
(01:59) Esha’s perspective on President Biden’s Executive Order on AI, emphasizing the inclusion of civil liberties and civil rights.
(04:01) Challenges in law enforcement and national security contexts regarding AI.
(07:56) A discussion on the potential of a separate government agency for AI regulation.
(10:41) The balancing act between preventing AI from replicating societal biases and fostering innovation.
(12:53) The question of liability in AI systems: developer, deployer, or user?
(14:21) Keeping pace with rapid AI advancements in policy and legislation.
(18:51) The ACLU’s stance on open-source technology and AI.
(25:01) The role AI regulation plays on a global scale.
(26:44) Addressing the potential impacts of AI on upcoming elections and protecting civil liberties.
Resources Mentioned:
Esha Bhandari - https://www.linkedin.com/in/eshabhandari/
ACLU (American Civil Liberties Union) - https://www.linkedin.com/company/aclu/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
EU AI Act - https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
On this episode, I'm delighted to be joined by a leading mind in AI, Stuart Russell, Professor of Computer Science at UC Berkeley; Former Chair of the Electrical Engineering and Computer Science Program at UC Berkeley; Holder of the Smith-Zadeh Chair in Engineering; Director of the Center for Human-Compatible AI; Author of Artificial Intelligence: A Modern Approach, which is currently part of the curriculum in 1,500 universities in 135 countries and translated into 20 languages.
Our conversation ventures into the depths of AI's potential, its impact on society and the critical role of legislation in shaping a safe and prosperous AI-powered future.
Key Takeaways:
(00:56) Introduction of Professor Stuart Russell and his significant contributions to AI.
(02:22) Analysis of the Biden Executive Order on AI and its limitations.
(03:49) Evolution and current status of the EU AI Act.
(07:31) The paradox of open-source AI in regulatory contexts.
(08:31) The challenge of controlling AI systems that are more powerful than humans.
(13:08) The necessity of proactive safety measures in AI development.
(15:12) The potential risks and concerns around AI agents.
(17:02) Balancing innovation and regulation in AI.
(19:20) Adapting AI legislation to technological advancements.
(21:49) The need for a dedicated regulatory agency for AI.
(26:08) Global collaboration on AI safety and national security.
(30:33) Public perception and education on AI safety.
(34:23) The role of AI in national security and ethical concerns.
(37:04) The impact of AI and deepfakes on the 2024 elections.
Resources Mentioned:
Stuart Russell - https://www.linkedin.com/in/stuartjonathanrussell/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
EU AI Act - https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
On this episode, I'm joined by Congresswoman Anna Eshoo, Co-Chair of the Congressional AI Caucus. Time Magazine has selected Anna as one of the 100 most influential people in AI, and I’m delighted to hear her invaluable insights into the legislative challenges and opportunities in the world of AI.
Key Takeaways:
(01:23) The role of the National AI Research Resource in President Biden’s executive order.
(03:20) The urgency for Congress to enact durable AI statutes.
(05:31) Objectives of the Create AI Act in making AI accessible to diverse sectors.
(08:03) The dynamic nature of AI policy and state-level legislation's role.
(10:43) The security implications of open-source AI models.
(12:18) Addressing the threat of deepfakes in elections.
(14:29) Strategies for workforce reskilling and attracting global AI talent.
(18:15) Democratizing AI to avert monopolistic trends.
(20:38) US Rep. Eshoo's predictions on the AI legislative timeline.
Resources Mentioned:
Anna Eshoo - https://www.linkedin.com/in/anna-eshoo-b0392095/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
National AI Research Resource - https://www.whitehouse.gov/ostp/news-updates/2023/01/24/national-artificial-intelligence-research-resource-task-force-releases-final-report/
Keep STEM Talent Act 2021 - https://www.congress.gov/bill/117th-congress/house-bill/5924?q=%7B%22search%22%3A%5B%22h.r.+5924%22%2C%22h.r.%22%2C%225924%22%5D%7D&s=1&r=2
Create AI Act - https://eshoo.house.gov/sites/evo-subsites/eshoo.house.gov/files/evo-media-document/eshoo_043_xml.pdf
Navigating the labyrinth of AI policy is a daunting task, especially for startups. In this episode, I explore this complex world with Nathan Lindfors, who brings unique insights from his role as Policy Director of Engine, an organization at the forefront of advocating for startup interests in the AI realm.
Key Takeaways:
(01:40) The mission and goals of Engine in advocating for startups.
(02:40) How startups differ from companies like OpenAI and Anthropic in the AI space.
(04:22) The role of Engine in educating startups on AI policy developments.
(05:33) Nathan’s take on President Biden’s Executive Order on AI.
(09:12) Concerns over regulatory capture impacting startup innovation.
(10:28) The debate around open-sourcing AI models.
(13:17) Addressing the risks of AI tools falling into the hands of bad actors.
(16:46) Liability issues in AI and their impact on startups.
(19:50) Preparing the workforce for the future of AI.
(23:25) The need for transparent AI usage disclosures by companies.
(25:28) Discussion on the complexities of global versus regional AI regulations.
Resources Mentioned:
Nathan Lindfors - https://www.linkedin.com/in/nathan-lindfors-24032b150/
Engine -
https://www.linkedin.com/company/engine-advocacy/
President Biden’s Executive Order on AI -
https://www.whitehouse.gov/
As artificial intelligence continues to revolutionize our society, the need for thoughtful regulation becomes increasingly crucial. In this episode, I have the honor of discussing these challenges with Senator Pete Ricketts from Nebraska. With his background in governance and entrepreneurship, Senator Ricketts offers invaluable insights into the legislative aspects of AI. Together, we delve into how to harness AI responsibly for the benefit of all.
Key Takeaways:
(01:45) Introduction of a bill for watermarking AI-generated materials.
(03:15) Addressing the concerns of deepfakes and intellectual property in the AI sphere.
(04:01) AI’s transformative potential and the critical need for careful regulation.
(05:19) The impact of AI on national security and election processes.
(05:44) The importance of including small businesses and educational institutions in AI legislation.
(07:00) The need for federal preemption over state laws to avoid a patchwork of AI regulations.
(08:08) The role of workforce reskilling and talent attraction in AI development.
(10:03) Predictions for the timeline of comprehensive AI legislation in Congress.
Resources Mentioned:
Senator Ricketts’ AI Watermarking Bill - https://www.ricketts.senate.gov/press-releases/ricketts-introduces-bill-to-combat-deepfakes-require-watermarks-on-a-i-generated-content/
National Security Implications of AI - https://www.csis.org/analysis/addressing-national-security-implications-ai
AI’s Role in Elections - https://www.brookings.edu/articles/how-ai-will-transform-the-2024-elections/
Navigating the complexities of AI isn’t just about technology. It’s about sculpting our future. In this episode, I’m joined by Congressman Jay Obernolte, representing California’s 23rd District and serving as Vice Chair of the Congressional AI Caucus. With a rich background in AI and a keen eye for policy, Congressman Obernolte offers invaluable insights into the intricate dance of AI innovation and regulation.
Key Takeaways:
(02:06) Assessing President Biden’s Executive Order on AI and concerns of regulatory overreach.
(04:54) Exploring the Create AI Act’s goal to democratize AI research across academia.
(06:41) Addressing the risk of regulatory capture in the AI industry.
(08:57) Evaluating the role of AI in hiring and the inherent challenges of bias.
(11:05) Debating the need for a new AI regulatory structure.
(14:25) Delving into the implications of open-source AI.
(16:08) Highlighting the role of AI in spreading misinformation and the importance of transparency.
(18:19) Emphasizing the need for diverse perspectives in shaping AI regulation.
(19:44) Advocating for federal over regional or global AI regulation models.
(21:42) Offering predictions on the timeline and direction of comprehensive AI legislation in Congress.
Resources Mentioned:
Congressman Jay Obernolte - https://www.linkedin.com/in/jayobernolte/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/
Create AI Act - https://www.congress.gov/
Are we ready for the AI revolution? How do we balance innovation with regulation? On this episode, I’m joined by Demetrios Brinkmann, Founder and CEO of the MLOps Community, to explore AI's impact on global economies, security and workforce, and the challenges in creating effective regulatory frameworks.
Key Takeaways:
(00:51) The dual role of AI in boosting GDP and posing a threat to workforce and national security.
(01:10) The US Congress' efforts to create a legislative framework for AI.
(02:14) The significance of the MLOps community in AI production.
(03:05) The impact of global AI regulations on the MLOps community.
(03:40) President Biden's Executive Order on AI and the challenges in regulating large language models.
(08:01) The EU's AI Act focusing on risk management and post-market monitoring.
(14:41) Identifying key risks from AI that require regulation.
(21:24) The debate over open-sourcing LLMs.
(26:15) Concerns about regulatory capture by big tech companies.
(30:38) The importance of global or regional AI regulations.
Resources Mentioned:
Demetrios Brinkmann - https://www.linkedin.com/in/dpbrinkm/
MLOps Community - https://ai-infrastructure.org/mlops-community-now/
President Biden's Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
EU AI Act - https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
In this episode, I’m joined by former Governor Terry McAuliffe, who shares his insights on the future of AI and its impact on job creation, national security and global technological dominance. With his extensive experience in both politics and entrepreneurship, Governor McAuliffe provides a unique perspective on the steps the United States must take to lead in AI innovation and regulation.
Key Takeaways:
(02:08) The significance of President Biden’s Executive Order on AI.
(03:46) The need for long-term, consistent AI standards and legislation.
(04:25) Addressing public concerns about AI and job displacement.
(06:16) The importance of establishing a regulatory agency for AI.
(07:37) Promoting AI education starting from kindergarten.
(09:18) Proposing a scholarship program for AI studies.
(10:19) AI’s role in maintaining global leadership and job growth.
(12:34) AI as a crucial aspect of national security.
Resources Mentioned:
President Biden’s Executive Order on AI
National Science Foundation (NSF)
National Institute of Standards and Technology (NIST)
Individual progress in technology isn’t just about personal achievement; it’s about shaping the future for society. On this episode, I’m joined by Congressman Don Beyer, US Representative for Virginia’s 8th District and Vice Chair of the AI Caucus in the House of Representatives, who brings a unique perspective to the table with his dedication to understanding and shaping AI legislation.
Key Takeaways:
(01:29) Congressman Beyer’s unique approach to learning about AI.
(02:55) The significance of President Biden’s Executive Order on AI.
(03:46) The debate on creating a separate regulatory agency for AI.
(06:36) The importance of democratizing AI through legislation like the Create AI Act.
(08:46) The pros and cons of open-sourcing AI models.
(12:10) AI’s role in political advertising and the need for ethical considerations.
(16:22) How AI will impact workforce and immigration policies.
(20:12) The priorities for AI legislation in Congress.
Resources Mentioned:
Congressman Don Beyer - https://www.linkedin.com/in/don-beyer-6b444b4/
House of Representatives - https://www.linkedin.com/company/u.s.-house-of-representatives/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/
Create AI Act - https://www.congress.gov/
Discussions on AI with EU Parliamentarians - https://www.europarl.europa.eu/
National AI Research Resource - https://www.nsf.gov/
The potential of AI is limitless, yet its implications are complex and multifaceted. Striking a balance between innovation and regulation is crucial for harnessing its benefits while safeguarding against risks.
In this episode, I sit down with Raja Krishnamoorthi, US Congressman representing Illinois’ 8th District, to delve deep into the world of AI, its possibilities, its dangers and how the US is positioning itself in this global race.
Key Takeaways:
(02:36) The necessity of AI regulation.
(03:06) Debating a potential AI regulatory agency.
(04:09) Concerns about global competitiveness, especially China’s AI advances.
(04:52) Introduction of the P.A.S.T. model for AI legislation: Privacy, Accountability, Security and Transparency.
(07:00) Concerns about regulatory capture by corporations and the need for diverse perspectives.
(08:35) Thoughts on open-sourcing large AI language models and implications.
(13:10) The geopolitical impact of AI development, especially in China’s context.
(15:48) Worries about deepfake technology and its election impact.
(21:34) Congressional challenges and ambitious goals for AI regulations, with potential timing considerations.
Resources Mentioned:
Raja Krishnamoorthi - https://www.linkedin.com/in/rajakrishnamoorthi/
US Congressman - https://www.linkedin.com/company/u.s.-house-of-representatives/