Welcome to the RAI Report from the Responsible AI Institute. Each week we bring you the latest news and trends happening in the responsible AI ecosystem with leading industry experts. Whether it’s unpacking promising progress, pressing dilemmas, or regulatory updates, our trailblazing guests will spotlight emerging innovations through a practical lens, helping to implement and advance AI responsibly. Support the show. Visit our website at responsible.ai.
The podcast Responsible AI Report is created by the Responsible AI Institute.
In this episode of the Responsible AI Report, Vilas Dhar, President & Trustee of the Patrick J. McGovern Foundation, discusses the critical role of philanthropy in shaping responsible AI development amidst rapid technological changes. He emphasizes the need for inclusive participation from communities, the importance of global governance, and the necessity of education in empowering policymakers and citizens alike. Vilas highlights the urgency of ensuring accountability and transparency in AI systems, advocating for a future where technology serves the common good.
Takeaways
Learn more at:
https://www.mcgovern.org/
https://www.linkedin.com/in/vilasdhar/
https://www.linkedin.com/company/mcgovern-foundation/
Vilas Dhar is a leading advocate on AI for public purpose and a global expert on artificial intelligence (AI) policy. He serves as President and Trustee of the Patrick J. McGovern Foundation, a $1.5 billion philanthropy advancing AI and data solutions for a sustainable and equitable future. He champions a new digital compact that prioritizes individuals and communities in the development of new products, inspires economic and social opportunities, and empowers the most vulnerable.
Appointed by UN Secretary-General António Guterres to the High-Level Advisory Body on AI, Vilas is also the U.S. Government Nominated Expert to the Global Partnership on AI. He serves on the OECD Expert Working Group on AI Futures, the Global Future Council on AI at the World Economic Forum, and Stanford's Advisory Council on Human-Centered AI. He is Chair of the Center for Trustworthy Technology. His LinkedIn Learning course, Ethics in the Age of Generative AI, is the most-viewed AI ethics course globally, reaching over 300,000 learners.
Vilas holds a J.D. from NYU School of Law, an M.P.A. from Harvard Kennedy School, and dual Bachelor's degrees in Biomedical Engineering and Computer Science from the University of Illinois, and is pursuing doctoral studies at the University of Birmingham.
Visit our website at responsible.ai
In this episode of the Responsible AI Report, Duncan Crabtree-Ireland, National Executive Director and Chief Negotiator at SAG-AFTRA, discusses the impact of AI on the entertainment industry, particularly in light of the historic 2023 strike. He outlines the protections negotiated for artists regarding their likenesses and creative work, emphasizing the importance of informed consent and fair compensation. Duncan also shares insights on how SAG-AFTRA plans to evolve its AI guidelines to keep pace with emerging technologies and highlights the vision of using AI to augment rather than replace human creativity. The discussion concludes with a call to action for ongoing engagement and education around AI's role in the industry.
Takeaways
Learn more at:
https://www.sagaftra.org/
https://www.linkedin.com/company/screen-actors-guild/
https://www.linkedin.com/in/duncanci/
@duncanci
@sagaftra
Visit our website at responsible.ai
In this episode of the Responsible AI Report, Patrick speaks with Carmine Valente, the Global Head of Cybersecurity Risk at Paramount, about the intersection of AI and cybersecurity in the entertainment industry. They discuss the legal and security risks associated with AI, the importance of understanding AI usage, and the need for robust cybersecurity protocols to protect intellectual property. The conversation also explores how entertainment companies can balance innovation with risk management and the frameworks that can guide responsible AI governance.
Takeaways
Learn more at:
https://www.linkedin.com/in/carminevalente/
Carmine Valente is an Information Security Executive with extensive cross-cultural experience across Cyber Security, Risk Management, Incident Response, Attack Surface Management, AI Security, Audit, Business Resilience, Data Security, and Board Advisory. With a background in computer science and software engineering and a Master of Science in Cybersecurity Leadership, he has advised global clients in sectors including high tech, telecom, healthcare, government administration, media, financial services, and professional services, providing strategic influence across many cross-cultural Fortune 100 and Fortune 500 organizations. In his current and past roles, Carmine has provided visionary leadership against advanced persistent threats to organizational infrastructure while balancing risk, privacy, and compliance and empowering the business lines. He is published by Springer in the book “Machine Learning and Data Mining in Pattern Recognition” and was recognized for his research in AI-driven network security at the 6th International Conference on Data Mining in Leipzig, Germany. Carmine is regarded by peers as a highly skilled subject matter expert in the field of Cyber Security.
Visit our website at responsible.ai
In episode 13 of the Responsible AI Report, Patrick speaks with Christophe Rougeaux about the importance of responsible AI and model risk management in the financial sector. They discuss how banks can expand their model risk management capabilities to include AI oversight, the challenges of building specialized expertise in risk management teams, and strategies for accelerating AI deployment while maintaining robust risk management practices. Christophe emphasizes the need for a holistic understanding of the AI lifecycle, continuous improvement, and a supportive culture for safe AI implementation.
Takeaways
Learn more at:
https://www.linkedin.com/in/christopherougeaux/
Christophe Rougeaux is an expert in analytics who helps global organizations ensure effective and sustainable management of their analytics through robust oversight governance. Christophe previously co-led McKinsey's Model Risk Management service line. Since 2024, he has been a Model Risk Management Executive at TD Bank Group, where he heads model validation for the non-retail portfolio and leads strategic AI/Model Governance initiatives.
Visit our website at responsible.ai
In this episode of the Responsible AI Report, Patrick and Betty Louie discuss the evolving landscape of responsible AI, focusing on the importance of developing internal governance frameworks for AI compliance amidst fragmented global regulations. Betty emphasizes the need for companies to establish their own AI principles and policies to navigate the complexities of AI regulation effectively. They also explore the significance of self-regulation and the proactive steps organizations should take to ensure ethical AI use and compliance with emerging regulations.
Takeaways
Learn more at:
https://www.linkedin.com/in/betty-louie-039a1920/
https://www.linkedin.com/company/the-brandtech-group/
https://thebrandtechgroup.com/
Recent work:
AdExchanger: https://www.adexchanger.com/data-driven-thinking/5-tips-for-drafting-an-ethical-generative-ai-policy/ and https://www.adexchanger.com/adexchanger-talks/405939/
Creative Ops: https://creativeops.fm/episode/e19-legal-as-co-pilot-in-accelerating-creatives-ai-adoption-w-betty-louie-of-brandtech-group
BrXnd: https://brxnd.ai/sessions/navigating-legal-risks-in-gen-ai-a-practical-guide-for-companies-with-betty-louie-and-shareen-pathak
Authority Magazine: https://medium.com/authority-magazine/c-suite-perspectives-on-ai-betty-louie-of-the-brandtech-group-on-where-to-use-ai-and-where-to-rely-aa920c35cd83
Betty Louie is a Partner and General Counsel at The Brandtech Group. She has more than 25 years’ experience advising both public and private tech companies, and was previously a partner at a leading international law firm. She has been consistently ranked in Chambers Global and Legal500 since 2012. Betty oversaw Brandtech’s 2023 acquisitions of Jellyfish, a digital media company, and Pencil, a Generative AI platform, and works extensively with major global brands to design robust and ethical AI and Gen AI policies. She spearheaded Brandtech’s green-listing system to enable companies to experiment and explore new Gen AI tools within certain legal, tech, and ethical parameters. She is a leading speaker and industry thought leader.
Visit our website at responsible.ai
In this episode of the Responsible AI Report, Patrick speaks with Jeff Redel, Managing Director of the Data and AI governance team at ATB Financial. They discuss the evolution of AI agents, the importance of ethical implementation in banking, the skills financial professionals will need in the future, the necessity of human oversight in AI processes, and the regulatory challenges that accompany the rapid advancement of AI technology. Jeff emphasizes the need for a strong foundation in data governance and ethics, as well as the importance of education and adaptability for team members in the face of AI integration.
Takeaways
Learn more at:
https://www.atb.com/personal/
https://www.linkedin.com/in/jeff-redel-2097291/
Jeff Redel is the Managing Director of ATB Financial’s Data & AI Governance team. His team focuses on ethical and responsible AI, data governance excellence, and strategic leadership and vision. Key areas include championing fairness and inclusivity, prioritizing transparency and explainability, upholding privacy and security, promoting responsible AI use, establishing data quality and integrity, ensuring data security and compliance, driving data literacy and accessibility, optimizing data management and architecture, aligning data and AI with business goals, fostering collaboration and communication, promoting a culture of innovation and learning, and building a high-performing team.
Visit our website at responsible.ai
In this episode of the Responsible AI Report, Patrick speaks with Amy Challen, the Global Head of AI at Shell. They discuss the current landscape of AI, including the ethical considerations in AI development, the importance of risk management, and the public discourse surrounding responsible AI. The conversation highlights the need for a balanced approach to AI innovation and the role of leadership in navigating these challenges.
Takeaways
Learn more at:
https://www.shell.com/what-we-do/digitalisation/artificial-intelligence.html
Amy Challen is the Global Head of Artificial Intelligence at Shell, responsible for driving delivery and adoption of AI technologies, including natural language processing, computer vision, and deep reinforcement learning.
She spent the first decade of her career in academia as a researcher in applied econometrics, before joining McKinsey & Company as a strategy consultant. As a consultant she solved real-world problems across diverse functions and industries, for some of the world’s largest organizations, delivering significant commercial value. She joined Shell in 2019.
Visit our website at responsible.ai
In this episode, Patrick speaks with Bryan McGowan and Chris Jambor from KPMG about the importance of responsible AI practices. They discuss the limitations of AI models, the development and significance of AI system cards, and how these tools can help mitigate risks associated with AI technologies. The conversation emphasizes the need for a structured approach to AI governance and the role of transparency and accountability in building trust in AI systems.
Takeaways
Learn more at:
https://kpmg.com/xx/en/what-we-do/services/kpmg-trusted-ai.html
Bryan McGowan is a Principal in the KPMG Advisory practice and leader of US Trusted AI for Consulting. In this role, Bryan continues to pursue his passion for leveraging technology to drive efficiency, enhance insights, and improve results. Trusted AI combines deep industry expertise across the firm’s Risk Services, Lighthouse, and Cyber businesses with modern technical skills to help business leaders harness the power of AI to accelerate value in a trusted manner—from strategy and design through to implementation and ongoing operations. Bryan also leads the Trusted AI go-to-market efforts for the Risk Services business and co-developed the firm’s Risk Intelligence product suite to help identify, manage, and quantify risks across the enterprise. His primary focus areas are business process improvement, control design and automation, and managing risks associated with emerging technologies. Bryan has over 20 years’ experience running large, complex projects across a variety of industries, including supporting clients on their automation and analytics journeys for the better part of the last decade—designing and developing bots, RPA, initial AI/ML models, and more.
Chris is a member of the KPMG AI & Digital Innovation Group's Trusted AI Team with a specialized focus on AI literacy and the responsible & ethical uses of AI. Before joining the Trusted AI team, Chris was an AI Strategy Consultant & Analytics Engineer working in industries such as technology, entertainment, healthcare, pharmaceuticals, marketing/advertising, higher education, and cybersecurity.
Visit our website at responsible.ai
For this episode of the Responsible AI Report, Soribel Feliz discusses the complexities of AI regulation, emphasizing the need for a balanced approach that considers both innovation and the rights of creators. She highlights the challenges faced by startups in complying with regulations and the differing impacts of state versus federal policies. The discussion also touches on the evolving landscape of intellectual property rights in the context of AI development.
Takeaways
Learn more at:
https://www.linkedin.com/in/soribel-f-b5242b14/
https://www.linkedin.com/newsletters/responsible-ai-=-inclusive-ai-7046134543027724288/
Soribel Feliz is a thought leader in Responsible AI and AI governance. She started her career as a U.S. diplomat with the Department of State. She also worked for Big Tech companies, Meta and Microsoft, and most recently, worked as a Senior AI and Tech Policy Advisor in the U.S. Senate.
Visit our website at responsible.ai
In this episode of the Responsible AI Report, Patrick speaks with Vyoma Gajjar about the critical issues surrounding responsible AI, particularly in the context of generative AI and chatbots. They discuss the balance between creating engaging AI interactions and maintaining transparency, the importance of implementing safety measures from the ground up, and the necessity of developing emotional intelligence frameworks to better understand and respond to user emotions. The conversation emphasizes the need for robust safety protocols and regulations to ensure that AI technologies are developed ethically and responsibly.
Takeaways
Learn more at:
https://www.linkedin.com/in/vyomagajjar/
Vyoma Gajjar is an AI Technical Solution Architect at IBM, specializing in generative AI, AI governance, and machine learning. With over a decade of experience, she has developed innovative solutions that emphasize ethical AI practices and responsible innovation across various global industries. Vyoma is a passionate advocate for AI governance and has contributed her expertise as a speaker and mentor in numerous academic and professional settings. She is dedicated to fostering a deeper understanding of AI's impact on society, promoting transparency, and enhancing trust in AI technologies. Vyoma holds a patent in AI and is actively involved in initiatives that drive positive change in the tech industry.
Visit our website at responsible.ai
Director/Producer, Sophie Compton, joins us for episode 05 of the Responsible AI Report! In this conversation, Sophie Compton discusses the implications of AI and deepfakes, particularly in the context of misinformation during elections and the broader societal impacts of deepfake abuse. She emphasizes the importance of consent, the structural issues surrounding deepfake technology, and the need for accountability from tech companies. The discussion also highlights the potential positive uses of AI in storytelling and the cultural implications of deepfake technology on gender equality.
Takeaways
Learn more at:
https://myimagemychoice.org/
@myimagemychoice
Sophie Compton is a documentary director and producer who tells women's stories of injustice and healing. Her work is impact-driven and she runs impact projects alongside each creative piece, amplifying survivor voices. Her projects have been supported by Sundance Institute, International Documentary Association, Impact Partners, Hot Docs, Arts Council England and others. Her debut feature ANOTHER BODY follows a student’s search for justice after discovering deepfake pornography of herself online. It premiered at SXSW 2023, winning the Special Jury Award for Innovation in Storytelling, and played at Hot Docs, Doc Edge, Champs Elysées, Munich, Aegean, DMZ, Woodstock, Mill Valley and New/Next Film Festivals among others, winning multiple Audience Awards. She is the co-founder of #MyImageMyChoice, a cultural movement tackling intimate image abuse. Her second feature HOLLOWAY (in post-production) follows six women returning to the abandoned prison where they were once incarcerated, produced by Grierson and BIFA-winning Beehive Films. Previously, she was Artistic Director of theatre company Power Play, producing/directing six plays including the Fringe First winning FUNERAL FLOWERS, and work at Tate Modern, V&A, Pleasance, Copeland Gallery. As an impact producer she has worked with grassroots organisations, NGOs, governments and press including The White House, World Economic Forum, and NOW THIS on viral content and new legislation and policy.
Visit our website at responsible.ai
In this episode of the Responsible AI Report, Patrick and Megha Sinha discuss the essential components of responsible AI governance. They explore the significant gap between AI ambitions and the resources available for implementing governance frameworks, emphasizing the need for organizations to establish clear ethical guidelines, accountability mechanisms, and cross-functional teams. Megha outlines an eight-step approach to building a responsible AI framework, highlighting the importance of transparency, bias mitigation, and continuous monitoring. The conversation also delves into the critical role of governance structures in ensuring accountability as global AI regulations evolve, and the necessity of incorporating responsible AI thinking from the design phase to prevent ethical and legal violations.
Takeaways
- 97% of organizations have set responsible AI goals, but 48% lack resources.
- Establishing a code of conduct is critical for responsible AI.
- Transparency is essential for building trust in AI systems.
- Governance structures are vital for ensuring accountability.
- Incorporate responsible AI thinking from the start of development.
- Prevent ethical and legal violations by embedding responsible AI early.
- Designing for explainability enhances accountability in AI.
- Continuous monitoring is necessary for responsible AI frameworks.
- Fostering a culture of responsible AI is crucial for success.
- AI governance must adapt to evolving regulations.
Learn more by visiting:
https://www.genpact.com/
https://www.linkedin.com/in/megha-sinha/
Article Referenced: https://www.prnewswire.com/news-releases/97-of-ai-leaders-commit-to-responsible-ai-yet-nearly-half-lack-resources-to-achieve-the-necessary-governance-302252621.html
Megha Sinha is an AI/ML leader with 15 years of expertise in shaping technology strategy and spearheading AI-driven transformations, and a Certified AI Governance Professional (IAPP). As leader of the AI/ML & Responsible AI Platform competency in the Global AI Practice, Megha has built high-performing teams across ML Engineering, ML Ops, LLM Ops, and Responsible AI to architect and scale robust platforms. Her leadership drives the strategic integration of AI technologies, ensuring the delivery of impactful, ethical solutions that align with enterprise goals and industry standards. She spearheaded the end-to-end launch of an enterprise-grade Generative AI Knowledge Management product, driving product strategy, go-to-market (GTM) execution, and competitive pricing models. A trusted advisor to client CXOs, she is known for her strategic foresight, her ability to turn strategy into the right implementation, and her leadership in technology strategy and AI/ML solution design. Her ability to navigate the complex AI landscape and guide organizations toward measurable business outcomes instills confidence in her clients. Her leadership has enabled successful partnerships with industry bodies such as NASSCOM, fostering joint solutions with Dataiku and driving Responsible AI initiatives that benefit clients. She has been recognized with the Women in Tech Leadership Award and is a thought leader in AI strategy and responsible AI. With numerous technical publications in IEEE journals, she shapes the conversation around scaling AI with ML Ops, LLM Ops, ethics, governance, and the future of technology leadership, positioning her at the forefront of AI-driven business transformation.
Visit our website at responsible.ai
In this episode of the Responsible AI Report, Patrick and Dr. Richard Saldanha discuss the EU's AI Code of Conduct and its collaborative approach to AI governance. They explore the importance of adaptability in regulations, the balance between innovation and safety, and the need for qualified personnel in regulatory bodies. Richard emphasizes the significance of a principles-based approach and the role of collaboration among stakeholders in shaping effective AI regulations.
Takeaways
Learn more by visiting:
1. Referenced article: https://www.ainews.com/p/eu-gathers-experts-to-draft-ai-code-of-practice-for-general-ai-models
2. EU AI Act 2024/1689: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
3. UK Automated Vehicles Act 2024: https://www.legislation.gov.uk/ukpga/2024/10/contents/enacted
4. Richard's Queen Mary University of London profile: https://www.qmul.ac.uk/sef/staff/richardsaldanha.html
5. Richard's Academic Speakers Bureau profile: https://www.academicspeakersbureau.com/speakers/richard-saldanha
6. The UK Institute of Science and Technology (IST) website: https://istonline.org.uk/
7. IST AI professional accreditation:
https://istonline.org.uk/professional-registration/registered-artificial-intelligence-practitioners/
8. IST AI training: https://istonline.org.uk/ist-artificial-intelligence-training/
Dr. Richard Saldanha is a founding member of the Institute of Science and Technology's Artificial Intelligence Special Interest Group in the UK. He is actively involved in the development of the Institute's AI professional accreditation and hosts its online AI Seminar Series. Richard is a Visiting Lecturer at Queen Mary University of London, where he teaches Machine Learning in Finance on the Master's Degree Programme in the School of Economics and Finance. He is also an Industrial Collaborator in the AI for Control Problems Project at The Alan Turing Institute. Richard's earlier career was in quantitative finance (risk, trading, and investments), where he gained over two decades of experience working for institutions in the City of London. He remains actively engaged in quantitative finance via Oxquant, a consulting firm he co-heads with Dr Drago Indjic. Richard attended Oriel College, University of Oxford, and holds a doctorate (DPhil) in graph theory and multivariate analysis. He is a Fellow and Chartered Statistician of the Royal Statistical Society; a Science Council Chartered Scientist; a Fellow and Advanced Practitioner in Artificial Intelligence of the Institute of Science and Technology; a Member of the Institution of Engineering and Technology; and has recently joined the Responsible AI Institute.
Visit our website at responsible.ai
In this episode of the Responsible AI Report, Patrick speaks with Caraline Bruzinski and Dr. Amber Jolley-Paige from mpathic about the intersection of AI and healthcare. They discuss the importance of measuring AI accuracy, the need for standardized testing, acceptable error rates in medical AI, and current trends in AI adoption within the healthcare sector. The conversation emphasizes the critical role of human oversight and expert involvement in ensuring the safety and efficacy of AI tools in medical applications.
Takeaways
Learn more by visiting:
https://mpathic.ai/
https://www.linkedin.com/in/amber-jolley-paige-ph-d-72041b46/
https://www.linkedin.com/in/caraline-7b22588b/
Dr. Jolley is a licensed professional counselor, researcher, and educator with over a decade of experience in the mental health field. As the Vice President of Clinical Product and a founding team member at mpathic, she leads a team that utilizes an evidence-based labeling system to advance natural language processing technologies. Dr. Jolley leverages her extensive clinical, research, and teaching background to develop a conversation and insights engine, providing individuals and organizations with actionable insights for enhanced understanding.
Caraline Bruzinski is a Senior Machine Learning Engineer at mpathic, where she models clinical trial data from therapist-client sessions with a focus on measuring empathy and therapist-patient conversational outcomes. Caraline specializes in refining models to achieve higher accuracy and reliability, developing custom ML models tailored to address specific clinical setting challenges, and conducting statistical analysis to enhance the accuracy and fairness of machine learning outcomes. With a Master’s degree in Computer Science, specifically focusing on AI/ML, from New York University and a background in data engineering, she brings extensive experience from her previous roles, including as Tech Lead at Glossier Inc. There, she developed a recommendation system that boosted sales by over $2M annually.
The Responsible AI Report is produced by the Responsible AI Institute.
Visit our website at responsible.ai
In episode 01 of the Responsible AI Report, Renata Dwan discusses the critical need for global governance of artificial intelligence (AI) and the challenges that arise from differing national perspectives. She emphasizes the importance of collaboration, equity, and transparency in developing effective AI governance frameworks. Dwan outlines strategies for achieving consensus among nations and highlights the role of the UN in facilitating dialogue and cooperation. The discussion also touches on the implications of AI for society and the necessity of addressing market failures and ensuring equitable distribution of AI benefits.
Takeaways
Learn more by visiting:
https://www.un.org/en/
https://www.un.org/techenvoy/
https://www.un.org/techenvoy/global-digital-compact
Renata Dwan is Special Adviser to the UN Secretary-General’s Envoy on Technology, where she led support for the elaboration of the Global Digital Compact approved by heads of state at the UN Summit of the Future. Renata has driven multilateral cooperation initiatives for over 25 years within and outside the UN. As Director of the United Nations Institute for Disarmament Research (UNIDIR), she led initiatives on digital technology governance and arms control. She drove major UN-wide initiatives on UN reform and partnerships, and dialogue on the responsible use of technologies in UN peace operations. Previously, Renata was Deputy Director of Chatham House, the Royal Institute of International Affairs, where she oversaw the Institute’s research agenda and digital initiatives. She was Programme Director at the Stockholm International Peace Research Institute (SIPRI) and a visiting fellow at the EU Institute for Security Studies. She received her B.A., M.Phil., and D.Phil. in International Relations from Oxford University, UK. Renata has published widely on international policy and security issues. She is an Irish national.
The Responsible AI Report is produced by the Responsible AI Institute.
Visit our website at responsible.ai
The Responsible AI Report is a brand new podcast, brought to you by the Responsible AI Institute. Each week we will bring you the latest news and trends happening in the responsible AI ecosystem with leading industry experts. Whether it's unpacking promising progress, pressing dilemmas, or regulatory updates, our trailblazing guests will spotlight emerging innovations through a practical lens, helping to implement and advance AI responsibly. We are excited to have you join us!
Visit our website at responsible.ai