Welcome to the RAI Report from the Responsible AI Institute. Each week we bring you the latest news and trends happening in the responsible AI ecosystem with leading industry experts. Whether it's unpacking promising progress, pressing dilemmas, or regulatory updates, our trailblazing guests will spotlight emerging innovations through a practical lens, helping to implement and advance AI responsibly. Visit our website at responsible.ai.
The podcast Responsible AI Report is created by the Responsible AI Institute.
In this episode of the Responsible AI Report, Patrick speaks with Vyoma Gajjar about the critical issues surrounding responsible AI, particularly in the context of generative AI and chatbots. They discuss the balance between creating engaging AI interactions and maintaining transparency, the importance of implementing safety measures from the ground up, and the necessity of developing emotional intelligence frameworks to better understand and respond to user emotions. The conversation emphasizes the need for robust safety protocols and regulations to ensure that AI technologies are developed ethically and responsibly.
Learn more at:
https://www.linkedin.com/in/vyomagajjar/
Vyoma Gajjar is an AI Technical Solution Architect at IBM, specializing in generative AI, AI governance, and machine learning. With over a decade of experience, she has developed innovative solutions that emphasize ethical AI practices and responsible innovation across various global industries. Vyoma is a passionate advocate for AI governance and has contributed her expertise as a speaker and mentor in numerous academic and professional settings. She is dedicated to fostering a deeper understanding of AI's impact on society, promoting transparency, and enhancing trust in AI technologies. Vyoma holds a patent in AI and is actively involved in initiatives that drive positive change in the tech industry.
Visit our website at responsible.ai
Director/Producer, Sophie Compton, joins us for episode 05 of the Responsible AI Report! In this conversation, Sophie Compton discusses the implications of AI and deepfakes, particularly in the context of misinformation during elections and the broader societal impacts of deepfake abuse. She emphasizes the importance of consent, the structural issues surrounding deepfake technology, and the need for accountability from tech companies. The discussion also highlights the potential positive uses of AI in storytelling and the cultural implications of deepfake technology on gender equality.
Learn more at:
https://myimagemychoice.org/
@myimagemychoice
Sophie Compton is a documentary director and producer who tells women's stories of injustice and healing. Her work is impact-driven and she runs impact projects alongside each creative piece, amplifying survivor voices. Her projects have been supported by Sundance Institute, International Documentary Association, Impact Partners, Hot Docs, Arts Council England and others. Her debut feature ANOTHER BODY follows a student’s search for justice after discovering deepfake pornography of herself online. It premiered at SXSW 2023, winning the Special Jury Award for Innovation in Storytelling, and played at Hot Docs, Doc Edge, Champs Elysées, Munich, Aegean, DMZ, Woodstock, Mill Valley and New/Next Film Festivals among others, winning multiple Audience Awards. She is the co-founder of #MyImageMyChoice, a cultural movement tackling intimate image abuse. Her second feature HOLLOWAY (in post-production) follows six women returning to the abandoned prison where they were once incarcerated, produced by Grierson and BIFA-winning Beehive Films. Previously, she was Artistic Director of theatre company Power Play, producing/directing six plays including the Fringe First winning FUNERAL FLOWERS, and work at Tate Modern, V&A, Pleasance, Copeland Gallery. As an impact producer she has worked with grassroots organisations, NGOs, governments and press including The White House, World Economic Forum, and NOW THIS on viral content and new legislation and policy.
Visit our website at responsible.ai
In this episode of the Responsible AI Report, Patrick and Megha Sinha discuss the essential components of responsible AI governance. They explore the significant gap between AI ambitions and the resources available for implementing governance frameworks, emphasizing the need for organizations to establish clear ethical guidelines, accountability mechanisms, and cross-functional teams. Megha outlines an eight-step approach to building a responsible AI framework, highlighting the importance of transparency, bias mitigation, and continuous monitoring. The conversation also delves into the critical role of governance structures in ensuring accountability as global AI regulations evolve, and the necessity of incorporating responsible AI thinking from the design phase to prevent ethical and legal violations.
Takeaways
- 97% of organizations have set responsible AI goals, but 48% lack resources.
- Establishing a code of conduct is critical for responsible AI.
- Transparency is essential for building trust in AI systems.
- Governance structures are vital for ensuring accountability.
- Incorporate responsible AI thinking from the start of development.
- Prevent ethical and legal violations by embedding responsible AI early.
- Designing for explainability enhances accountability in AI.
- Continuous monitoring is necessary for responsible AI frameworks.
- Fostering a culture of responsible AI is crucial for success.
- AI governance must adapt to evolving regulations.
Learn more by visiting:
https://www.genpact.com/
https://www.linkedin.com/in/megha-sinha/
Article Referenced: https://www.prnewswire.com/news-releases/97-of-ai-leaders-commit-to-responsible-ai-yet-nearly-half-lack-resources-to-achieve-the-necessary-governance-302252621.html
Megha Sinha is an AI/ML leader with 15 years of expertise in shaping technology strategy and spearheading AI-driven transformations, and a Certified AI Governance Professional from the IAPP. As the leader of the AI/ML & Responsible AI Platform competency in the Global AI Practice, Megha has built high-performing teams across ML Engineering, ML Ops, LLM Ops, and Responsible AI to architect and scale robust platforms. Her leadership drives the strategic integration of AI technologies, ensuring the delivery of impactful, ethical solutions that align with enterprise goals and industry standards. She spearheaded the end-to-end launch of an enterprise-grade generative AI knowledge management product, driving product strategy, enabling go-to-market (GTM) execution, and establishing competitive pricing models. As a trusted advisor to client CXOs, she is known for her strategic foresight, strategy realization through effective implementation, and leadership in technology strategy and AI/ML solution design. Her ability to navigate the complex AI landscape and guide organizations toward measurable business outcomes instills confidence in her clients. Her leadership has enabled successful partnerships with industry bodies like NASSCOM, fostering joint solutions with Dataiku and driving responsible AI initiatives and partnerships that benefit clients. She has been recognized with the Women in Tech Leadership Award and is a thought leader in AI strategy and responsible AI. With numerous technical publications in IEEE journals, she shapes the conversation around scaling AI with ML Ops, LLM Ops, ethics, governance, and the future of technology leadership, positioning her at the forefront of AI-driven business transformation.
Visit our website at responsible.ai
In this episode of the Responsible AI Report, Patrick and Dr. Richard Saldanha discuss the EU's AI Code of Conduct and its collaborative approach to AI governance. They explore the importance of adaptability in regulations, the balance between innovation and safety, and the need for qualified personnel in regulatory bodies. Richard emphasizes the significance of a principles-based approach and the role of collaboration among stakeholders in shaping effective AI regulations.
Learn more by visiting:
1. Referenced article: https://www.ainews.com/p/eu-gathers-experts-to-draft-ai-code-of-practice-for-general-ai-models
2. EU AI Act 2024/1689: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
3. UK Automated Vehicles Act 2024: https://www.legislation.gov.uk/ukpga/2024/10/contents/enacted
4. Richard's Queen Mary University of London profile: https://www.qmul.ac.uk/sef/staff/richardsaldanha.html
5. Richard's Academic Speakers Bureau profile: https://www.academicspeakersbureau.com/speakers/richard-saldanha
6. The UK Institute of Science and Technology (IST) website: https://istonline.org.uk/
7. IST AI professional accreditation:
https://istonline.org.uk/professional-registration/registered-artificial-intelligence-practitioners/
8. IST AI training: https://istonline.org.uk/ist-artificial-intelligence-training/
Dr. Richard Saldanha is one of the founding members of the Institute of Science and Technology's Artificial Intelligence Special Interest Group in the UK. He is actively involved in developing the Institute's AI professional accreditation and hosts its online AI Seminar Series. Richard is a Visiting Lecturer at Queen Mary University of London, where he teaches Machine Learning in Finance on the Master's Degree Programme in the School of Economics and Finance. He is also an Industrial Collaborator in the AI for Control Problems Project at The Alan Turing Institute. Richard's earlier career was in quantitative finance (risk, trading, and investments), gaining over two decades of experience working for institutions in the City of London. He is still actively engaged in quantitative finance via Oxquant, a consulting firm he co-heads with Dr Drago Indjic. Richard attended Oriel College, University of Oxford, and holds a doctorate (DPhil) in graph theory and multivariate analysis. He is a Fellow and Chartered Statistician of the Royal Statistical Society; a Science Council Chartered Scientist; a Fellow and Advanced Practitioner in Artificial Intelligence of the Institute of Science and Technology; a Member of the Institution of Engineering and Technology; and has recently joined the Responsible AI Institute.
Visit our website at responsible.ai
In this episode of the Responsible AI Report, Patrick speaks with Caraline Bruzinski and Dr. Amber Jolley-Paige from mpathic about the intersection of AI and healthcare. They discuss the importance of measuring AI accuracy, the need for standardized testing, acceptable error rates in medical AI, and current trends in AI adoption within the healthcare sector. The conversation emphasizes the critical role of human oversight and expert involvement in ensuring the safety and efficacy of AI tools in medical applications.
Learn more by visiting:
https://mpathic.ai/
https://www.linkedin.com/in/amber-jolley-paige-ph-d-72041b46/
https://www.linkedin.com/in/caraline-7b22588b/
Dr. Jolley-Paige is a licensed professional counselor, researcher, and educator with over a decade of experience in the mental health field. As the Vice President of Clinical Product and a founding team member at mpathic, she leads a team that uses an evidence-based labeling system to advance natural language processing technologies. She leverages her extensive clinical, research, and teaching background to develop a conversation and insights engine, providing individuals and organizations with actionable insights for enhanced understanding.
Caraline Bruzinski is a Senior Machine Learning Engineer at mpathic, where she models clinical trial data from therapist-client sessions with a focus on measuring empathy and therapist-patient conversational outcomes. Caraline specializes in refining models to achieve higher accuracy and reliability, developing custom ML models tailored to specific clinical-setting challenges, and conducting statistical analysis to enhance the accuracy and fairness of machine learning outcomes. With a Master's degree in Computer Science from New York University, specializing in AI/ML, and a background in data engineering, she brings extensive experience from previous roles, including Tech Lead at Glossier Inc., where she developed a recommendation system that boosted sales by over $2M annually.
The Responsible AI Report is produced by the Responsible AI Institute.
Visit our website at responsible.ai
In episode 01 of the Responsible AI Report, Renata Dwan discusses the critical need for global governance of artificial intelligence (AI) and the challenges that arise from differing national perspectives. She emphasizes the importance of collaboration, equity, and transparency in developing effective AI governance frameworks. Dwan outlines strategies for achieving consensus among nations and highlights the role of the UN in facilitating dialogue and cooperation. The discussion also touches on the implications of AI for society and the necessity of addressing market failures and ensuring equitable distribution of AI benefits.
Learn more by visiting:
https://www.un.org/en/
https://www.un.org/techenvoy/
https://www.un.org/techenvoy/global-digital-compact
Renata Dwan is Special Adviser to the UN Secretary-General's Envoy on Technology, where she led support for the elaboration of the Global Digital Compact approved by heads of state at the UN Summit of the Future. Renata has driven multilateral cooperation initiatives for over 25 years within and outside the UN. As Director of the United Nations Institute for Disarmament Research (UNIDIR), she led initiatives on digital technology governance and arms control. She drove major UN-wide initiatives on UN reform and partnerships, and dialogue on the responsible use of technologies in UN peace operations. Previously, Renata was Deputy Director of Chatham House, the Royal Institute of International Affairs, where she oversaw the Institute's research agenda and digital initiatives. She was Programme Director at the Stockholm International Peace Research Institute (SIPRI) and a visiting fellow at the EU Institute for Security Studies. She received her B.A., M.Phil., and D.Phil. in International Relations from Oxford University, UK. Renata has published widely on international policy and security issues. She is an Irish national.
The Responsible AI Report is produced by the Responsible AI Institute.
Visit our website at responsible.ai
The Responsible AI Report is a brand new podcast, brought to you by the Responsible AI Institute. Each week we will bring you the latest news and trends happening in the responsible AI ecosystem with leading industry experts. Whether it's unpacking promising progress, pressing dilemmas, or regulatory updates, our trailblazing guests will spotlight emerging innovations through a practical lens, helping to implement and advance AI responsibly. We are excited to have you join us!
Visit our website at responsible.ai