Impact AI

Trustworthy AI with Yiannis Kanellopoulos from Code4Thought

24 min • April 24, 2023

The demand for trustworthy AI is increasing across all sectors. Today’s guest is committed to mitigating biases of AI models and ensuring the responsible use of AI technology. Yiannis Kanellopoulos is the Founder and CEO of Code4Thought, the state-of-the-art AI audit solution for trustworthy AI.

In this episode, we discuss what it means for AI to be trustworthy and Yiannis explains the process by which Code4Thought evaluates the trustworthiness of AI models. We discover how biases manifest and how best to mitigate them, as well as the role of explainability in evaluating the trustworthiness of a model. Tune in to hear Yiannis’ advice on what to consider when developing a model and why the trustworthiness of your business solution should never be an afterthought.


Key Points:

  • Yiannis Kanellopoulos’ background; how it led him to create Code4Thought.
  • What Code4Thought does and why it’s important for the future of AI.
  • What it means for AI to be trustworthy.
  • How Code4Thought evaluates the trustworthiness of AI models.
  • Yiannis shares a use case evaluation in the healthcare sphere.
  • Why Code4Thought’s independent perspective is so important.
  • Yiannis explains how biases manifest in AI technology and shares mitigation strategies.
  • The role explainability plays in evaluating the trustworthiness of a model.
  • Why explainability is particularly important for financial services.
  • Simultaneously optimizing accuracy and explainability.
  • What to consider when developing a model.
  • The increasing demand for trustworthy AI in various sectors.
  • Yiannis’ advice for other leaders of AI startups.
  • His vision for Code4Thought in the next three to five years.


Quotes:

“We are building technology for testing and auditing AI systems.” — Yiannis Kanellopoulos


“The team that produces an AI system [is] tasked to solve a business problem. They're optimizing for solving this problem. They're not being optimized to test the model adequately [and] ensure that the model is working properly and can be trusted.” — Yiannis Kanellopoulos


“One can say that by performing an explainability analysis, you can use it to essentially debug the way your model works.” — Yiannis Kanellopoulos


“Don’t try to optimize only the business problem. The quality of your solution [and] the trustworthiness of the solution, should not be an afterthought.” — Yiannis Kanellopoulos


Links:

Yiannis Kanellopoulos on LinkedIn

Yiannis Kanellopoulos on Twitter

Code4Thought

Code4Thought on YouTube

Code4Thought on LinkedIn

Code4Thought on Twitter

Resources for Computer Vision Teams:

LinkedIn – Connect with Heather.

Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.

Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1 hour strategy session now to advance your project.

Computer Vision Advisory Services – Monthly advisory services to help you strategically plan your CV/ML capabilities, reduce the trial-and-error of model development, and get to market faster.
