Value Driven Data Science: Boost your impact. Earn what you’re worth. Rewrite your career algorithm.
We all know what it means for a human to discriminate against another human, but the idea of a predictive model or an artificial intelligence discriminating is relatively new. What does it mean for a model or an AI to discriminate against someone?
In this episode of Value Driven Data Science, Dr Genevieve Hayes is joined by Dr Fei Huang to discuss the importance of considering fairness and avoiding discrimination when developing machine learning models for your business.
Guest Bio
Dr Fei Huang is a senior lecturer in the School of Risk and Actuarial Studies at the University of New South Wales, who has won awards for both her teaching and her research. Her main research interests are predictive modelling and data analytics, and she has recently been focusing on insurance discrimination and pricing fairness.
Talking Points
Links
Fei’s papers on fairML and insurance pricing: