AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
Not all algorithms are explainable. Does that mean it’s acceptable to provide no explanation of how your AI system reached its decision just because you’re using one of those “black box” algorithms? The answer should obviously be no. So what do you do, then, when creating Ethical and Responsible AI systems to address this issue of explainable and interpretable AI?