For Humanity: An AI Safety Podcast
Did you know the makers of AI have no idea how to control their technology? They have no clue how to align it with human goals, values, and ethics. You know, stuff like: don't kill humans.
This is the AI safety podcast for all people, no tech background required. We focus only on the threat of human extinction from AI.
In Episode #2, The Alignment Problem, host John Sherman explores how alarmingly far AI safety researchers are from finding any way to control AI systems, much less their superintelligent children, who will arrive soon enough.