AI Safety Fundamentals: Governance
This report from the Centre for Emerging Technology and Security and the Centre for Long-Term Resilience identifies different policy levers as they apply to different stages of the AI lifecycle. The authors split the lifecycle into three stages: design, training, and testing; deployment and usage; and longer-term deployment and diffusion. They also introduce a risk mitigation hierarchy that ranks approaches in decreasing order of preference, arguing that “policy interventions will be most effective if they intervene at the point in the lifecycle where risk first arises.”
While the report is written for UK policymakers, most of its findings apply more broadly.
Original text:
https://cetas.turing.ac.uk/sites/default/files/2023-08/cetas-cltr_ai_risk_briefing_paper.pdf
Authors:
Ardi Janjeva, Nikhil Mulani, Rosamund Powell, Jess Whittlestone, and Shahar Avin
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.