The AGI Safety & Alignment Team (ASAT) at Google DeepMind (GDM) is hiring! Please apply to the Research Scientist and Research Engineer roles. Strong software engineers with some ML background should also apply (to the Research Engineer role). Our initial batch of hiring will focus more on engineers, but we expect to continue using the applications we receive for future hiring this year, which will likely be more evenly split between the two roles. Please do apply even if, for example, you’re only available in the latter half of this year.
What is ASAT?
ASAT is the primary team at GDM focused on technical approaches to mitigating severe harms from AI systems, having evolved out of the Alignment and Scalable Alignment teams. We’re organized around two themes: AGI Alignment (think amplified oversight and mechanistic interpretability) and Frontier Safety (think development and implementation of the Frontier Safety Framework). The leadership team is Anca [...]
---
Outline:
(00:40) What is ASAT?
(01:14) Why should you join?
(03:05) What will we do in the near future?
(05:37) How do you prioritize across research topics?
(09:20) FAQ
(15:44) Apply now!
---
First published:
February 17th, 2025
Source:
https://www.lesswrong.com/posts/wqz5CRzqWkvzoatBG/agi-safety-and-alignment-google-deepmind-is-hiring
Narrated by TYPE III AUDIO.