Hi, I’m Rohin Shah! I work as a Research Scientist on the technical AGI safety team at DeepMind. I completed my PhD at the Center for Human-Compatible AI at UC Berkeley, where I worked on building AI systems that can learn to assist a human user, even if they don’t initially know what the user wants.
I’m particularly interested in big picture questions about artificial intelligence. What techniques will we use to build human-level AI systems? How will their deployment affect the world? What can we do to make this deployment go better? I write up summaries and thoughts about recent work tackling these questions in the Alignment Newsletter.
In the past, I ran the EA groups at UC Berkeley and the University of Washington.
It doesn’t seem conservative in practice? Like Vasco, I’d be surprised if a portfolio aimed at reliable global capacity growth looked like the current GHD portfolio. For example:
1. Given an inability to help everyone, you’d want to target interventions based on people’s future ability to contribute. (E.g. you should probably stop any interventions that target people in extreme poverty.)
2. You’d either want to stop focusing on infant mortality, or start interventions to increase fertility, depending on whether population growth is a priority.
3. You’d want to invest more in education than would be suggested by typical metrics like QALYs or income doublings.
I’d guess most proponents of GHD would find (1) and (2) particularly bad.