[Question] I’m interviewing prolific AI safety researcher Richard Ngo (now at OpenAI and previously DeepMind). What should I ask him?

Next week I’m interviewing Richard Ngo, currently an AI (Safety) Governance Researcher at OpenAI and previously a Research Engineer at DeepMind.

Before that, he was doing a PhD in the philosophy of machine learning at Cambridge, on the topic of “to what extent is the development of artificial intelligence analogous to the biological and cultural evolution of human intelligence?”

He is focused on making the development and deployment of AGI more likely to go well and less likely to go badly.

Richard is also a highly prolific contributor to online discussion of AI safety across a range of venues.

What should I ask him?
