I am a research engineer working on AI safety at DeepMind. Formerly working at Improbable on simulations for decision making
I’m interested in AGI safety, complexity science, software engineering, models and simulations.
Regarding AI alignment and existential risk in general, Cummings already has a blog post where he mentions these: https://dominiccummings.com/2019/03/01/on-the-referendum-31-project-maven-procurement-lollapalooza-results-nuclear-agi-safety/
So he is clearly aware of and responsive to these ideas; it would be great to have an EA-minded person on his new team to emphasise them.