One potential career option if you are interested in both AI safety and the psychology of judgment and decision making: work in the EA psychology lab with me and Lucius Caviola. We currently have open positions for research assistants and postdocs; the job postings are here: https://www.eapsychology.org/jobs. What’s more, I have it on Vael’s personal authority that they endorse this use of social science for helping with AI safety. The brief theory of change is roughly this: if the world ends because of AI, there’s a good chance that some people, somewhere along the line, made pivotal judgment errors that could have been avoided with a better understanding of the kinds of judgment errors most relevant to AI, AI policy, and AI alignment. We are conducting research on such judgment errors, among other x-risk- and EA-relevant topics. If you are interested in this kind of work and are at a career stage where a postdoc or research assistantship would be useful, please apply!
Thanks for a great post, Vael!
^ Yeah, endorsed! This is work in (3): if you’ve got the skills and interests, going to work with Josh and Lucius seems like an excellent opportunity, and they’ve got lots of interesting projects lined up.