Fill out this census of everyone interested in reducing catastrophic AI risks
Two years ago, we ran a survey for everyone interested in improving humanity’s long-term prospects. The results of that survey have now been shared with over 150 organisations and individuals who have been hiring or looking for cofounders.
Today, we’re running a similar survey for everyone interested in working on reducing catastrophic risks from AI. We’re focusing on AI risks because:
- We’ve been getting lots of headhunting requests for roles in this space.
- It’s our current best guess at the world’s most pressing problem.
- Many people are motivated to reduce AI risks without buying into longtermism or effective altruism.
We’re interested in hearing from anyone who wants to contribute to safely navigating the transition to powerful AI systems — including via operations, governance, engineering, technical research, and field-building. This includes people already working at AI safety or EA organisations, and people who filled in the last survey.
By filling in this survey, you’ll be sharing information about yourself with over 100 potential employers or cofounders working on reducing catastrophic risks from AI, potentially increasing your chances of getting hired to work in this space. Your responses might also help us match you directly with projects and organisations we’re aware of. Hiring is challenging, especially for new organisations, so filling out this survey could be an extremely valuable use of a few minutes of your time.
Beyond your name, email, and LinkedIn (or CV), every question is optional. If you have an up-to-date LinkedIn or CV, you can complete the survey in two minutes. You can also provide more information, which might be used to connect you with an AI safety project.
We’ll share your responses with organisations working on reducing catastrophic risks from AI — like some of the ones here — when they’re hiring, and with individuals looking for a cofounder. We’ll only share your data with people we think are making positive contributions to the field[1], and we’ll ask them not to share your information further. If you wish to access your data, change it, or request that we delete it, you can reach us at census@80000hours.org.
Fill out this survey of everyone interested in working on reducing catastrophic risks from AI.
If you have a question, have ideas about how we could improve this survey, or find an error, please comment in this public doc (or comment below if you prefer).
[1] Broadly speaking, this includes teams we think are doing work which helps with AI existential risk. This includes some safety teams at big companies and most safety organisations, but not every team in these categories. It doesn’t include capabilities-focused roles.