I suspect that one aspect is what I would label as no path, but which we could also just describe as personal fit or as perceived personal tractability.
If you are in the process of studying machine learning and AI as part of a computer science degree, if you have the money for graduate school, if you are connected to the right institutions and have the right signals of competence, then sure: you can apply to work at an AI alignment research organization and go make a contribution[1]. But there are lots of people who don’t have a clear path. Should the nurse[2] who reads a book about AI go back to get another bachelor’s degree in a brand new field? If that person has enough money to support themselves for a few years of study and to pay for tuition, then maybe, but that feels like a big ask.
In writing this quick thought I’m only really thinking about the subset of people who both A) are familiar with the topic, and B) are convinced that it is real and worth working on. There are, of course, lots of people who don’t fall into both of these categories.
But remember that these orgs are super selective. Even if you have a computer science degree, an interest in AI alignment, and decent general work skills (communication, time management, organization, etc.), you have a slim chance of being employed there. I don’t have the exact numbers, but someone internal to an AI research org could maybe provide a rough estimate of “what percentage of reasonably qualified applicants get job offers from us.”
I picked nurse arbitrarily, but you could fill in the blank with some other job or career: bookkeeper, project manager, literary translator, civil engineer, recruiter, etc.
Indeed, I think I’m in the same predicament. Around 2020, largely because of bio anchors, I started thinking much more about how I could apply myself to x- and s-risks from AI rather than priorities research. I tried a few options, but found that direct research probably didn’t have sufficiently quick feedback loops to keep my attention for long. What stuck in the end was improving the funding situation through impactmarkets.io, which is already one or two steps removed from the object-level work. I imagine if I didn’t have any CS background, it would’ve been even harder to find a suitable angle.