Thanks for posting this—it was an interesting and thoughtful read for me as a community builder.
This summarised some thoughts I’ve had on this topic previously, and the implications on a large scale are concerning at the very least. In my experience, EA’s growth over the past couple of years has meant bringing on a lot of people with specific technical expertise (or people seeking to gain this expertise), such as those working on AI safety, biorisk, etc., with a skillset that would broadly include mathematics, statistics, logical reasoning, and some level of technical knowledge of their field. Often (speaking anecdotally here) these are the type of people who:
are really good at working on detailed problems with defined parameters (e.g. software developers)
are very open to hearing things that challenge or further their existing knowledge, and will seek these things out
will be easily persuaded by good arguments (and probably unlikely to push back if they find the arguments mostly convincing)
These people are pretty easy for community builders to deal with because there is a clear, well-forged pathway within EA for them. Community builders can say, “Go do a PhD in biorisk,” or “There’s a job open at DeepMind, you should apply for it,” and the person will probably go for it.
On the other hand, there are a whole range of people who don’t have the above traits, and instead have one (or more) of the following traits:
prefer broader, messier problems (e.g. policy analysts) and are not great at working on detailed problems within defined parameters (or are maybe just less interested in these types of problems)
are somewhat open to hearing things that challenge or further their existing knowledge, but might not continue to engage if they initially find something off-putting
can be persuaded to accept new arguments, but are more likely to push back, hold onto scepticism for longer, and won’t accept something simply because it is the commonly held view, even if the arguments for it are generally good
These people are harder for community builders to deal with, as there is no clear, well-forged pathway for them within EA, and they might also be less convinced by the pathways that do exist. (For example, a community builder might push someone with these traits towards working in AI policy, but that person might not be convinced that AI policy work is important, or that they personally can make a big difference in the field, and they won’t be as easily persuaded to apply for jobs in AI policy.) These people might also feel a bit lost when EAs try to push them towards high-impact work—they see the world in greyer terms, they carry more uncertainty, and they are more hesitant to go “all in” on a specified career path.
I think EA could derive a great deal of value from finding ways to engage people with these traits, and I also think people with at least one of these traits are probably more likely to fall into the categories you highlighted in your post – government/policy experts, managers, cause prioritizers (can’t think of a better title here), entrepreneurs, and people with high social/emotional skills. These are people who like big, messy, broad problems and who may generally take more time to accept new ideas and arguments.
In my community-building role, I want to attract and keep more of these people! I don’t have good answers for how to do this (yet), but I think being aware of the issue and trying to figure out some possible ways in which more people with these skills can be brought on board (as well as trying to figure out why EA might be off-putting to some of these people) is a great start.