I copied this related Facebook comment by Kerry Vaughan, dated 6th September 2018, from this public thread:
> (This post represents my views and not necessarily the views of everyone at CEA) [for whom Kerry worked at the time]
> [...] I think there are some biases in how the community allocates social status which incentivize people to do things that aren’t their comparative advantage.
> If you want to be cool in EA there are a few things you can do: (1) make sure you’re up to date on whatever the current EA consensus is on relevant topics; (2) work on whatever is the Hot New Thing in EA; and (3) have skills in some philosophical or technical area. Because most people care a lot about social acceptance, people will tend to do the things that are socially incentivized.
> This can cause too many EAs to try to become the shape necessary to work on AI-Safety or clean meat or biosecurity even if that’s not their comparative advantage. In the past these dynamics caused people to make themselves fit the shape of earning to give, research, and movement building (or feeling useless because they couldn’t). In the future, it will probably be something else entirely. And this isn’t just something people are doing on their own—at times it’s been actively encouraged by official EA advice.
> The problem is that following the social incentives in EA sometimes encourages people to have less impact instead of more. Following social incentives (1) disincentivizes people from actually evaluating the ideas for themselves and discourages healthy skepticism about whatever the intellectual consensus happens to be. (2) means that EAs are consistently trying to go into poorly-understood, ill-defined areas with poor feedback loops instead of working in established areas where we know how to generate impact or where they have a comparative advantage. (3) means that we tend to value people who do research more than people who do other types of work (e.g. operations, ETG).
> My view is that we should be praising people who’ve thought hard about the relevant issues and happen to have come to different conclusions than other people in EA. We should be praising people who know themselves, know what their skills are, know what they’re motivated to do, and are working on projects that they’re well-suited for. We should be praising people who run events, work a job and donate, or do accounting for an EA org, as well as people who think about abstract philosophy or computer science.
> CEA and others have taken some steps to help address this problem. Last year’s EA Global theme—Doing Good Together—was designed to highlight the ideas of comparative advantage, of seeing our individual work in the context of the larger movement and of not becoming a community of 1,000 shitty AI Safety researchers. We worked with 80K to communicate the importance of operations management (https://80000hours.org/articles/operations-management/) and CEA ran a retreat specifically for people interested in ops. We also supported the EA Summit because we felt that it was aiming to address some of these issues.
> Yet, there’s more work to be done. If we want to have a major impact on any cause we need to deploy the resources we have as effectively as possible. That means helping people in the community actually figure out their comparative advantage instead of distorting themselves to fit the Hot New Thing. It also means praising people who have found their comparative advantage whatever that happens to be.