I think this is a strong post. It’s been obvious for a long time that the skills and inclinations that make a good philosophy professor or forum poster are not precisely the same as the skills and inclinations that make a good CEO or project manager.
Solving this problem by reducing the influence of analytical discussion in EA, however, would come at the cost of reducing the distinctiveness of EA as a movement.
What is EA? EA is 1) an existing network of human relationships, 2) a large pot of money, 3) a specific set of cultural norms about how weird philosophy nerds talk to each other and, downstream of 3), 4) a specific set of current ideas about how to do the most good.
The world has a very large number of altruistic ecosystems trying to do good. There are literally millions of civic organisations around the globe filled with worthy Haitian pastors. The Catholic Church is a single organisation with 1.3bn members and explicitly altruistic goals. In the EU alone, $13tn is invested in “ESG” funds with explicitly altruistic goals.
My concern is that a “big tent” approach which attempts to unite people around altruistic goals while jettisoning EA’s culture and methods will simply collapse into existing efforts to do good. EA’s unusual leverage comes from the fact that it is a relatively tightly connected group of quite unusual individuals with extremely unusual beliefs.
Underlying the OP is an implied discomfort with the existing distribution of views within EA. If spending resources on averting nuclear war, or global health and wellbeing, or AI, is not in fact the best way to make the world a better place, I would prefer to see a post arguing this explicitly. Peter seems to be imagining that it’s enough simply to build a large enough network of willing and capable volunteers, analogous to starting a company with the idea that once you have hired enough of the very best people across all continents the need to come up with a product will solve itself.
Two points. First, I don’t think jettisoning the EA culture is desirable or even possible. As the movement grows, some cultural change is inevitable, and posts about these cultural growing pains are some of the most popular on the forum. What I think is desirable is taking the best of what EA’s culture and people have to offer to help influence and improve all these other altruistic efforts. But doing that means a willingness to partner with and engage a wider group of people and organizations. The entire EA movement does not need to pivot in this way, but it is a direction that I think at least a modest part of EA should explicitly start experimenting with in the name of maximizing impact. You conduct experiments, collect data, and go from there.
Second, I think that ecosystem building is long and difficult work filled with a lot of very hard decisions. The reason people engage with it (including everyone involved in building the overall EA ecosystem) is the large payoff if you are successful.