Secondly, I find the principles themselves quite handwavey, and more like applause lights than practical statements of intent. What does "recognition of tradeoffs" involve doing? It sounds like something that will just happen rather than a principle one might apply. Isn't "scope sensitivity" basically a subset of the concerns implied by "impartiality"? Is something like "do a counterfactually large amount of good" supposed to be implied by impartiality and scope sensitivity? If not, why is it not on the list? If so, why does "scout mindset" need to be on the list, when "thinking through stuff carefully and scrupulously" is a prerequisite to effective counterfactual actions?
This poses some interesting questions, and I've thought about them a bit, although I'm still somewhat confused.
Let's start with the definition on effectivealtruism.org, which seems broadly reasonable:
Effective altruism is a research field and practical community that aims to find the best ways to help others, and put them into practice.
So what EA does is:
1. find the best ways to help others
2. put them into practice
So, basically, we are a company with a department that builds solar panels and another that runs photovoltaic power stations using these panels. Both are related but distinct. If the solar panels are faulty, this will affect the power station, but if the power station is built by cutting down primeval forest, the solar panel division is not at fault. Still, it will affect the reputation of the whole organisation, which will affect the solar engineers.
But going back to the points, we could add some questions:
1. Find the best ways to help others
   a. How do we find the best ways to help?
   b. Who are the others?
2. Put them into practice
   a. How do we put them into practice?
1.a seems pretty straightforward: if we have different groups working on this, then the less biased ones (using a scout mindset and being scope sensitive) and the ones using decision-making theories that recognize trade-offs and counterfactuals will fare better. Here, the principles follow logically from the requirements. If you want to make the best solar cells, you'll have to understand the science behind them.
1.b Here, we can see that EA is based on the value of impartiality, but impartiality is not a prerequisite for a group that wants to do good better. If I want to do the most good for my family, then I'm not impartial, but I could still use some of the methods EAs are using.
2.a could be done in many different ways: we could, for example, commit massive fraud to generate money that we then donate based on the principles described in 1.
In conclusion, I would see EA as:
1. A research field that aims to find the best ways to help others
2. A practical community that aims to put the results of 1 into practice
Having worked in startups and finance, I can imagine that there might be cost-effective ways to implement EA ideas without honesty, integrity, and compassion. Aside from the risks of this approach, I would also see dropping this value as leading to a very different kind of movement. If we're willing to piss off the neighbours of the power plant, then this will affect the reputation of the solar researchers.
In describing the history of EA, we could include the different tools and frameworks we have used, such as ITN (importance, tractability, neglectedness). But these don't need to be the ones we'll use in the future, so I see everything else as being downstream from the definition above.
Re-reading Will MacAskill's "Defining Effective Altruism" from 2019, I saw that he used a similar approach that resulted in four claims:
The ideas that EA is about maximising and about being science-aligned (understood broadly) are uncontroversial. The two more controversial aspects of the definition are that it is non-normative, and that it is tentatively impartial and welfarist.
He didn't include integrity and collaborative spirit. However, he posted in 2017 that these two are among the guiding principles of CEA and other organisations and key people.
Both the research field and the practical community are governed by the following values:
1. Impartiality or radical empathy
2. Good character or collaborative spirit
Those two values seem to me to reflect the boundaries that the movement's founders, the most engaged actors, and the biggest funders want to see.
Some people are conducting local prioritisation research, which might sometimes be worthwhile from an impartial standpoint, but giving up on impartiality would radically change the premise of EA work.