I was reading Lifeblood by Alex Perry (which tells the story of malaria bed nets). The book opens by criticizing many aid organizations, because Perry claims that the aim of aid should be “for the day it’s no longer needed”. E.g., the Canadian Cancer Society should be aiming for the day when cancer research is unnecessary because we’ve already figured out how to beat cancer. What aid organizations actually do, however, is expand to fill a whole range of other needs, which is suboptimal.
EA is really no exception here. Suppose that in the future we’ve tackled global poverty, animal welfare, climate change, AI risk, etc. We would just move on to the next most important thing. Of course, EA differs from classical aid organizations in that it’s closer to a movement/philosophy than a single aid effort. Nevertheless, I still think it might be useful to define “winning” as “eliminating the need for something”. This could be something like “reaching the day when we no longer need to support GiveDirectly [because we’ve already eliminated poverty/destitution, or because we’ve reached a level of wealth redistribution such that nobody is living below X dollars a year].”
On that note, for Effective Altruist organizations, I imagine that ‘not being needed’ means ‘no longer being the best use of our resources’, or ‘having hit significant diminishing marginal returns to additional work’. That said, the condition for an organization to rationally shut down is different from its success condition.
One obvious point: most organizations/causes have multiple, increasingly ambitious success conditions. There’s not one ‘success condition’ but a progressive series of improvements, so we won’t ‘win’ in any absolute sense. I don’t think Martin Luther King would have said that he ‘won’: he accomplished a lot, but things got complicated toward the end and there was still much left to be done. Needless to say, though, he did quite well.
A better set of questions may be: ‘What are some reasonable goals to aim for?’ and then ‘How can we measure how far we are from those specific goals?’
In completely pragmatic terms, I think the best goals for us are not legislative but monetary: annual donations to EA-related causes.
Goal 1: $100M/year
Goal 2: $1B/year
Goal 3: $10B/year
etc.
The ultimate goal for all of us may be a positive singularity, though that is separate from effective altruism itself and harder to measure. Also, the donation totals above would of course have to be adjusted for the quality of each EA organization relative to the best one available.
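To make that adjustment concrete, here is a minimal sketch of what a quality-adjusted donation total could look like, assuming each organization gets an effectiveness multiplier between 0 and 1 relative to the best available giving opportunity. All organization names, multipliers, and dollar amounts are hypothetical placeholders, not real figures.

```python
# A minimal sketch of quality-adjusted donation totals.
# All org names, effectiveness multipliers, and dollar amounts below are
# hypothetical placeholders, not real data.

# Assumed convention: effectiveness is a 0-1 multiplier relative to the
# best available giving opportunity (1.0 = as good as the best).
effectiveness = {
    "org_a": 1.0,   # hypothetical: as effective as the best option
    "org_b": 0.5,   # hypothetical: half as effective
    "org_c": 0.2,   # hypothetical: one fifth as effective
}

# Annual money moved to each org, in dollars (made up).
donations = {
    "org_a": 40_000_000,
    "org_b": 50_000_000,
    "org_c": 10_000_000,
}

raw_total = sum(donations.values())
adjusted_total = sum(donations[org] * effectiveness[org] for org in donations)

print(f"Raw total:      ${raw_total:,.0f}/year")       # $100,000,000/year
print(f"Adjusted total: ${adjusted_total:,.0f}/year")  # $67,000,000/year
```

Under these made-up numbers, $100M/year of raw donations would only count as $67M/year toward something like Goal 1 above, which is the point of the adjustment.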
There is, of course, still the question of how good the interventions are and how good the intervention-deciding mechanisms are. However, measuring/estimating those is quite a bit more challenging and presents a distinct, largely orthogonal challenge to raising money. For instance, growing the movement and convincing people at large would be an ‘EA popularity goal’, measured in money, while producing new research to understand effectiveness would be more of an ‘EA research goal’. Two very different things.