So my best guess of what’s going on here is that Charity Entrepreneurship (CE) or past EAs with a good track record looked at very targeted policy changes specifically selected for their cost-effectiveness: increasing tobacco taxes in Mongolia, improving fish welfare policies in India, increasing foreign aid in Swiss cities, things like that.
So for instance, in the case of CE, my impression isn't that you become enamoured with one particular cause, but rather that you run many rounds of progressively more intense research, and, crucially, that you start out from a large pool of ideas which you select from. And so (my impression is that) you end up choosing to push for policies that are important, neglected and tractable all at once.
But in the case of criminal justice reform, I think that the neglectedness and importance very much come at the expense of tractability: being “harsh on crime” appeals to at least part of the electorate, felons are not a popular demographic to defend, many people have a strong intuition that softer policies lead to more crime, etc. Whereas pushing for, idk, tobacco taxation in low-income countries (LICs), or for salt iodization, seems much more straightforward.
So the first part of my answer is that I think that the EA policies you are thinking of might be the product of stronger selection effects. I would also agree with points 1. and 2. in your list, when talking about systemic change. I think this should account for most of the disagreement in perspectives, but let me know if not.
Especially if you think “2.” is true, since if so that affects some of CE’s upcoming decision-making.
Curious what specifically.
Some more specific and perhaps less important individual points:
For the Rikers policy change costs were $5m to $15m with a 7-50% chance of success.
Yeah, this comes from estimating what percentage of the effort to close Rikers Open Phil’s funding contributed, and then looking at the chance of success based on recent news:
For funding: I found three grants (1, 2, 3), accounting for ~$5M, and my impression is that Open Phil wasn’t literally the only funder, particularly since this is an ongoing, multi-year effort.
For probability: This was informed by reading news articles about Rikers. This is one particular article that I remember reading at the time about this. On the lower side, politicians keep saying that they want to close the prison, but they keep punting this into the future, and creating additional capacity to house prisoners (e.g., building new prisons) seems hard and prone to delays. On the optimistic side, there are specific commitments, specific dates, specific promises.
I think that you could get better probabilities by, e.g., pooling forecasts on Metaculus, rather than relying on me as a single forecaster. And looking back, I would now put the chance of closing Rikers by 2027 (the currently planned date) higher than 7-50%; maybe 20 to 70%.
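To make the implied cost-effectiveness concrete, here is a minimal back-of-the-envelope sketch (mine, not part of the original comment) that turns the quoted $5M-$15M and 7-50% figures into an expected cost per successful closure:

```python
# Back-of-the-envelope sketch: implied expected cost per "success" (Rikers
# actually closing), using the ranges quoted above. Illustrative only.
cost_low, cost_high = 5e6, 15e6   # attributed Open Phil funding, $5M to $15M
p_low, p_high = 0.07, 0.50        # original 7% to 50% chance of success

best_case = cost_low / p_high     # cheap and likely: ~$10M per expected closure
worst_case = cost_high / p_low    # expensive and unlikely: ~$214M per expected closure
print(f"Implied expected cost per closure: ${best_case/1e6:.0f}M to ${worst_case/1e6:.0f}M")
```

On the updated 20-70% range above, the same arithmetic gives roughly $7M to $75M per expected closure.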
This is 4 orders of magnitude above the CE estimates for new charities and 5 orders of magnitude above the track record of existing EA charities.
1 order of magnitude for weak selection effects, 2 orders of magnitude for choosing an unpopular/politicized cause, 1-3 orders of magnitude for America rather than a LIC, 1 order of magnitude for investing in fuzzy systemic change rather than specific policies like closing Rikers, …
See also Effectiveness is a Conjunction of Multipliers. You’d also have to add +1 order of magnitude for being a topic of very broad interest, and so on, but the point is that you can get to 5 orders of magnitude pretty quickly.
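As a rough illustration of how these penalties compound (the per-factor numbers are the guesses from the previous paragraph, not claims):

```python
# Illustrative sketch: the hypothesized per-factor penalties, in orders of
# magnitude (OOMs), add in log space, i.e. the multipliers themselves multiply.
penalties_ooms = {
    "weak selection effects": (1, 1),
    "unpopular/politicized cause": (2, 2),
    "America rather than a LIC": (1, 3),
    "fuzzy systemic change vs. a specific policy like closing Rikers": (1, 1),
}
low = sum(lo for lo, _ in penalties_ooms.values())
high = sum(hi for _, hi in penalties_ooms.values())
print(f"Combined penalty: 10^{low} to 10^{high} ({10**low:,}x to {10**high:,}x)")
# -> 10^5 to 10^7, so a 5 order-of-magnitude gap arrives pretty quickly.
```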
It seems plausible to me that a good campaign team can keep finding things as effective as the Rikers closure, bail reform, etc., without significant diminishing marginal returns.
This seems plausible, but empirically, going through OP grants, not many had as clear-cut a pathway to impact as Rikers. Some work in Los Angeles did. But a big fraction was closer to the systemic estimate than to the Rikers estimate.
Interesting. If this is correct, it suggests that impact-focused EAs working on well-targeted policy campaigns can be multiple orders of magnitude better than just giving to an existing think tank or policy organisation, which would suggest that maybe big funders and the EA community should do a lot more to help EAs set up and run small targeted policy campaign groups.
This seems right, but I would still expect new groups to be worse than past top EA policy projects (e.g., this ballot initiative) if the selection effects are weakened.
That is, going from “past EA people who have done this have been very effective” to “if we have more EA people who do this, they will be very effective” doesn’t follow, because the first group could only act, and has only acted, in cases where the policy initiatives seemed very promising ex ante.
@weeatquince and all: Do you know what the best synthesis (of research or aggregated subjective beliefs) we have is on the ‘costs of achieving policy change’, perhaps differentiated by area and by the economic magnitude of the policy?
My impression was that Nuno’s
“$2B to $20B, or 10x to 100x the amount that Open Philanthropy has already spent, would have a 1 to 10% chance of succeeding at that goal”
seemed plausible, but I suspect that if they had said a 1-3% chance or a 10-50% chance, I might have found those equally plausible. (At least without other benchmarks.)
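For comparison, here is the same back-of-the-envelope move applied to that systemic-change figure; again a sketch under the quoted ranges, not a real estimate:

```python
# Implied expected cost per success for the systemic goal: $2B-$20B of spending
# with a 1-10% chance of succeeding. Illustrative only.
cost_low, cost_high = 2e9, 20e9
p_low, p_high = 0.01, 0.10
print(f"Implied expected cost per success: "
      f"${cost_low / p_high / 1e9:.0f}B to ${cost_high / p_low / 1e9:,.0f}B")
# -> roughly $20B to $2,000B, which is why shifting the assumed probability to
#    1-3% or 10-50% moves the bottom line by an order of magnitude either way.
```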