An interesting exchange, although I feel the rebuttal somewhat misrepresents Gabriel’s argument regarding systemic change. A steelman version of his argument would factor in quantification bias, pointing out that, due to extreme uncertainty in expected-value estimation for some systemic-change interventions, something like AMF would usually come out easily on top.
I read him as saying that the EA community would not support e.g. the abolitionist movement were it around then, precisely because of the difficulties in EV calculations, and I agree with him on that.
(I also think that OpenPhil does very important work in that direction)
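To make that first point concrete, here is a toy Monte Carlo sketch; the numbers are invented purely for illustration and are not from the exchange. It shows how, once an evaluator penalizes uncertainty at all (pessimistic percentiles, variance discounts, robustness checks), a tightly measured AMF-like option tends to rank above a long-shot systemic bet even when their raw expected values are similar.

```python
# Toy Monte Carlo: a tight, well-measured intervention vs. a
# high-variance "systemic change" bet. All numbers are made up
# purely for illustration.
import random

random.seed(0)
N = 100_000

# AMF-like option: well-evidenced "good done per $1000 donated",
# narrow uncertainty around the estimate.
direct = [random.gauss(0.25, 0.05) for _ in range(N)]

# Systemic-change option: almost always nothing, with a tiny chance
# of a huge payoff, so the EV estimate is dominated by deep uncertainty.
systemic = [10_000.0 if random.random() < 3e-5 else 0.0 for _ in range(N)]

def mean(xs):
    return sum(xs) / len(xs)

def pct10(xs):
    # Pessimistic 10th percentile of the simulated outcomes.
    return sorted(xs)[len(xs) // 10]

print(f"direct:   mean={mean(direct):.3f}  10th pct={pct10(direct):.3f}")
print(f"systemic: mean={mean(systemic):.3f}  10th pct={pct10(systemic):.3f}")
# The systemic bet has a comparable (even slightly higher) mean here,
# but any uncertainty-penalizing rule ranks the tight option on top.
```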
I read him as saying that the EA community would not support e.g. the abolitionist movement were it around then, precisely because of the difficulties in EV calculations, and I agree with him on that.
Just as an aside, I’m not sure that’s obvious. John Stuart Mill was a leader in the abolition movement. He was arguably the Peter Singer of those times.
Turning to current issues, ending factory farming is also a cause that likely requires large-scale social change through advocacy, and lots of EAs work on that.
Just as an aside, I’m not sure that’s obvious. John Stuart Mill was a leader in the abolition movement. He was arguably the Peter Singer of those times.
And Bentham was ahead of the curve on:
Abolition of slavery
Calling for legal equality of the sexes
The first known argument for legalization of homosexuality in England
Animal rights
Abolishing the death penalty and corporal punishment (including of children)
Separation of church and state
Freedom of speech
precisely because of the difficulties in EV calculations
The extensive work on factory farming is certainly one counterexample, but the interest in artificial intelligence may be a more powerful one on this point.
Perhaps “systemic change bias” needs to be coined, or something to that effect, to be used in further debates.
Might be useful in elucidating why people criticizing EAs always mischaracterize us as not caring about systemic change or harder-to-quantify causes.
Those causes get criticized because of how hard they are to quantify. The relatively neglected thing is recognizing both strands and arguing for Goldilocks positions between ‘linear, clearly evidence-backed, non-systemic charity’ and ‘far too radical for most people interested in systemic change.’
Couldn’t you just counter that if EA had been around back then, having just started out trying to figure out how to do the most good, it would not have supported the abolitionist movement because of difficult EV calculations and because its resources were committed elsewhere? However, if the EA community had existed back then and had matured a bit, to the stage that something like OpenPhil existed as well (OpenPhil of course being an EA org, for those reading who don’t know), then it would very likely have supported cost-effective campaigns backing the abolitionist movement.
The EA community, like all entities, is in flux. I don’t like hearing “if it had existed back then it wouldn’t have supported the abolitionist movement, and therefore it has problems,” with the implicit suggestion that it is bad because it thinks in a naughty, quantification-biased way. That sounds like an unfair mischaracterization to me, especially given that you can cherry-pick what the EA community was like at a particular time (how much it knew) and how many resources it had, specifically so that it wouldn’t have supported the abolitionist movement, and then claim the reason is quantification bias.
What would be better is: “if EA existed back then as it existed in 2012/2050/20xy, with x resources, then it would not have supported the abolitionist movement.” Now the factors of time and resources may well be a much better explanation than quantification bias for why EA wouldn’t have supported the abolitionist movement.
Consider the EA community of 2050, which would have decades’ worth of accumulated knowledge on how to handle harder-to-quantify causes.
I suspect that if the EA community of 2050 had the resources of the YMCA or United Way and existed in the 18th century, it would have supported the hell out of the abolitionist movement.