I sort of bounced off this one, Richard. I'm not a professor of moral philosophy, so some of what I say below may seem obviously wrong/stupid/incorrect, but I think that were I a philosophy professor I would be able to shape it into a stronger objection than it might appear at first glance.
Now, when people complain that EA quantifies things (like cross-species suffering) that allegedly "can't be precisely quantified," what they're effectively doing is refusing to consider that thing at all.
I don't think this would pass an ideological Turing Test. I think what people who make this claim are often saying is that previous attempts to quantify the good precisely have ended up having morally bad consequences. Given this history, perhaps our takeaway shouldn't be "they weren't precise enough in their quantification" but rather "perhaps precise quantification isn't the right way to go about ethics".
Because the realistic alternative to EA-style quantitative analysis is vibes-based analysis: just blindly going with what's emotionally appealing at a gut level.
Again, I don't think this is true. Would you say that before the publication of Famine, Affluence, and Morality, all moral philosophy was just "vibes-based analysis"? I think, instead, that all moral reasoning is in some sense "vibes-based", and that EA's quantification is often a way of presenting arguments for the EA position.
To state it more clearly, what we care about is moral decision-making, not the quantification of moral decisions. Most decisions that have ever been made were made without quantification. What matters is the moral decisions we make, and the reasons we have for those decisions/values, not what quantitative value we place on them.
the question that properly guides our philanthropic deliberations is not "How can I be sure to do some good?" but rather, "How can I (permissibly) do the most (expected) good?"
I guess I'm starting to bounce off this because I now view it as a big moral commitment, one which I think goes beyond simple beneficentrism. Another view, for example, would be contractualism, where what "doing good" means is substantially different from what you describe here, but perhaps that's a deeper metaethical debate.
It's very conventional to think, "Prioritizing global health is epistemically safe; you really have to go out on a limb, and adopt some extreme views, in order to prioritize the other EA stuff." This conventional thought is false. The truth is the opposite. You need to have some really extreme (near-zero) credence levels in order to prevent ultra-high-impact prospects from swamping more ordinary forms of do-gooding.
I think this is confusing two forms of "extreme". In one sense, the default "animals have little to no moral worth" view is extreme for setting the moral value of animals so low as to be near zero (and confidently so at that). But I think the "extreme" in your first sentence refers to "extreme from the point of view of society".
Furthermore, even if we grant that quantifying expected value in quantitative models is the right way to do moral reasoning (as opposed to its sometimes being a useful tool), one doesn't have to accept that "even a 1% chance is enough": I could simply decline to find acceptable a tool that produces such dogmatism at 1%. You could counter that my default/status-quo morality is also dogmatic, which, sure. But that doesn't make me any more inclined to accept strong longtermism, and I've already read a fair bit about it (though I accept probably not as much as you).
While you're at it, take care to avoid the conventional dogmatism that regards ultra-high-impact as impossible.
One man's "conventional dogmatism" could be reframed as "the accurate observation that people with totalising philosophies promising ultra-high impact have a very bad track record of causing harm, and that those with similar philosophies ought to be viewed with suspicion".
Sorry if the above was a bit jumbled. It just seemed that this post was very unlike your recent Good Judgement with Numbers post, which I clicked with a lot more. In this one you seem, instead of rejecting the "All or Nothing" Assumption, to actually be going "all in" on quantitative reasoning. Perhaps it was the tone with which it was written, but it really didn't seem to engage with why people have an aversion to over-quantification of moral reasoning.
Thanks for the feedback! It's probably helpful to read this in conjunction with "Good Judgment with Numbers", because the latter post gives a fuller picture of my view, whereas this one is specifically focused on why a certain kind of blind dismissal of numbers is messed up.
(A general issue I often find here is that when I'm explaining why a very specific bad objection is bad, many EAs instead want to (mis)read me as suggesting that nothing remotely in the vicinity of the targeted position could possibly be justified, and then complain that my argument doesn't refute this (very different) "steelman" position that they have in mind. But I'm not arguing against the position that we should sometimes be concerned about over-quantification for practical reasons. How could I? I agree with it! I'm arguing against the specific position identified in the post, i.e. the view that different kinds of values can't (literally can't, like, in principle) be quantified.)
I think this is confusing two forms of "extreme".
I'm actually trying to suggest that my interlocutor has confused these two things. There's what's conventional vs. socially extreme, and there's what's epistemically extreme, and they aren't the same thing. That's my whole point in that paragraph. It isn't necessarily epistemically safe to do what's socially safe or conventional.