Richard Y Chappell
Academic philosopher, co-editor of utilitarianism.net, blogs at https://rychappell.substack.com/
Here’s how I picture the axiological anti-realist’s internal monologue:
“The point of liberal intuitions is to prevent one person from imposing their beliefs on others. I care about my axiological views, but, since I have these liberal intuitions, I do not feel compelled to impose my views on others. There’s no tension here.”

By contrast, here’s how I picture the axiological realist:
“I have these liberal intuitions that make me uncomfortable with the thought of imposing my views on others. At the same time, I know what the objectively correct axiology is, so, if I, consequentialist-style, do things that benefit others according to the objectively correct axiology, then there’s a sense in which that will be better for them than if I didn’t do it. Perhaps this justifies going against the common-sense principles of liberalism, if I’m truly certain enough and am not self-deceiving here? So, I’m kind of torn...”
Right, this tendentious contrast is just what I was objecting to. I could just as easily spin the opposite picture:
(1) A possible anti-realist monologue: “I find myself with some liberal intuitions; I also have various axiological views. Upon reflection, I find that I care more about preventing suffering (etc.) than I do about abstract tolerance or respect for autonomy, and since I’m an anti-realist I don’t feel compelled to abide by norms constraining my pursuit of what I most care about.”
(2) A possible realist monologue: “The point of liberal norms is to prevent one person from imposing their beliefs on others. I’m confident about what the best outcomes would be, considered in abstraction from human choice and agency, but since it would be objectively wrong and objectionable to pursue these ends via oppressive or otherwise illicit means, I’ll restrict myself to permissible means of promoting the good. There’s no tension here.”
The crucial question is just what practical norms one accepts (liberal or otherwise). Proposing correlations between other views and bad practical norms strikes me as an unhelpful—and rather bias-prone—distraction.
Thanks for writing this! I find it really striking how academic critics of longtermism (both Thorstad and Schwitzgebel spring to mind here) don’t adequately consider model uncertainty. It’s something I also tried to flag in my old post on ‘X-risk agnosticism’.
Tarsney’s epistemic challenge paper is so much better, precisely because he gets into higher-order uncertainty (over possible values of the crucial parameter “r”: the persisting risk of extinction in the far future, despite our best efforts).
In general (whether realist or anti-realist), there is “no clear link” between axiological certainty and oppressive behavior, precisely because there are further practical norms (e.g. respect for rights, whether instrumentally or non-instrumentally grounded) that mediate between evaluation and action.
You suggest that it “seems only intuitive/natural” that an anti-realist should avoid being “too politically certain that what they believe is what everyone ought to believe.” I’m glad to hear that you’re naturally drawn to liberal tolerance. But many human beings evidently aren’t! It’s a notorious problem for anti-realism to explain how it doesn’t just end up rubber-stamping any values whatsoever, even authoritarian ones.
Moral realists can hold that liberal tolerance is objectively required as a practical norm, which seems more robustly constraining than just holding it as a personal preference. So the suggestion that “moral realism” is “problematic” here strikes me as completely confused. You’re implicitly comparing a realist authoritarian with an anti-realist liberal, but all the work is being done by the authoritarian/liberal contrast, not the realist/anti-realist one. If you hold fixed people’s first-order views, not just about axiology but also about practical norms, then their metaethics makes no further difference.
That said, I very much agree about the “weirdness” of turning to philosophical uncertainty as a solution. Surely philosophical progress (done right) is a good thing, not a moral threat. But I think that just reinforces my alternative response that empirical uncertainty vs overconfidence is the real issue here. (Either that, or—in some conceivable cases, like an authoritarian AI—a lack of sufficient respect for the value of others’ autonomy. But the problem with someone who wrongly disregards others’ autonomy is not that they ought to be “morally uncertain”, but that they ought to positively recognize autonomy as a value. That is, they problematically lack sufficient confidence in the correct values. It’s of course unsurprising that having bad moral views would be problematic!)
We just wrote a textbook on the topic together (the print edition of utilitarianism.net)! In the preface, we briefly relate our different attitudes here: basically, I’m much more confident in the consequentialism part, but sympathetic to various departures from utilitarian (and esp. hedonistic) value theory, whereas Will gives more weight to non-consequentialist alternatives (more for reasons of peer disagreement than any intrinsic credibility, it seems), but is more confident that classical hedonistic utilitarianism is the best form of consequentialism.
I agree it’d be fun for us to explore the disagreement further sometime!
This is really sad news. I hope everyone working there has alternative employment opportunities (far from a given in academia!).
I was shocked to hear that the philosophy department imposed a freeze on fundraising in 2020. That sounds extremely unusual, and I hope we eventually learn more about the reasons behind this extraordinary institutional hostility. (Did the university shoot itself in the financial foot for reasons of “academic politics”?)
A minor note on the forward-looking advice: “short-term renewable contracts” can have their place, especially for trying out untested junior researchers. But you should be aware that they also filter out mid-career academics (especially those with family obligations) who could potentially bring a lot to a research institution, but would never leave a tenured position for a short-term one. Not everyone who is unwilling to gamble away their academic career is thereby a “careerist” in the derogatory sense.
I don’t necessarily disagree with any of that, but the fact that you asserted it implicates that you think it has some kind of practical relevance, which is where I might want to disagree.
I think it’s fundamentally dishonest (a kind of naive instrumentalism in its own right) to try to discourage people from having true beliefs because of faint fears that these beliefs might correlate with bad behavior.
I also think it’s bad for people to engage in “moral profiling” (cf. racial profiling), spreading suspicion about utilitarians in general based on very speculative fears of this sort.
I just think it’s very obvious that if you’re worried about naive instrumentalism, the (morally and intellectually) correct response is to warn against naive instrumentalism, not other (intrinsically innocuous) views that you believe to be correlated with the mistake.
[See also: The Dangers of a Little Knowledge, esp. the “Should we lie?” section.]
fwiw, I wouldn’t generally expect “high confidence in utilitarianism” per se to be any cause for concern. (I have high confidence in something close to utilitarianism—in particular, I have near-zero credence in deontology—but I can’t imagine that anyone who really knows how I think about ethics would find this the least bit practically concerning.)
Note that Will does say a bit in the interview about why he doesn’t view SBF’s utilitarian beliefs as a major explanatory factor here (the fraud was so obviously negative EV, and the big lesson he took from the Soltes book on white-collar crime was that such crime tends to be more the result of negligence and self-deception than deliberate, explicit planning to that end).
I basically agree with the lessons Will suggests in the interview, about the importance of better “governance” and institutional guard-rails to disincentivize bad behavior, along with warning against both “EA exceptionalism” and SBF-style empirical overconfidence (in his ability to navigate risk, secure lasting business success without professional accounting support or governance, etc.).
I think it would be a big mistake to conflate that sort of “overconfidence in general” with specifically moral confidence (e.g. in the idea that we should fundamentally always prefer better outcomes over worse ones). It’s just very obvious that you can have the latter without the former, and it’s the former that’s the real problem here.
[See also: ‘The Abusability Objection’ at utilitarianism.net]
Yes, I agree it seems important to have marketers and PR people to craft persuasive messaging for mass audiences. That’s not what I’m trying to do here, and nor do I think it would make any sense for me to shift into PR—it wouldn’t be a good personal fit. My target audience is academics and “academic-adjacent” audiences, and as a philosopher my goal is to make clear what’s philosophically justified, not to manipulate anyone through non-rational means. I think this is an important role, for reasons explained in some of the footnotes to my posts there. But I also agree it’s not the only important role, and it would plausibly be good for EA to additionally have more mass-market appeal. It takes all sorts.
fyi, I weakly downvoted this because (i) you seem like you’re trying to pick a fight and I don’t think it’s productive; there are familiar social ratcheting effects that incentivize exaggerated rhetoric on race and gender online, and I don’t think we should encourage that. (There was nothing in my comment that invited this response.) (ii) I think you’re misrepresenting Trace. (iii) The “expand your moral circle” comment implies, falsely, that the only reason one could have for tolerating someone with bad views is that you don’t care about those harmed by their bad views.
I did not mean the reference to Trace to function as a conversation opener. (Quite the opposite!) I’ve now edited my original comment to clarify the relevant portion of the tweet. But if anyone wants to disagree with Trace, maybe start a new thread for that rather than replying to me. Thanks!
I’d just like to clarify that my blogroll should not be taken as a list of “worthy figure[s] who [are] friend[s] of EA”! They’re just blogs I find often interesting and worth reading. No broader moral endorsement implied!
fwiw, I found TracingWoodgrains’ thoughts here fairly compelling.
ETA, specifically:
I have little patience with polite society, its inconsistencies in which views are and are not acceptable, and its games of tug-of-war with the Overton Window. My own standards are strict and idiosyncratic. If I held everyone to them, I’d live in a lonely world, one that would exclude many my own circles approve of. And if you wonder whether I approve of something, I’m always happy to chat.
Thanks, that’s very helpful! I do want my points to be forceful, but I take your point that overdoing it can be counterproductive. I’ve now slightly moderated that sentence to instead read, “Wenar is here promoting a general approach to practical reasoning that is systematically biased (and predictably harmful as a result): a plain force for ill in the world.”
Right, that’s why I also take care to emphasize that responsible criticism is (pretty much) always possible, and describe in some detail how one can safely criticize “Good Things” without being susceptible to charges of moral misdirection.
Thanks, that’s helpful feedback. I guess I was too focused on making it concise, rather than easily understood.
This is an important point. People often confuse harm/benefit asymmetries with doing/allowing asymmetries. Wenar’s criticism seems to rest on the latter, not the former. Note that if all indirect harms are counted within the constraint against causing harm, almost all actions would be prohibited. (And on any plausible restriction, e.g. to “direct harms”, it would no longer be true that charities do harm. Wenar’s concerns involve very indirect effects. I think it’s very unlikely that there’s any consistent and plausible way to count these as having disproportionate moral weight. To avoid paralysis, such unintended indirect effects just need to be weighed in aggregate, balancing harms done against harms prevented.)
I found it a bit hard to discern what constructive points he was trying to make amidst all the snark. But the following seemed like a key passage in the overall argument:
Making responsible choices, I came to realize, means accepting well-known risks of harm. Which absolutely does not mean that “aid doesn’t work.” There are many good people in aid working hard on the ground, often making tough calls as they weigh benefits and costs. Giving money to aid can be admirable too—doctors, after all, still prescribe drugs with known side effects. Yet what no one in aid should say, I came to think, is that all they’re doing is improving poor people’s lives.
… This expert tried to persuade Ord that aid was much more complex than “pills improve lives.” Over dinner I pressed Ord on these points—in fact I harangued him, out of frustration and from the shame I felt at my younger self. Early on in the conversation, he developed what I’ve come to think of as “the EA glaze.”… Ord, it seemed, wanted to be the hero—the hero by being smart—just as I had. Behind his glazed eyes, the hero is thinking, “They’re trying to stop me.”
Putting aside the implicit status games and weird psychological projection, I don’t understand what practical point Wenar is trying to make here. If the aid is indeed net good, as he seems to grant, then “pills improve lives” seems like the most important insight not to lose sight of. And if someone starts “haranguing” you for affirming this important insight, it does seem like it could come across as trying to prevent that net good from happening. (I don’t see any reason to personalize the concern, as about “stopping me”—that just seems blatantly uncharitable.)
It sounds like Wenar just wants more public affirmations of causal complexity to precede any claim about our potential to do good? But it surely depends on context whether that’s a good idea. Too much detail, especially extraneous detail that doesn’t affect the bottom line recommendation, could easily prove distracting and cause people (like, seemingly, Wenar himself) to lose sight of the bottom line of what matters most here.
So that section just seemed kind of silly. There was a more reasonable point mixed in with the unreasonable in the next section:
GiveWell still doesn’t factor in many well-known negative effects of aid… Today GiveWell’s front page advertises only the number of lives it thinks it has saved. A more honest front page would also display the number of deaths it believes it has caused.
The initial complaint here seems fine: presumably GiveWell could (marginally) improve their cost-effectiveness models by trying to incorporate various risks or costs that it sounds like they currently don’t consider. Mind you, if nobody else has any better estimates, then complaining that the best-grounded estimates in the world aren’t yet perfect seems a bit precious. Then the closing suggestion that they prominently highlight expected deaths (from indirect causes like bandits killing people while trying to steal charity money) is just dopey. Ordinary readers would surely misread that as suggesting that the interventions were somehow directly killing people. Obviously the better-justified display is the net effect in lives saved. But we’re not given any reason to expect that GiveWell’s current estimates here are far off.
Q: Does Wenar endorse inaction?
Wenar’s “most important [point] to make to EAs” (skipping over his weird projection about egotism) is that “If we decide to intervene in poor people’s lives, we should do so responsibly—ideally by shifting our power to them and being accountable for our actions.”
The overwhelming thrust of Wenar’s article—from the opening jab about asking EAs “how many people they’ve killed” to the conditional I bolded above—seems to be to frame charitable giving as a morally risky endeavor, in contrast to the implicit safety of just doing nothing and letting people die.
I think that’s a terrible frame. It’s philosophically mistaken: letting people die from preventable causes is not a morally safe or innocent alternative (as is precisely the central lesson of Singer’s famous article). And it seems practically dangerous to publicly promote this bad moral frame, as he is doing here. The most predictable consequence is to discourage people from doing “riskily good” things like giving to charity. Since he seems to grant that aid is overall good and admirable, it seems like by his own lights he should regard his own article as harmful. It’s weird.
(If he just wants to advocate for more GiveDirectly-style anti-paternalistic interventions that “shift our power to them”, that seems fine but obviously doesn’t justify the other 95% of the article.)
There was meant to be an “all else equal” clause in there (as usually goes without saying in these sorts of thought experiments) -- otherwise, as you say, the verdict wouldn’t necessarily indicate underlying non-utilitarian concerns at all.
Perhaps easiest to imagine if you modify the thought experiment so that your psychology (memories, “moral muscles”, etc.) will be “reset” after making the decision. I’m talking about those who would insist that you still ought to save the one over the two even then—no matter how the purely utilitarian considerations play out.
It’s fine to offer recommendations within suboptimal cause areas for ineffective donors. But I’m talking about worldview diversification for the purpose of allocating one’s own (or OpenPhil’s own) resources genuinely wisely, given one’s (or: OP’s) warranted uncertainty.
It’s always better for a view to be justified than to be unjustified? (Makes it more likely to be true, more likely to be what you would accept on further / idealized reflection, etc.)
The vast majority of worldviews do not warrant our assent. Worldview diversification is a way of dealing with the sense that there is more than one that is plausibly well-justified, and warrants our taking it “into account” in our prioritization decisions. But there should not be any temptation to extend this to every possible worldview. (At the limit: some are outright bad or evil. More moderately: others simply have very little going for them, and would not be worth the opportunity costs.)
I was replying to your sentence, “I’d guess most proponents of GHD would find (1) and (2) particularly bad.”
Realistically, it is almost never in an academic’s professional interest to write a reply paper (unless they are completely starved of original ideas). Referees are fickle, and if the reply isn’t accepted at the original journal, very few other journals will even consider it, making it a bad time investment. (A real “right of reply”—where the default expectation switches from ‘rejection’ to ‘acceptance’—might change the incentives here.)
Example: early in my career, I wrote a reply to an article that was published in Ethics. The referees agreed with my criticisms, and rejected my reply on the grounds that this was all obvious and the original paper never should have been published. I learned my lesson and now just post replies to my blog since that’s much less time-intensive (and probably gets more readers anyway).