What do you mean by "maximization"? I think it's important to distinguish between:
(1) Hegemonic maximization: the (humanly infeasible) idea that every decision in your life should aim to do the most impartial good possible.
(2) Maximizing within specific decision contexts: insofar as you're trying to allocate your charity budget (or altruistic efforts more generally), you should try to get the most bang for your buck.
As I understand it, EA aims to be maximizing in the second sense only. (Hence the norm around donating 10%, not some incredibly demanding standard.)
On the broader themes, a lot of what you're pointing to is potential conflicts between ethics and self-interest, and I think it's pretty messed up to use the language of psychological "health" to justify a wanton disregard for ethics. Maybe it's partly a cultural clash, and when you say things like "All perspectives are valid," you really mean them in a non-literal sense?
I'd like to see more basic public philosophy arguing for effective altruism and against its critics. (I obviously do this a bunch, and am puzzled that there isn't more of it, particularly from philosophers who, unlike me, are actually employed by EA orgs!)
One way that EAIF could help with this is by reaching out to promising candidates (well-respected philosophers who seem broadly sympathetic to EA principles) to see whether they could productively use a course buyout to provide time for EA-related public philosophy. (This could of course include constructively criticizing EA, or suggesting ways to improve, in addition to what I tend to see as the higher priority: drawing attention to apt EA criticisms of ordinary moral thought and behavior, and ways that everyone else could clearly improve by taking these lessons on board.)
A specific example that springs to mind is Richard Pettigrew. He independently wrote an excellent, measured criticism of Leif Wenar's nonsense, and also reviewed the Crary et al volume in a top academic journal (Mind, iirc). He's a very highly-regarded philosopher, and I'd love to see him engage more with EA ideas. Maybe a course buyout from EAIF could make that happen? Seems worth exploring, in any case.
QB: How Much do Future Generations Matter?
Sounds like a good move! In my experience (both as an author and a reader), Substack is very simple and convenient, and the network effects (e.g. obtaining new readers via Substack's "recommendations" feature) are much larger than I would have predicted in advance.
My claim is not "too strongly stated": it accurately states my view, which you haven't even shown to be incorrect (let alone "unfair" or not "defensible", both significantly higher bars to establish than merely being incorrect!)
It's always easier to make weaker claims, but that raises the risk of failing to make an important true claim that was worth making. Cf. epistemic cheems mindset.
I'm happy for people to argue that there are even better options than GW out there. (I'd agree!) But that's very emphatically not what Wenar was doing in that article.
Maybe I spoke too soon: it "seems unfair" to characterize Wenar's WIRED article as "discouraging life-saving aid"? (A comment that is immediately met with two agree votes!) The pathology lives on.
Thanks for the link. (I'd much rather people read that than Wenar's confused thoughts.)
Here's the bit I take to represent the "core issue":
If everyone thinks in terms of something like "approximate shares of moral credit", then this can help in coordinating to avoid situations where a lot of people work on a project because it seems worth it on marginal impact, but it would have been better if they'd all done something different.
Can you point to textual evidence that Wenar is actually gesturing at anything remotely in this vicinity? The alternative interpretation (which I think is better supported by the actual text) is that he's (i) conceptually confused about moral credit in a way that is deeply unreasonable, (ii) thinking about how to discredit EA, not how to optimize coordination, and (iii) simply happened to say something that vaguely reminds you of your own, much more reasonable, take.
If I'm right about (i)-(iii), then I don't think it's accurate to characterize him as "in some reasonable way gesturing at the core issue."
Did you read my linked article on moral misdirection? Disavowing full-blown aid skepticism is compatible with discouraging life-saving aid, in the same way that someone who disavows xenophobia, but then spends all their time writing sensationalist screeds about immigrant crime and other "harms" caused by immigrants, is very obviously discouraging immigration, whatever else they might have said.
ETA: I just re-read the WIRED article. He's clearly discouraging people from donating to GiveWell's recommendations. This will predictably result in more people dying. I don't see how you can deny this. Do you really think that general audiences reading his WIRED article will be no less likely to donate to effective charities as a result?
Yeah, I don't particularly mind this letter (though I see a lot more value in the critiques from Nielsen, NunoSempere, and Benjamin Ross Hoffman). I'm largely reacting to your positive annotated comments about the WIRED piece.
That said, I really don't think Wenar is (even close to) "substantively correct" on his "share of the total" argument. The context is debating how much good EA-inspired donations have done. He seems to think the answer should be discounted by all the other (non-EA?) people involved in the causal chain, or that maybe only the final step should count (!?). That's silly. The relevant question is counterfactual. When coordinating with others, you might want to assess a collective counterfactual rather than an individual counterfactual, to avoid double-counting (I take it that something along these lines is your intended steelman?); but that seems pretty distant from Wenar's confused reasoning about the impact of philanthropic donations.
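To make the contrast concrete, here's a toy version of a Parfit-style rescue case (the numbers are mine, purely for illustration): suppose a rescue mission needs four people and saves 100 lives.

\[ \text{share-of-the-total credit per rescuer: } 100/4 = 25 \text{ lives} \]
\[ \text{counterfactual impact, if the mission fails without you: } 100 - 0 = 100 \text{ lives} \]
\[ \text{counterfactual impact, if a substitute would have taken your place: } 100 - 100 = 0 \text{ lives} \]

The "share" answer tracks neither counterfactual answer, which is why discounting a donation's impact by everyone else in the causal chain answers the wrong question.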
Ok, I guess it's worth thinking about different audiences here. Something that's largely tendentious nonsense but includes something of a fresh (for you) perspective could be overall epistemically beneficial for you (since you don't risk getting sucked in by the nonsense, and might have new thoughts inspired by the "freshness"), while being extremely damaging to a general audience who take it at face value (and won't think of the relevant "steelman" improvements), and who have no exposure to, or understanding of, the "other side".
I saw a bunch of prominent academic philosophers sharing the WIRED article with a strong vibe of "This shows how we were right to dismiss EA all along!" I can only imagine what a warped impression the typical magazine reader would have gotten from it. The anti-GiveWell stuff, especially, struck me as incredibly reckless and irresponsible for an academic to write for a general audience (for the reasons I set out in my critique). So, at least with regard to the WIRED article, I'd encourage you to resist any inference from "well I found it pretty helpful" to "it wasn't awful." Smart people can have helpful thoughts sparked by awful, badly-reasoned texts!
I'm really surprised you're so positive towards his "share of the total" assumptions (like, he seems completely unaware of Parfit's refutation, and is pushing the first mistake in a very naive way, not anything like the "for purposes of coordination" steelman that you seem to have in mind). And I'm especially baffled that you had a positive view of his "dearest test". This was at the heart of my critique of his WIRED article:
Emphasizing minor, outweighed costs of good things (e.g. vaccines) is a classic form that [moral misdirection] can take... People are very prone to status-quo bias, and averse to salient harms. If you go out of your way to make harms from action extra-salient, while ignoring (far greater) harms from inaction, this will very predictably lead to worse decisions... Note that his "dearest test" does not involve vividly imagining your dearest ones suffering harm as a result of your inaction; only action. Wenar is here promoting a general approach to practical reasoning that is systematically biased (and predictably harmful as a result): a plain force for ill in the world.
Can you explain what advantage Wenar's biased test has over the more universal imaginative exercises recommended by R.M. Hare and others?
[P.S. I agree that the piece as a whole probably shouldn't have negative karma, but I wouldn't want it to have high karma either; it doesn't strike me as worth positively recommending.]
I'll add: something I appreciated about your (Leif's) letter is the section setting out your views on "good judgment". I agree that that's an important topic, and I think it's helpful for people to set out their views on how to develop it.
In case you're not aware of it, I recently wrote a second post, "Good Judgment with Numbers", critiquing another aspect of your WIRED article: this time, what I took to be an excessively dismissive attitude towards quantitative tools in your writing. (I agree, of course, that people should not blindly follow EV calculations.)
As before, I'd welcome substantive engagement with this critique, if you have any further thoughts on the topic.
Hi Leif, I appreciate your sharing this here, and hope that it reflects a willingness to engage in two-way communication (i.e., considering and engaging with our criticisms, as we have considered and engaged with yours). As I replied on Twitter:
Something I found puzzling about the WIRED article was that it combined extreme omission bias (urging readers to vividly imagine possible harms from action, but not more likely harms from inaction) with an apparent complete lack of concern for the moral risks posed by your act of publicly discouraging life-saving philanthropy. For example, it's likely that more children will die of malaria as a result of anti-EA advocacy (if it persuades some readers), and I don't understand why you aren't more concerned about that potential harm.
It's unfortunate that you dismiss criticism as "extreme" rather than engaging with the serious philosophical points that were made. (Again, I'd especially highlight the status quo bias implicit in your "tests" that imagine harms from action while ignoring harms from inaction.)
The point of my anti-vax analogy is not name-calling, but to make clear how it can be irresponsible to discourage ex ante helpful actions by highlighting rare risks of harmful unintended side-effects. Your argument seemed to share this problematic structure.
These are substantive philosophical criticisms that are worth engaging with, not just dismissing as "extreme".
This is a key point, and EJT helpfully shared on Twitter an excerpt from Reasons and Persons in which Parfit clearly explains the fallacy behind the "share of the total" view that Wenar seems to be uncritically assuming here. (ETA: this is the first of Parfit's famous "Five mistakes in moral mathematics"; one of the most important parts of arguably the most important work of 20th century moral philosophy.)
This is foundational stuff for the philosophical tradition upon which EA draws, and casts an ironic light on Wenar's later criticism: "The crucial-but-absent Socratic meta-question is, 'Do I know enough about what I'm talking about to make recommendations that will be high stakes for other people's lives?'"
Yeah, it's a good question. I'd like to see an in-depth investigation of possible ripple effects from GHD, since I don't think I'm in an especially good position to evaluate that. I'm basically just working from a very broad and vague intuition that humans are the ultimate resource, and GHD preserves and improves that resource in an especially clear and direct way.
Besides economic growth, I would guess that helping to sustain the population is a distinctive all-purpose instrumental value here, one that's hard to achieve by other means.
I think it's important to exercise judgment here. Many (e.g. political) communities reflexively dismiss criticism, in ways that are epistemically irresponsible and lead them into pathology. It's important to be more open-minded than that, and to maintain a general stance of openness to the possibility that one is wrong.
But there's also a very real risk (sometimes realized, IMO, on this forum) of people going too far in the opposite direction and reflexively accepting criticism as apt or reasonable when it plainly isn't. (This can take the form of downvoting or pushback against those of us who explain why a criticism is bad/unreasonable.) Sometimes people enact a sort of performative open-mindedness which calls on them to welcome anti-EA criticism and reject criticism of that criticism, more or less independently of the actual content or its merits. I find that very annoying.
(An example: when I first shared my "Why Not Effective Altruism?" draft, the feedback here seemed extremely negative and discouraging (some even accused me of bad faith!) because people didn't like that I was criticizing the critics of EA. Now that it's published, many seem to appreciate the paper and agree that it's helpful. shrug.)
My sense is that this problem isn't as bad now as in early 2023, when EA was going through a ridiculous self-flagellation phase.
Yeah, that's more in line with what I would expect. (Except the first sentence may be a bit hasty. Many first-world couples delay parenting until their 30s. If a child dies, they may not be able to have another, especially since a significant period of grieving may be necessary before they would even be willing to try.)
Why would being under a bednet reduce fertility? Two things that could make sense:
(1) The authors' hypothesis of a mere timing shift: fertility temporarily increases as a result of better health, followed by a (presumably similarly temporary) compensatory reduction in the immediately subsequent years, perhaps from the new parents stabilizing on their preferred family size. As noted, this hypothesis does not imply reduced total fertility.
(2) If some families stabilize on their preferred family size by (eventually) having an extra baby in the event that a previous one dies tragically early, then fertility (total births) could be expected to drop slightly as a result of life-saving interventions; but the drop could not exceed the number of lives saved, and so could not reduce total population (see the toy calculation below).
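Here's the arithmetic behind that claim, in toy form (the variables are mine, purely for illustration). Let S be the number of children saved, and k the number of compensatory births forgone; since a family only forgoes a "replacement" birth if a death was averted, k is bounded by S:

\[ 0 \le k \le S \quad\Rightarrow\quad \Delta(\text{total population}) = S - k \ge 0 \]

So on this mechanism the population effect can shrink toward zero, but it can't turn negative.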
In the absence of a plausible explanation that should lead us to view the outcome in question as especially likely, randomly positing a systematic negative population effect seems unreasonable to me. Anything is possible, of course. But selectively raising unsupported possibilities to salience just to challenge others to rule them out is a bad way to approach longtermist analysis, in my view. (Basically, the slight risk of negative fertility effects is outweighed by the expected gain in population, but common habits of thought overweight salient "risks" in a way that makes this dialectical method especially distorting.) See also: It's Not Wise to be Clueless.
I mean, it's undeniable that the best thing is best. It's not like there's some (coherent) alternative view that denies this. So I take it the real question is how much pressure one should feel towards doing the impartial best (at cost of significant self-sacrifice); whether the maximum should be viewed as the baseline for minimal acceptability, with anything short of it constituting failure, or whether we rather aim to normalize something more modest and simply celebrate further good beyond that point as an extra bonus.
I can see pathologies in both directions here. I don't think it makes sense to treat perfection as the baseline, such that any realistic outcome automatically qualifies as failure. For anyone to think that way would seem quite confused. (Which is not to deny that it can happen.) But it would also seem a bit pathological to refuse to celebrate moral saints? Like, obviously there is something very impressive about moral heroism and extreme altruism that goes beyond what I personally would be willing to sacrifice for others? I think the crucial thing is just to frame it positively rather than negatively, and not to get confused about where the baseline or zero-point properly lies.