Richard Y Chappell
Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog. 10% Pledge #54 with GivingWhatWeCan.org.
rich people in general, who by and large I think make absolutely terrible decisions about charity
I think this follows from a more general fact about people. If anything, I would guess that there's a positive correlation between wealth and EA values: that a higher (though still depressingly low) proportion of wealthy people donate to effective causes than is true of the general population? Would be interesting to see actual data, though.
Questioning Beneficence: Four Philosophers on Effective Altruism and Doing Good
I'm happy for folks to read the article and judge for themselves. The author briefly references some reasonable ideas in the course of building up a fundamentally unreasonable thesis: that "The problem [with effective altruism] is that we've stretched optimization beyond its optimal limits," and that sometimes donating to the local homeless over EA charities will better serve "the real value you hold dear [that is, helping people]."
They most clearly exhibit the fallacy I warn against ("some tradeoffs are unclear, therefore you might as well be an ineffective altruist") in this passage criticizing attempted optimization:
In your case, you're trying to optimize how much you help others, and you believe that means focusing on the neediest. But "neediest" according to what definition of needy? You could assume that financial need is the only type that counts, so you should focus first on lifting everyone out of extreme poverty, and only then help people in less dire straits. But are you sure that only the brute poverty level matters?
… if you want to optimize, you need to be able to run an apples-to-apples comparison – to calculate how much good different things do in a single currency, so you can pick the best option. But because helping people isn't reducible to one thing – it's lots of incommensurable things, and how to rank them depends on each person's subjective philosophical assumptions – trying to optimize in this domain will mean you have to artificially simplify the problem. You have to pretend there's no such thing as oranges, only apples.
I also think their discussion of integrity is fundamentally confused:
It sounds like that's what you're feeling when you pass a person experiencing homelessness and ignore them. Ignoring them makes you feel bad because it alienates you from the part of you that is moved by this person's suffering – that sees the orange but is being told there are only apples. That core part of you is no less valuable than the optimizing part, which you liken to your "brain." It's not dumber or more irrational. It's the part that cares deeply about helping people, and without it, the optimizing part would have nothing to optimize!
It's not apples and oranges. It's just helping people you can see vs helping people who are out of sight, and so less emotionally engaging. Those shouldn't be different values – as the author themselves says at the start, there's just the one value of helping people, and different strategies for how to achieve that. What they don't acknowledge is that the strategy of prioritizing more salient / emotionally-engaging people is less effective at helping, even if it's more effective at indulging your emotional needs. Calling the emotional bias "integrity" is not philosophically helpful or illuminating. It's muddled thinking, running cover for blatant bias.
Agreed!
Fair question! I don't know the answer. But I'd be surprised if the two came apart too sharply in this case (even though, as you rightly note, they can drastically diverge in principle). My sense is that GiveWell aims to recommend relatively "safe" bets, rather than a "hits-based" EV-maximizing approach. (I think it's important to be transparent when recommending the latter, just because I take it many people are not in fact so comfortable with pursuing that strategy, even if I think they ought to be.)
I actually think that's fine. You can always look it up if you're interested in the details, but for the casual consumer of charity-evaluation information, the bottom-line best estimate is the info that's decision-relevant, not the uncertainty range. I think it's completely fine for people to share core info like this without simultaneously sharing all the fine print. Just like it's OK for public health experts to promote simple pro-vax messaging that doesn't include all the fine print.
(See moral misdirection for my principled account of when it is or isn't OK to leave out information.)
Absent these ranges, I see these claims repeated all over the place as if $5000 really is an objectively correct answer and not a rough estimate.
Here you just seem to be repeating the mistake of assuming that presenting a best estimate without also presenting the uncertainty range amounts to presenting it as certain. I disagree with that interpretive norm. There is no "as if" being presented. That's on you.
Not sure why this got tagged as "Community". It's not about the community, but about applying EA principles, substantive issues in applied decision theory, and associated mistakes in the reasoning of many critics of effective altruism. (Maybe an overzealous bot didn't like the joking footnote reference to Kamala Harris's "coconut tree" line, and it got mischaracterized as political?)
Edit: fixed now, thanks mods!
Good Judgment with Numbers
My central objection to Thorstad's work on this is the failure to properly account for uncertainty. Attempting to exclusively model a most-plausible scenario, and draw dismissive conclusions about longtermist interventions based solely on that, fails to reflect best practices for reasoning under conditions of uncertainty. (I've also raised this criticism against Schwitzgebel's negligibility argument.) You need to consider the full range of possible models / scenarios!
It's essentially fallacious to think that "plausibly incorrect modeling assumptions" undermine expected value reasoning. High expected value can still result from regions of probability space that are epistemically unlikely (or reflect "plausibly incorrect" conditions or assumptions). If there's even a 1% chance that the relevant assumptions hold, just discount the output value accordingly. Astronomical stakes are not going to be undermined by lopping off the last two zeros.
Tarsney's Epistemic Challenge to Longtermism is so much better at this. As he aptly notes, as long as you're on board with orthodox decision theory (and so don't disproportionately discount or neglect low-probability possibilities), and not completely dogmatic in refusing to give any credence at all to the longtermist-friendly assumptions (robust existential security after a time of perils, etc.), reasonable epistemic worries ultimately aren't capable of undermining the expected value argument for longtermism.
(These details can still be helpful for getting better-refined EV estimates, of course. But that's very different from presenting them as an objection to the whole endeavor.)
Utilitarianism.net Updates
Just to expand on the above, I've written a new blog post, "It's OK to Read Anyone", that explains (i) why I won't personally engage in intellectual boycotts [obviously the situation is different for organizations, and I'm happy for them to make their own decisions!], and (ii) what it is in Hanania's substack writing that I personally find valuable and worth recommending to other intellectuals.
FYI, the recording is now available, and (upon reviewing it) I've expanded upon my other comments in a new post at Good Thoughts. (I'd be curious to hear from anyone who has a strikingly different impression of the debate than I had.)
Right, you'd also have to oppose healthcare expansion, vaccines (against lethal illnesses), pandemic mitigation efforts, etc. I guess if you really believed it, you would take the results (more early deaths) to have positive expected value. It's a deeply misanthropic thesis. So it's probably worth getting clearer on why it isn't ultimately credible, despite initial appearances.
If you can stipulate (e.g. in a thought experiment) that the consequences of coercion are overall for the best, then I favor it in that case. I just have a very strong practical presumption (see: principled proceduralism) that liberal options tend to have higher expected value in real life, once all our uncertainty (and fallibility) is fully taken into account.
Maybe also worth noting (per my other comment in this thread) that I'm optimistic about the long-term value of humanity and human innovation. So, putting autonomy considerations aside, if I could either encourage people to have more kids or fewer, I think more is better (despite the short-term costs to animal welfare).
(1) If building human capacity has positive long-term ripple effects (e.g. on economic growth), these could be expected to swamp any temporary negative externalities.
(2) It's also not clear that increasing population increases meat-eating in equilibrium. Presumably at some point in our technological development, the harms of factory farming will be alleviated (e.g. by the development of affordable clean meat). Adding more people to the current generation moves forward both meat-eating and economic & technological development. It doesn't necessarily change the total number of meat-eaters who exist before our civilization develops beyond factory farming.
But also: people (including those saved via GHD interventions) plausibly still ought to offset the harms caused by their diets. (Investing resources to speed up the development of clean meat, for example, seems very good.)
I think the idea is to reduce the future population of meat-eaters by encouraging contraceptive use, so kind of the opposite (in terms of total population) of saving lives.
(I have to say, the idea that we should positively prefer future people not to exist sounds pretty uncomfortable to me, and certainly less appealing than supporting people in making whatever reproductive decisions they personally prefer, which would include both contraceptive and fertility/child support.)
Interesting, thanks for the link! I agree that being a useful social ally and doing what's morally best can come apart, and that people are often (lamentably) more interested in the former.
Yeah, that seems right as a potential "failure mode" for explicit ethics taken to extremes. But of course it needs to be weighed against the potential failures of implicit ethics, like providing cover for not actually doing any good.
I attempt to survey (and address) what I see as the main criticisms in my academic paper, "Why Not Effective Altruism?" (summarized here). But it's not as comprehensive as an online FAQ could be.