Richard Y Chappell: academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog. 10% Pledge #54 with GivingWhatWeCan.org.
rich people in general, who by and large I think make absolutely terrible decisions about charity
I think this follows from a more general fact about people. If anything, I would guess that there's a positive correlation between wealth and EA values: that a higher (though still depressingly low) proportion of wealthy people donate to effective causes than is true of the general population. Would be interesting to see actual data, though.
I'm happy for folks to read the article and judge for themselves. The author briefly references some reasonable ideas in the course of building up a fundamentally unreasonable thesis: that "The problem [with effective altruism] is that we've stretched optimization beyond its optimal limits," and that sometimes donating to the local homeless over EA charities will better serve "the real value you hold dear [that is, helping people]."
They most clearly exhibit the fallacy I warn against ("some tradeoffs are unclear, therefore you might as well be an ineffective altruist") in this passage criticizing attempted optimization:
In your case, you're trying to optimize how much you help others, and you believe that means focusing on the neediest. But "neediest" according to what definition of needy? You could assume that financial need is the only type that counts, so you should focus first on lifting everyone out of extreme poverty, and only then help people in less dire straits. But are you sure that only the brute poverty level matters?
… if you want to optimize, you need to be able to run an apples-to-apples comparison: to calculate how much good different things do in a single currency, so you can pick the best option. But because helping people isn't reducible to one thing (it's lots of incommensurable things, and how to rank them depends on each person's subjective philosophical assumptions), trying to optimize in this domain will mean you have to artificially simplify the problem. You have to pretend there's no such thing as oranges, only apples.
I also think their discussion of integrity is fundamentally confused:
It sounds like that's what you're feeling when you pass a person experiencing homelessness and ignore them. Ignoring them makes you feel bad because it alienates you from the part of you that is moved by this person's suffering, the part that sees the orange but is being told there are only apples. That core part of you is no less valuable than the optimizing part, which you liken to your "brain." It's not dumber or more irrational. It's the part that cares deeply about helping people, and without it, the optimizing part would have nothing to optimize!
It's not apples and oranges. It's just helping people you can see vs. helping people who are out of sight, and so less emotionally engaging. Those shouldn't be different values; as the author themselves says at the start, there's just the one value of helping people, and different strategies for how to achieve it. What they don't acknowledge is that the strategy of prioritizing more salient / emotionally-engaging people is less effective at helping, even if it's more effective at indulging your emotional needs. Calling the emotional bias "integrity" is not philosophically helpful or illuminating. It's muddled thinking, running cover for blatant bias.
Agreed!
Fair question! I don't know the answer. But I'd be surprised if the two came apart too sharply in this case (even though, as you rightly note, they can drastically diverge in principle). My sense is that GiveWell aims to recommend relatively "safe" bets, rather than a "hits-based" EV-maximizing approach. (I think it's important to be transparent when recommending the latter, just because I take it many people are not in fact so comfortable with pursuing that strategy, even if I think they ought to be.)
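A toy calculation can make the contrast between "safe" bets and a "hits-based" EV-maximizing approach concrete. All of the probabilities and payoffs below are invented purely for illustration; they are not GiveWell's figures:

```python
# Toy comparison of a "safe" bet vs. a "hits-based" bet in expected-value
# terms. All numbers are illustrative assumptions, not real charity data.

def expected_lives_saved(prob_success: float, lives_if_success: float) -> float:
    """Expected number of lives saved by a donation strategy."""
    return prob_success * lives_if_success

safe_bet = expected_lives_saved(0.95, 10)      # near-certain, modest payoff
hits_based = expected_lives_saved(0.01, 5000)  # long shot, huge payoff

# Orthodox decision theory ranks the long shot higher here, even though
# it accomplishes nothing 99% of the time.
assert hits_based > safe_bet
```

This is why transparency about the strategy matters: many donors would not in fact endorse a bet that usually accomplishes nothing, however high its expected value.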
I actually think that's fine. You can always look it up if you're interested in the details, but for the casual consumer of charity-evaluation information, the bottom-line best estimate is the info that's decision-relevant, not the uncertainty range. I think it's completely fine for people to share core info like this without simultaneously sharing all the fine print. Just like it's OK for public health experts to promote simple pro-vax messaging that doesn't include all the fine print.
(See moral misdirection for my principled account of when it is or isn't OK to leave out information.)
Absent these ranges, I see these claims repeated all over the place as if $5000 really is an objectively correct answer and not a rough estimate.
Here you just seem to be repeating the mistake of assuming that presenting a best estimate without also presenting the uncertainty range is thereby to present it as certain. I disagree with that interpretative norm. There is no "as if" being presented. That's on you.
Not sure why this got tagged as "Community". It's not about the community, but about applying EA principles, substantive issues in applied decision theory, and associated mistakes in the reasoning of many critics of effective altruism. (Maybe an overzealous bot didn't like the joking footnote reference to Kamala Harris's "coconut tree" line, and it got mischaracterized as political?)
Edit: fixed now, thanks mods!
My central objection to Thorstad's work on this is the failure to properly account for uncertainty. Attempting to exclusively model a most-plausible scenario, and drawing dismissive conclusions about longtermist interventions based solely on that, fails to reflect best practices for reasoning under conditions of uncertainty. (I've also raised this criticism against Schwitzgebel's negligibility argument.) You need to consider the full range of possible models / scenarios!
It's essentially fallacious to think that "plausibly incorrect modeling assumptions" undermine expected value reasoning. High expected value can still result from regions of probability space that are epistemically unlikely (or that reflect "plausibly incorrect" conditions or assumptions). If there's even a 1% chance that the relevant assumptions hold, just discount the output value accordingly. Astronomical stakes are not going to be undermined by lopping off the last two zeros.
Tarsney's Epistemic Challenge to Longtermism is so much better at this. As he aptly notes, as long as you're on board with orthodox decision theory (and so don't disproportionately discount or neglect low-probability possibilities), and not completely dogmatic in refusing to give any credence at all to the longtermist-friendly assumptions (robust existential security after a time of perils, etc.), reasonable epistemic worries ultimately aren't capable of undermining the expected value argument for longtermism.
(These details can still be helpful for getting better-refined EV estimates, of course. But that's very different from presenting them as an objection to the whole endeavor.)
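The discounting move described above can be sketched numerically. The 1% probability and the astronomical payoff here are illustrative stand-ins, not figures from Thorstad or Tarsney:

```python
import math

# Discounting an expected-value estimate for "plausibly incorrect"
# modeling assumptions. All numbers are purely illustrative.

def discounted_ev(p_assumptions_hold: float, value_if_they_hold: float) -> float:
    """Expected value after discounting for the chance the model is wrong."""
    return p_assumptions_hold * value_if_they_hold

face_value = discounted_ev(1.0, 1e30)   # take the model's assumptions for granted
discounted = discounted_ev(0.01, 1e30)  # give those assumptions only a 1% chance

# A 99% discount just "lops off the last two zeros": the stakes
# remain astronomical.
assert math.isclose(discounted, face_value / 100)
```

The point is structural, not numerical: so long as the longtermist-friendly assumptions get nonzero credence, discounting shrinks the estimate without changing its order of magnitude enough to matter.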
Just to expand on the above, I've written a new blog post, "It's OK to Read Anyone", which explains (i) why I won't personally engage in intellectual boycotts [obviously the situation is different for organizations, and I'm happy for them to make their own decisions!], and (ii) what it is in Hanania's substack writing that I personally find valuable and worth recommending to other intellectuals.
FYI, the recording is now available, and (upon reviewing it) I've expanded upon my other comments in a new post at Good Thoughts. (I'd be curious to hear from anyone who has a strikingly different impression of the debate than I had.)
Right, you'd also have to oppose healthcare expansion, vaccines (against lethal illnesses), pandemic mitigation efforts, etc. I guess if you really believed it, you would take the results (more early deaths) to have positive expected value. It's a deeply misanthropic thesis. So it's probably worth getting clearer on why it isn't ultimately credible, despite initial appearances.
If you can stipulate (e.g. in a thought experiment) that the consequences of coercion are overall for the best, then I favor it in that case. I just have a very strong practical presumption (see: principled proceduralism) that liberal options tend to have higher expected value in real life, once all our uncertainty (and fallibility) is fully taken into account.
Maybe also worth noting (per my other comment in this thread) that I'm optimistic about the long-term value of humanity and human innovation. So, putting autonomy considerations aside, if I could either encourage people to have more kids or fewer, I think more is better (despite the short-term costs to animal welfare).
(1) If building human capacity has positive long-term ripple effects (e.g. on economic growth), these could be expected to swamp any temporary negative externalities.
(2) It's also not clear that increasing population increases meat-eating in equilibrium. Presumably, at some point in our technological development, the harms of factory farming will be alleviated (e.g. by the development of affordable clean meat). Adding more people to the current generation moves forward both meat-eating and economic & technological development. It doesn't necessarily change the total number of meat-eaters who exist before our civilization develops beyond factory farming.
But also: people (including those saved via GHD interventions) plausibly still ought to offset the harms caused by their diets. (Investing resources to speed up the development of clean meat, for example, seems very good.)
I think the idea is to reduce the future population of meat-eaters by encouraging contraceptive use, so kind of the opposite (in terms of total population) of saving lives.
(I have to say, the idea that we should positively prefer future people to not exist sounds pretty uncomfortable to me, and certainly less appealing than supporting people in making whatever reproductive decisions they personally prefer, which would include both contraceptive and fertility/child support.)
Interesting, thanks for the link! I agree that being a useful social ally and doing what's morally best can come apart, and that people are often (lamentably) more interested in the former.
Yeah, that seems right as a potential "failure mode" for explicit ethics taken to extremes. But of course it needs to be weighed against the potential failures of implicit ethics, like providing cover for not actually doing any good.
Everyone has the right to life. That implies that everyone who wants to live is guaranteed by society the ability to do so, even if the cause of otherwise not living is natural (for example, dying of ageing).
That's not what is ordinarily meant by "the right to life". (See Judy Thomson's famous paper, "A Defense of Abortion", which argues that the right to life is really just the right not to be killed unjustly. It is not violated by, e.g., unplugging yourself from someone who depends upon your organs to live.)
I think we should want society to offer just those rights that would best promote overall flourishing. A guarantee against premature death obviously doesn't meet that criterion. (Suppose we could save one person's life at the cost of trillions of dollars, leaving nothing for education or other important "quality of life" improvements.)
More generally, you seem to be thinking of death as an absolutely bad thing: something to be avoided at all costs. That seems mistaken to me. Death is better understood as a merely comparative harm: a shorter happy life is not as good as a longer happy life would be (all else equal). But that's no reason at all to prefer that the short happy life never exist at all.
I'm not making any claims either way about that. I'm just pointing out (contra Matthew) that it is clearly not "irrelevant spam". Your objections are substantive, not procedural. Folks who want to censor views they find offensive should be honest about what they're doing, not pretend that they're just filtering out viagra ads.
You elsewhere link to this post as a "clear example of a post that would be banned under the rules". That post includes the following argument:
People act like genetic engineering would be some sort of horrifying mad science project to create freakish mutant supermen who can shoot acid out of their eyes. But I would be pretty happy if it could just make everyone do as well as Ashkenazi Jews. The Ashkenazim I know are mostly well-off, well-educated, and live decent lives. If genetic engineering could give those advantages to everyone, it would easily qualify as the most important piece of social progress in history, even before we started giving people the ability to shoot acid out of their eyes.
The post concludes, "EA's existing taboos are preventing it from answering questions like these, and as new taboos are accepted, the effectiveness of the movement will continue to wane."
You may well judge this to be wrong, as a substantive matter. But I don't understand how anyone could seriously claim that this is "off topic and irrelevant to EA." (The effectiveness of the movement is obviously a matter of relevant concern for EA.) People's tendency to dishonestly smuggle substantive judgments under putatively procedural grounds is precisely why I'm so suspicious of such calls for censorship.
Topical relevance is independent of the position one takes on a topic, so the rule you're suggesting also implies that condemnations of race science are spam and should be deleted. (I think I'd be fine with a consistently applied rule of that form. But it's clearly not the OP's position.)
I attempt to survey, and address, what I see as the main criticisms in my academic paper, "Why Not Effective Altruism?" (summarized here). But it's not as comprehensive as an online FAQ could be.