Academic philosopher, co-editor of utilitarianism.net, blogs at https://rychappell.substack.com/
Richard Y Chappell
EA “Worldviews” Need Rethinking
Review of The Good It Promises, the Harm It Does
Doing Good Effectively is Unusual
Reflections on Vox’s “How effective altruism let SBF happen”
Puzzles for Everyone
This is a very unfortunate situation, but as a general piece of life advice for anyone reading this: expressions of interest are not commitments, and should not be “interpreted” (let alone acted upon!) as such.
For example, within academia, a department might express interest in having Prof. X join them. But there’s no guarantee it will work out. And if Prof. X prematurely quit their existing job before having a new contract in hand, they would be taking a massive career risk!
(I’m not making any comment on the broader issues raised here; I sympathize with all involved over the unfortunate miscommunication. Just thought it was important to emphasize this particular point. Disclosure: I’ve recently had positive experiences with EAIF.)
A trilogy on anti-philanthropic misdirection
Great post. I strongly agree with the core point.
Regarding the last section: it’d be an interesting experiment to add a “democratic” community-controlled fund to supplement the existing options. But I wouldn’t want to lose the existing EA funds, with their vetted expert grantmakers. I personally trust (and agree with) the “core EAs” more than the “concerned EAs”, and would be less inclined to donate to a fund where the latter group had more influence. But by all means, let a thousand flowers bloom—folks could then direct their donations to the fund that’s managed as they think best.
[ETA: Just saw that Jason has already made a similar point.]
forced to watch money get redirected from the Global South to AI researchers.
I don’t think this is a healthy way of framing disagreements about cause prioritization. Imagine if a fan of GiveDirectly started complaining about GiveWell’s top charities for “redirecting money from the wallets of the world’s poorest villagers...” Sounds almost like theft! Except, of course, that the “default” implicitly attributed here is purely rhetorical. No cause has any prior claim to the funds. The only question is where best to send them, and that should be determined in a cause-neutral way, without picking out any one cause as the privileged “default” that is somehow robbed of its due by whichever competing candidates receive funding.
Of course, you’re free to feel frustrated when others disagree with your priorities. I just think that the rhetorical framing of “redirected” funds is (i) not an accurate way to think about the situation, and (ii) potentially harmful, insofar as it seems apt to feed unwarranted grievances. So I’d encourage folks to try to avoid it.
The Nietzschean Challenge to Effective Altruism
Naïve vs Prudent Utilitarianism
Effective Altruists want to effectively help others. I think it makes perfect sense for this to be an umbrella movement that includes a range of different cause areas. (And literally saving the world is obviously a legitimate area of interest for altruists!)
Cause-specific movements are great, but they aren’t a replacement for EA as a cause-neutral movement to effectively do good.
The Strange Shortage of Moral Optimizers
This is really sad news. I hope everyone working there has alternative employment opportunities (far from a given in academia!).
I was shocked to hear that the philosophy department imposed a freeze on fundraising in 2020. That sounds extremely unusual, and I hope we eventually learn more about the reasons behind this extraordinary institutional hostility. (Did the university shoot itself in the financial foot for reasons of “academic politics”?)
A minor note on the forward-looking advice: “short-term renewable contracts” can have their place, especially for trying out untested junior researchers. But be aware that they also filter out mid-career academics (especially those with family obligations) who could bring a lot to a research institution but would never leave a tenured position for a short-term one. Not everyone who is unwilling to gamble away their academic career is thereby a “careerist” in the derogatory sense.
Review of Animal Liberation Now
Meat Externalities
Yglesias on EA and politics
Great post! I like your ‘1 in a million’ threshold as a heuristic, or perhaps a sufficient condition, for being non-Pascalian. But I think that arbitrarily lower probabilities could also be non-Pascalian, so long as they are sufficiently “objective” or robustly grounded.
Quick argument for this conclusion: just imagine scaling up the voting example. It seems worth voting in any election that significantly affects N people, where your chance of making a (positive) difference is inversely proportional to N (say, within an order of magnitude of 1/N, or better). So long as scale and probability remain approximately inversely proportional, the precise value of N doesn’t seem to make a difference to the choice-worthiness of voting.
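To spell out the arithmetic (a minimal sketch; $v$ is my illustrative stand-in for the value at stake per person, not a figure from the original example):

$$\mathbb{E}[\text{value of voting}] \approx \underbrace{\tfrac{1}{N}}_{\text{chance of being decisive}} \times \underbrace{N \cdot v}_{\text{total value at stake}} = v.$$

The $N$s cancel, so the expected value stays roughly constant however large the election grows and however tiny the probability of decisiveness becomes.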
Crucially, there are well-understood mechanisms and models that ground these probability assignments. We’re not just making numbers up, or offering a purely subjective credence. Asteroid impacts seem similar. We might have robust statistical models, based on extensive astronomical observation, that allow us to assign a 1/trillion chance of averting extinction through some new asteroid-tracking program, in which case it seems to me that we should clearly take those expected value calculations at face value and act accordingly. However tiny the probabilities may be, if they are well-grounded, they’re not “Pascalian”.
Pascalian probabilities are instead (I propose) ones that lack robust epistemic support. They’re more or less made up, and could easily be “off” by many, many orders of magnitude. Per Holden Karnofsky’s argument in ‘Why we can’t take explicit expected value estimates literally’, Bayesian adjustments would plausibly mandate massively discounting these non-robust initial estimates (roughly in proportion to their claims to massive impact), leading to low adjusted expected value after all.
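For a minimal sketch of the sort of Bayesian adjustment at issue (assuming, purely for illustration, a normal prior over cost-effectiveness and normally distributed estimation error; the symbols here are mine, not Karnofsky’s): if your prior has mean $\mu_0$ and variance $\sigma_0^2$, and an explicit estimate $\hat{x}$ comes with error variance $\sigma_e^2$, the posterior mean is

$$\mu_{\text{post}} = \frac{\sigma_e^2\,\mu_0 + \sigma_0^2\,\hat{x}}{\sigma_0^2 + \sigma_e^2}.$$

The less robust the estimate (the larger $\sigma_e^2$), the more the posterior collapses back toward the prior, so a made-up claim of astronomical expected value gets discounted roughly in proportion to its extravagance.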
I like the previous paragraph as a quick solution to “Pascal’s mugging”. But even if you don’t think it works, I think this distinction between robustly vs. non-robustly grounded probability estimates may serve to distinguish intuitively non-Pascalian from Pascalian tiny-probability gambles.
Conclusion: small probabilities are not Pascalian if they are either (i) not ridiculously tiny, or (ii) robustly grounded in evidence.
I’m really surprised by how common it is for people’s thoughts to turn in this direction! (cf. this recent twitter thread) A few points I’d stress in reply:
(1) Pro-natalism just means being pro-fertility in general; it doesn’t mean requiring reproduction every single moment, or no matter the costs.
(2) Assuming standard liberal views about the (zero) moral status of the non-conscious embryo, there’s nothing special about abortion from a pro-natalist perspective. It’s just like any other form of family planning—any other moment when you refrain from having a child but could have done otherwise.
(3) Violating people’s bodily autonomy is a big deal; even granting that it’s good to have more kids all else equal, it’s hard to imagine a realistic scenario in which “forced birth” would be for the best, all things considered. (For example, it’s obviously better for people to time their reproductive choices to better fit with when they’re in a position to provide well for their kids. Not to mention the Freakonomics stuff about how unwanted pregnancies, if forced to term, result in higher crime rates in subsequent decades.)
In general, we should just be really, really wary about sliding from “X is good, all else equal” to “Force everyone to do X, no matter what!” Remember your J.S. Mill, everyone! Utilitarians should be liberal.
I found it a bit hard to discern what constructive points he was trying to make amidst all the snark. But the following seemed like a key passage in the overall argument:
Putting aside the implicit status games and weird psychological projection, I don’t understand what practical point Wenar is trying to make here. If the aid is indeed net good, as he seems to grant, then “pills improve lives” seems like the most important insight not to lose sight of. And if someone starts “haranguing” you for affirming this important insight, it does seem like it could come across as trying to prevent that net good from happening. (I don’t see any reason to personalize the concern, as about “stopping me”—that just seems blatantly uncharitable.)
It sounds like Wenar just wants more public affirmations of causal complexity to precede any claim about our potential to do good? But it surely depends on context whether that’s a good idea. Too much detail, especially extraneous detail that doesn’t affect the bottom line recommendation, could easily prove distracting and cause people (like, seemingly, Wenar himself) to lose sight of the bottom line of what matters most here.
So that section just seemed kind of silly. There was a more reasonable point mixed in with the unreasonable in the next section:
The initial complaint here seems fine: presumably GiveWell could (marginally) improve their cost-effectiveness models by trying to incorporate various risks or costs that it sounds like they currently don’t consider. Mind you, if nobody else has any better estimates, then complaining that the best-grounded estimates in the world aren’t yet perfect seems a bit precious. Then the closing suggestion that they prominently highlight expected deaths (from indirect causes like bandits killing people while trying to steal charity money) is just dopey. Ordinary readers would surely misread that as suggesting that the interventions were somehow directly killing people. Obviously the better-justified display is the net effect in lives saved. But we’re not given any reason to expect that GiveWell’s current estimates here are far off.
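To illustrate the point with purely hypothetical numbers (mine, not GiveWell’s): if a program saves an expected 1,000 lives while the indirect risks Wenar highlights cost an expected 2 lives, the informative headline figure is the net effect, $1000 - 2 = 998$ expected lives saved; leading with “2 expected deaths” would invite exactly the misreading described above.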
Q: Does Wenar endorse inaction?
Wenar’s “most important [point] to make to EAs” (skipping over his weird projection about egotism) is that “If we decide to intervene in poor people’s lives, we should do so responsibly—ideally by shifting our power to them and being accountable for our actions.”
The overwhelming thrust of Wenar’s article—from the opening jab about asking EAs “how many people they’ve killed”, to the conditional I bolded above—is to frame charitable giving as a morally risky endeavor, in contrast to the implicit safety of just doing nothing and letting people die.
I think that’s a terrible frame. It’s philosophically mistaken: letting people die from preventable causes is not a morally safe or innocent alternative (as is precisely the central lesson of Singer’s famous article). And it seems practically dangerous to publicly promote this bad moral frame, as he is doing here. The most predictable consequence is to discourage people from doing “riskily good” things like giving to charity. Since he seems to grant that aid is overall good and admirable, it seems like by his own lights he should regard his own article as harmful. It’s weird.
(If he just wants to advocate for more GiveDirectly-style anti-paternalistic interventions that “shift our power to them”, that seems fine but obviously doesn’t justify the other 95% of the article.)