Richard Y Chappell
Academic philosopher, co-editor of utilitarianism.net, blogs at https://rychappell.substack.com/
This is a very unfortunate situation, but as a general piece of life advice for anyone reading this: expressions of interest are not commitments and should not be interpreted (let alone acted upon!) as such.
For example, within academia, a department might express interest in having Prof. X join their department. But there’s no guarantee it will work out. And if Prof. X prematurely quit their existing job before having a new contract in hand, they would be taking a massive career risk!
(I’m not making any comment on the broader issues raised here; I sympathize with all involved over the unfortunate miscommunication. Just thought it was important to emphasize this particular point. Disclosure: I’ve recently had positive experiences with EAIF.)
Great post. I strongly agree with the core point.
Regarding the last section: it’d be an interesting experiment to add a “democratic” community-controlled fund to supplement the existing options. But I wouldn’t want to lose the existing EA funds, with their vetted expert grantmakers. I personally trust (and agree with) the “core EAs” more than the “concerned EAs”, and would be less inclined to donate to a fund where the latter group had more influence. But by all means, let a thousand flowers bloom—folks could then direct their donations to the fund that’s managed as they think best.
[ETA: Just saw that Jason has already made a similar point.]
forced to watch money get redirected from the Global South to AI researchers.
I don’t think this is a healthy way of framing disagreements about cause prioritization. Imagine if a fan of GiveDirectly started complaining about GiveWell’s top charities for “redirecting money from the wallets of the world’s poorest villagers...” Sounds almost like theft! Except, of course, that the “default” implicitly attributed here is purely rhetorical. No cause has any prior claim to the funds. The only question is where best to send them, and this should be determined in a cause-neutral way, not by picking out any one cause as the privileged “default” that is somehow robbed of its due by any competing candidate that receives funding.
Of course, you’re free to feel frustrated when others disagree with your priorities. I just think that the rhetorical framing of “redirected” funds is (i) not an accurate way to think about the situation, and (ii) potentially harmful, insofar as it seems apt to feed unwarranted grievances. So I’d encourage folks to try to avoid it.
Effective Altruists want to effectively help others. I think it makes perfect sense for this to be an umbrella movement that includes a range of different cause areas. (And literally saving the world is obviously a legitimate area of interest for altruists!)
Cause-specific movements are great, but they aren’t a replacement for EA as a cause-neutral movement to effectively do good.
This is really sad news. I hope everyone working there has alternative employment opportunities (far from a given in academia!).
I was shocked to hear that the philosophy department imposed a freeze on fundraising in 2020. That sounds extremely unusual, and I hope we eventually learn more about the reasons behind this extraordinary institutional hostility. (Did the university shoot itself in the financial foot for reasons of “academic politics”?)
A minor note on the forward-looking advice: “short-term renewable contracts” can have their place, especially for trying out untested junior researchers. But you should be aware that they also filter out mid-career academics (especially those with family obligations) who could potentially bring a lot to a research institution, but would never leave a tenured position for a short-term one. Not everyone who is unwilling to gamble away their academic career is thereby a “careerist” in the derogatory sense.
Great post! I like your ‘1 in a million’ threshold as a heuristic, or perhaps a sufficient condition, for being non-Pascalian. But I think that arbitrarily lower probabilities could also be non-Pascalian, so long as they are sufficiently “objective” or robustly grounded.
Quick argument for this conclusion: just imagine scaling up the voting example. It seems worth voting in any election that significantly affects N people, where your chance of making a (positive) difference is inversely proportional to N (say, within an order of magnitude of 1/N, or better). So long as scale and probability remain approximately inversely proportional, the precise value of N doesn’t seem to make a difference to the choice-worthiness of voting.
Crucially, there are well-understood mechanisms and models that ground these probability assignments. We’re not just making numbers up, or offering a purely subjective credence. Asteroid impacts seem similar. We might have robust statistical models, based on extensive astronomical observation, that allow us to assign a 1/trillion chance of averting extinction through some new asteroid-tracking program, in which case it seems to me that we should clearly take those expected value calculations at face value and act accordingly. However tiny the probabilities may be, if they are well-grounded, they’re not “Pascalian”.
Pascalian probabilities are instead (I propose) ones that lack robust epistemic support. They’re more or less made up, and could easily be “off” by many, many orders of magnitude. Per Holden Karnofsky’s argument in ‘Why we can’t take explicit expected value estimates literally’, Bayesian adjustments would plausibly mandate massively discounting these non-robust initial estimates (roughly in proportion to their claims to massive impact), leading to low adjusted expected value after all.
I like the previous paragraph as a quick solution to “Pascal’s mugging”. But even if you don’t think it works, I think this distinction between robust vs non-robustly grounded probability estimates may serve to distinguish intuitively non-Pascalian vs Pascalian tiny-probability gambles.
Conclusion: small probabilities are not Pascalian if they are either (i) not ridiculously tiny, or (ii) robustly grounded in evidence.
I’m really surprised by how common it is for people’s thoughts to turn in this direction! (cf. this recent twitter thread) A few points I’d stress in reply:
(1) Pro-natalism just means being pro-fertility in general; it doesn’t mean requiring reproduction every single moment, or no matter the costs.
(2) Assuming standard liberal views about the (zero) moral status of the non-conscious embryo, there’s nothing special about abortion from a pro-natalist perspective. It’s just like any other form of family planning—any other moment when you refrain from having a child but could have done otherwise.
(3) Violating people’s bodily autonomy is a big deal; even granting that it’s good to have more kids all else equal, it’s hard to imagine a realistic scenario in which “forced birth” would be for the best, all things considered. (For example, it’s obviously better for people to time their reproductive choices to better fit with when they’re in a position to provide well for their kids. Not to mention the Freakonomics stuff about how unwanted pregnancies, if forced to term, result in higher crime rates in subsequent decades.)
In general, we should just be really, really wary about sliding from “X is good, all else equal” to “Force everyone to do X, no matter what!” Remember your J.S. Mill, everyone! Utilitarians should be liberal.
I don’t understand why he said it at the time
Doesn’t the first sentence of his old email make this part fairly clear? It sounds like he’s talking about the classic edgelord thing of enjoying the tension between intuitive repugnance and (what he took to be) logical truth on a strictly literal reading, divorced from all subtext (which is presumably not what any reasonable person would ordinarily take the claims in question to communicate). Perhaps similar to how many philosophers find logical paradoxes invigorating. (Cf. Scott Alexander’s classic post on related issues.)
That’s not to defend it, and I agree his apology isn’t sufficiently clear about why his particular example was so egregiously poorly chosen. But it does strike me as most likely stemming from neuro-atypicality rather than racist intent, for whatever that’s worth. (Many understandably care more about racist effects than racist intent, but I mention the latter here since you seem to be asking about Bostrom’s motivations, and that does seem relevant to assessments of blameworthiness.)
Hi Matthew, thanks for clarifying that! I owe you an apology. The quoted passage jumped out at me as illustrating a trend that I was finding irksome about the volume as a whole, but I wasn’t careful enough to double-check that my editorializing was a fair representation of your article in particular. I’ll update my post with a correction.
I think it’s important to also take into account the moral risks of refusing funding for family planning specifically because you want others to have more unintended pregnancies. On broadly Kantian-inspired views, for example, this would plausibly qualify as objectionably treating people as mere means.
FWIW, I favour interventions that give people more control over their lives, including reproductive autonomy, along with making it easier for people to have more kids when they’re ready and they positively want this.
The paradox of open-mindedness
We want to be open-minded, but not so open-minded that our brains fall out. So we should be open to high-quality critiques, but not waste our time on low-quality ones. My general worry with this post is that it doesn’t distinguish between the two. There seems to be a background assumption that EAs dismiss anti-capitalist or post-colonial critiques because we’re just closed-minded, rather than because those critiques are bad. I’m not so sure that you can just assume this!
Doing EA Lefter?
Another general worry I have about “Doing EA Better”, and perhaps especially this post, is the extent to which it seems to be implicitly pushing an agenda of “be more generically leftist, and less analytical”. If my impression here is mistaken, feel free to clarify this (and maybe add more political diversity to your list of recommended “deep critiques”—should we be as open to Hanania’s “anti-woke” stuff as to Crary et al?).
Insofar as the general message is, in effect, “think in ways that are less distinctive of EA”, whether this is good or bad advice will obviously depend on whether EA-style thinking is better or worse than the alternatives. Presumably most of us are here because we think it’s better. So that makes “be less distinctively EA” a hard sell, especially without firm evidence that the alternatives are better.
Some of this feels to me like, “Stop being you! Be this other person instead.” I don’t like this advice at all.
I wonder if it’s possible to separate out some of the more neutral advice/suggestions from the distracting “stop thinking in traditional analytic style” advice?
I wrote a substack post, ‘Text, Subtext, and Miscommunication’, that touches on some relevant issues.
It’s indisputable that some lives are more instrumentally valuable (to save) than others. So if you hold that all lives are equally intrinsically valuable, it follows that some lives are all-things-considered more valuable to save than others (due to having the same intrinsic value, but more instrumental value).
To avoid that “uncomfortable”-sounding conclusion, you would need to reject the second premise (that all lives are equally intrinsically valuable). That is, you would have to claim that some lives are intrinsically more valuable than others. And that is surely a much more uncomfortable conclusion!
I think we should conclude from this that there’s actually nothing remotely morally objectionable about saying that some lives are more valuable to save for purely instrumental reasons. The thing to avoid is claiming that some lives are intrinsically more important. It “sounds bad” to say “some lives are more valuable to save than others” because it sounds like you’re claiming that some lives are inherently more valuable than others. So it’s important to explicitly cancel the implicature by adding the “for purely instrumental reasons” clause.
But once clarified, it’s a perfectly innocuous claim. Anyone who still thinks it sounds bad at that point needs to think more clearly.
I would guess that they’re people, and I always prefer for people to have a more accurate impression of valuable ideas (all else equal). Some might then decide to learn more about those ideas, and act upon them in valuable ways.
(I’m not suggesting that anyone prioritize this sort of community-building over other work that may be more pressing for them. But it seems weird to dismiss it entirely.)
I like the central points that (i) even weak assumptions suffice to support catastrophic risk reduction as a public policy priority, and (ii) it’s generally better (more effective) to argue from widely-accepted assumptions than from widely-rejected ones.
But I worry about the following claim:
There are clear moral objections against pursuing democratically unacceptable policies
This seems objectionably conservative, and would seem to preclude any sort of “systemic change” that is not already popular. Closing down factory farms, for example, is clearly “democratically unacceptable” to the current electorate. But it would be ridiculous to claim that there are “clear moral objections” to vegan abolitionist political activism.
Obviously the point of such advocacy is to change what is (currently) regarded as “democratically (un)acceptable”. If the advocacy succeeds, then the result is no longer democratically unacceptable. If the advocacy fails, then it isn’t implemented. In neither case is there any obvious moral objection to advocating, within a democracy, for what you think is true and good.
Let me separately register that I think it reflects poorly on the authors of the post that they didn’t pause to acknowledge this commonsense point. Zealous legalism is pretty unpleasant to be around, and I would hate for this sort of thing (more witch-hunt than whistleblowing, IMO) to become a standard part of EA culture.
Whistleblowers should try to expose wrongdoing, not legalistic “gotchas”. Headline references to “breaking the law” strongly implicate that one is talking about serious, morally justified laws (like those against murder, fraud, etc.). But I recall reading that there are so many obscure and arbitrary laws on the books that probably everyone has unwittingly broken dozens of them without ever realizing it. So if you’re going to go after people not for doing anything wrong but for making themselves vulnerable to legalistic coercion, I think it’s important to be clear on this distinction.
Upvoted despite disagreeing, since I think this is an important question to explore. But I’m puzzled by the following claim:
from where I stand, someone who is giving half their salary to the “altruistic cause” of having community events and recruiting more people isn’t effective altruism.
Obviously the motivation for community-building is not that the community is an end in itself, but instrumental: more people “joining EA”, taking the GWWC pledge and/or going into directly high-impact work, means indirectly causing more good for all the other EA causes that we ultimately care about. Without addressing this head-on, I’m not sure which of the following you mean:
(1) An empirical disagreement: You deny that EA community-building is instrumentally effective for (indirectly) helping other, first-order EA causes.
(2) A moral/conceptual disagreement: You deny that indirectly causing good counts as altruism.
Can you clarify which of these you have in mind?
It’s worth flagging the obvious solution of supporting higher taxes on billionaires while allowing them to donate instead via the charitable tax deduction. (I mention this in the comments to my post on Billionaire Philanthropy, which Dylan Matthews cites and draws upon for the “Given that the billionaires do exist, what else would you rather they spend money on?” argument.)
P.S. Speaking as a New Zealander, I’m pretty confident that most of my compatriots believe that American billionaires should pay more taxes!
I didn’t vote, but was (mildly) annoyed to find the linked post is partly pay-walled, which makes the link-post feel uncomfortably spammy? I expect it would get a more positive reception if the content was reproduced on the forum, rather than directing people to a paid subscription.
I found it a bit hard to discern what constructive points he was trying to make amidst all the snark. But the following seemed like a key passage in the overall argument:
Putting aside the implicit status games and weird psychological projection, I don’t understand what practical point Wenar is trying to make here. If the aid is indeed net good, as he seems to grant, then “pills improve lives” seems like the most important insight not to lose sight of. And if someone starts “haranguing” you for affirming this important insight, it does seem like it could come across as trying to prevent that net good from happening. (I don’t see any reason to personalize the concern, as about “stopping me”—that just seems blatantly uncharitable.)
It sounds like Wenar just wants more public affirmations of causal complexity to precede any claim about our potential to do good? But it surely depends on context whether that’s a good idea. Too much detail, especially extraneous detail that doesn’t affect the bottom line recommendation, could easily prove distracting and cause people (like, seemingly, Wenar himself) to lose sight of the bottom line of what matters most here.
So that section just seemed kind of silly. There was a more reasonable point mixed in with the unreasonable in the next section:
The initial complaint here seems fine: presumably GiveWell could (marginally) improve their cost-effectiveness models by trying to incorporate various risks or costs that it sounds like they currently don’t consider. Mind you, if nobody else has any better estimates, then complaining that the best-grounded estimates in the world aren’t yet perfect seems a bit precious. Then the closing suggestion that they prominently highlight expected deaths (from indirect causes like bandits killing people while trying to steal charity money) is just dopey. Ordinary readers would surely misread that as suggesting that the interventions were somehow directly killing people. Obviously the better-justified display is the net effect in lives saved. But we’re not given any reason to expect that GiveWell’s current estimates here are far off.
Q: Does Wenar endorse inaction?
Wenar’s “most important [point] to make to EAs” (skipping over his weird projection about egotism) is that “If we decide to intervene in poor people’s lives, we should do so responsibly—ideally by shifting our power to them and being accountable for our actions.”
The overwhelming thrust of Wenar’s article—from the opening jab about asking EAs “how many people they’ve killed”, to the conditional I bolded above—seems to be to frame charitable giving as a morally risky endeavor, in contrast to the implicit safety of just doing nothing and letting people die.
I think that’s a terrible frame. It’s philosophically mistaken: letting people die from preventable causes is not a morally safe or innocent alternative (as is precisely the central lesson of Singer’s famous article). And it seems practically dangerous to publicly promote this bad moral frame, as he is doing here. The most predictable consequence is to discourage people from doing “riskily good” things like giving to charity. Since he seems to grant that aid is overall good and admirable, it seems like by his own lights he should regard his own article as harmful. It’s weird.
(If he just wants to advocate for more GiveDirectly-style anti-paternalistic interventions that “shift our power to them”, that seems fine but obviously doesn’t justify the other 95% of the article.)