Trying to better understand the practical epistemology of EA, and how we can improve upon it.
Violet Hour
I disagree with this inference. If I’d heard that (say) supportive feminist tweets were routinely getting fewer retweets than tweets critical of feminism, I don’t think I’d believe that feminists were “definitely doing things wrong PR-wise”. Tweet numbers could be relevant evidence, given some wider context, like “there’s a social trend where the most controversial and peripheral feminist ideas get disproportionately promulgated, at the expense of more central and popular ideas”, but I’m not convinced EA is in a similar situation.
I don’t have a view on whether buying Wytham was a good idea, but I do agree with Owen that we should “let decisions be guided less by what we think looks good, and more by what we think is good”. I want people to act on important ideas, and I think it’s bad when people are turned away from important ideas — but one important idea I want to spread is Owen’s, where we emphasize the virtue of performing actions you can ultimately stand behind, even if the action has bad optics.
The short answer is: I think the norm delivers meaningfully different verdicts for certain ways of cashing out ‘act consequentialism’, but I imagine that you (and many other consequentialists) are going to want to say that the ‘Practical Kantian’ norm is compatible with act consequentialism. I’ll first discuss the practical question of deontic norms and EA’s self-conception, and then respond to the more philosophical question.
1.
If I’m right about your view, my suggested Kantian spin would (for you) be one way among many to talk about deontic norms, which could be phrased in more explicitly act-consequentialist language. That said, I still think there’s an argument for EA as a whole making deontic norms more central to its self-conception, as opposed to a conception where some underlying theory of the good is more central. EA is trying to intervene on people’s actions, after all, and your underlying theory of the good (at least in principle) underdetermines your norms for action. So, to me, it seems better to just directly highlight the deontic norms we think are valuable. EA is not a movement of moral theorists qua moral theorists; we’re a movement of people trying to do stuff that makes the world better. Even as a consequentialist, I’d guess you’re only going to want to be involved with a movement that shares broadly similar views with you about the action-relevant implications of consequentialism.
I also think there should be clear public work outlining how the various deontic norms we endorse in EA clearly follow from consequentialist theories. Otherwise, I can see internal bad actors (or even just outsiders) thinking that statements about the importance of deontological norms are just about ‘brand management’, or whatever. I think it’s important to have a consistent story about the ways in which our deontic norms relate to our more foundational principles, both so that outsiders don’t feel like they’re being misled about what EA is about, and so that we have really explicit grounds on which to condemn certain behaviors as legitimately and unambiguously violating norms that we care about.
(Also, independently: I’ve met many people in EA who seem to flit between ‘EUT is the right procedure for practical decision-making’ and ‘EUT is an underratedly useful tool’. Even aside from discussions of side-constraints, I don’t think we have a clear conception of what our deontic norms are, and I think clarifying this would be independently beneficial. For instance, I think it would be good to have a clearer account of the procedures that really drive our prioritization decisions.)

2.
On a more philosophical level, I believe that various puzzle cases in decision theory help motivate the case for treating maxims as the appropriate evaluative focal point wrt rational decision-making, rather than acts. Here are some versions of act consequentialism that I think will diverge from the Practical Kantian norm:
Kant+CDT tells you to one-box in the standard Newcomb problem, whereas Consequentialism+CDT doesn’t (a toy calculation after this list illustrates how the act-level verdicts come apart).
Consequentialism+EDT is vulnerable to XOR blackmail, whereas Kant+CDT isn’t.
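To make the divergence in those bullets a bit more concrete, here’s a toy expected-value sketch of the standard Newcomb problem. All the numbers and the helper functions (`edt_value`, `cdt_value`) are my own illustrative assumptions, not anything from the discussion above: the sketch just shows how an act-level EDT calculation (conditioning on the act) and an act-level CDT calculation (holding the already-made prediction fixed) come apart.

```python
# Toy Newcomb calculation. All numbers and function names here are my own
# illustrative assumptions, not anything from the original discussion.
ACCURACY = 0.99            # chance the predictor correctly foresaw your choice
BIG, SMALL = 1_000_000, 1_000

def edt_value(one_box: bool) -> float:
    """Evidential expected value: condition on the act you actually perform."""
    if one_box:
        # If you one-box, the opaque box is very likely full.
        return ACCURACY * BIG
    # If you two-box, the opaque box is very likely empty.
    return ACCURACY * SMALL + (1 - ACCURACY) * (BIG + SMALL)

def cdt_value(one_box: bool, p_box_full: float) -> float:
    """Causal expected value: the prediction is already fixed, so hold it constant."""
    return p_box_full * BIG + (0.0 if one_box else SMALL)

# EDT recommends one-boxing: ~990,000 vs ~11,000.
print(edt_value(True), edt_value(False))

# CDT recommends two-boxing for *any* fixed chance the box is full,
# since taking both boxes always adds SMALL.
for p in (0.0, 0.5, 1.0):
    print(cdt_value(True, p), cdt_value(False, p))
```

This only illustrates the act-level CDT/EDT split that the bullets lean on; the further Kantian move of evaluating maxims rather than individual acts isn’t something a one-line calculation captures.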
Perhaps there is a satisfying decision theory which, combined with act-consequentialism, provides you with (what I believe to be) the right answers to decision-theoretic puzzle cases, though I’m currently not convinced. I think I might also disagree with you about the implications of collective action problems for consequentialism (though I agree that what you describe as “The Rounding to Zero Fallacy” and “The First-Increment Fallacy” are legitimate errors), but I’d want to think more about those arguments before saying anything more.
Thanks for the comment, this is a useful source. I agree that SBF’s actions violated “high standards of honesty” (as well as, um, more lenient ones), and don’t seem like the actions of a good citizen.
Still, I feel hesitant about claims like “Sam violated the principles of the EA community”, because your cited quote is not the only way that EA is defined. I agree that we can find accounts of EA under which Sam violated those principles. Relative to those criteria, it would be correct to say “Sam violated EA principles”. Thus, I think you and I both agree that saying things like “Sam acted in accordance with EA principles” would be wrong.

However, I have highlighted other accounts of “what EA is about”, under which I think it’s much harder to say that Sam straightforwardly violated those principles — accounts which place more emphasis on the core idea of maximization. And my intuitions about when it’s appropriate to make claims of the form ‘Person X violated the principles of this community’ require something close to unanimity, prior to X’s action, about what the core principles actually are and what they commit you to. Because there are varying accounts of what EA ‘is’, or is ‘about’, I reject the claim that Sam violated EA principles for much the same reason that I reject claims like “Sam acted in accordance with them”. So, I still think I stand behind my indeterminacy claim.
I’m unsure where we disagree. Do you think you have more lenient standards than me for when we should talk about ‘violating norms’, or do you think that (some of?) the virtues listed in your quote are core EA principles, and close-to-unanimously agreed upon?
(No problem with self-linking, I appreciate it!)
Also, I think there’s an adequate Kantian response to your example. Am I missing something?

Part of the relevant context for your action includes, by stipulation, you knowing that the needed 50 votes have been cast. This information changes your context, and thus you reason “what action would rational agents in this situation (my epistemic state included) perform to best achieve my ends” — in this case, as the 50 votes have been cast, you don’t cast another.
So, I act on the basis of maxims, but changes in my epistemic state can still appropriately inform my decision-making.
Upvoted, but I disagree with this framing.
I don’t think our primary problem was with flawed probability assignments over some set of explicitly considered hypotheses. If I were to continue with the probabilistic metaphor, I’d be much more tempted to say that we erred in our formation of the practical hypothesis space — that is, the finite set of hypotheses that the EA community considered to be salient, and worthy of extended discussion.
Afaik (and at least in my personal experience), very few EAs seemed to think the potential malfeasance of FTX was an important topic to discuss. Because the topic wasn’t salient, few people bothered to assign explicit probabilities. To me, the fact that we weren’t focusing on the right topics, in some nebulous sense, is more concerning than the ways in which we erred in assigning probabilities to claims within our practical hypothesis space.
Thanks for the suggestion! Done now. :)
Some brief, off-the-cuff sociological reflections on the Bostrom email:
EA will continue to be appealing to those with an ‘edgelord’ streak. It’s worth owning up to this, and considering how to communicate going forward in light of that fact.
I think some of the reaction, specifically the emphasis placed on the importance of population-level averages, is indicative of an unhealthy attitude towards those averages.
I also think the ‘epistemic integrity’ angle is important.
Each consideration is discussed below.
1.
I think basically everyone here agrees that white people shouldn’t be using (or mentioning) racial slurs. I also think you should avoid generics: you very rarely gain anything, even from a purely epistemic point of view, from saying (for example) “men are more stupid than women”.
EA skews young, and does a lot of outreach on university campuses. I also think that EA will continue to be attractive to people who like to engage in the world via a certain kind of communication, and I think many people interested in EA are likely to be drawn to controversial topics. I think this is unavoidable. Given that it’s unavoidable, it’s worth being conscious of this, and directly tackling what’s to be gained (and lost) from certain provocative modes of communication, in what contexts.
We have strong evidence that lead poisoning affects IQ, and the Flint water crisis affected an area that was majority African American. Here’s one way of ‘provocatively communicating’ those facts.
I don’t like this statement and think it is true.
Deliberately provocative communication probably does have its uses, but it’s a mode of communication that can be in tension with nuanced epistemics, as well as kindness. If I’m to get back in touch with my own old edgelord streak for a moment, I’d say that one (though obviously not the major) benefit of EA is the way it can transform ‘edgelord energy’ into something that can actually make the world better.
I think, as with SBF, there’s a cognitive cluster which draws people towards both EA, and towards certain sorts of actions most of us wish to reflectively disavow. I think it’s reasonable to say: “EA messaging will appeal (though obviously will not only appeal) disproportionately to a certain kind of person. We recognize the downsides in this, and here’s what we’re doing in light of that.”
2.
There are fewer women than men in computer science. But once a woman says “I’m interested in computer science”, this doesn’t give you any grounds to be like “oh, well, but maybe she’s lying, or maybe it’s a joke, or … ”
Why? You have evidence that means you don’t need to rely on such coarse data! You don’t need to rely on population-level averages! To the extent that you do, or continue to treat such population-averages as highly salient in the face of more relevant evidence, I think you should be criticized. I think you should be criticized because it’s a sign that your epistemics have been infected by pernicious stereotypes, which makes you worse at understanding the world, in addition to being more likely to cause harm when interacting in that world.
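To put a rough calculation behind this point: the numbers below, and the helper `posterior`, are my own toy assumptions rather than anything from the original comment, but they show how, once you have direct evidence like a person’s own statement, Bayes’ rule leaves the population-level base rate doing very little work.

```python
# Toy Bayes sketch. All numbers and the helper name `posterior` are my own
# illustrative assumptions, not anything from the original comment.

def posterior(base_rate: float,
              p_say_if_interested: float = 0.9,
              p_say_if_not: float = 0.01) -> float:
    """P(genuinely interested | says so), via Bayes' rule."""
    joint_true = p_say_if_interested * base_rate
    joint_false = p_say_if_not * (1 - base_rate)
    return joint_true / (joint_true + joint_false)

# Even with very different population-level base rates, the person's own
# statement dominates the posterior:
for base in (0.05, 0.20, 0.50):
    print(base, round(posterior(base), 3))
# -> roughly 0.83, 0.96, 0.99: the individual evidence swamps the coarse average.
```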
3.
You should be careful to believe true things, even when they’re inconvenient.
On the epistemic level, I actually think that we’re not in an inconvenient possible world wrt the ‘genetic influence on IQ’, partially because I think certain conceptual discussions of ‘heritability’ are confused, and partially because I think that it’s obviously reasonable to look at the (historically quite recent!) effects of slavery, and conclude “yeaah, I’m not sure I’d expect the data we have to look all that different, conditional on racism causing basically all of the effects we currently see”.
But, fine, suppose I’m in an inconvenient possible world. I could be faced with data that I’d hate to see, and I’d want to maintain epistemic integrity.
One reason I personally found Bostrom’s email sad was that I sensed a missing mood. To support this, here’s an intuition pump that might be helpful: suppose you’re back in the early days of EA, working for $15k in the basement of an estate agent. You’ve sacrificed a lot to do something weird, sometimes you feel a bit on the defensive, and you worry that people aren’t treating you with the seriousness you deserve. Then, someone comes along, says they’ve run some numbers, and tells you that EA is more racist than other cosmopolitan groups and, despite EA’s intention to do good, is actually far more harmful to the world than other comparable groups. Suppose further that we also ran surveys and IQ tests, and found that EA is also more stupid and unattractive than other groups. I wouldn’t say:
Instead, I’d communicate the information, if I thought it was important, in a careful and nuanced way. If I saw someone make the unqualified statement quoted above, I wouldn’t personally wish to entrust that person with promoting my best interests, or with leading an institute directed towards the future of humanity.
I raise this example not because I wish to opine on contemporary Bostrom, based on his email twenty-six years ago. I bring it up because, while (like 𝕮𝖎𝖓𝖊𝖗𝖆) I’m glad that Bostrom didn’t distort his epistemics in the face of social pressure, I think it’s reasonable to think (like Habiba, apologies if this is an unfair paraphrase) that Bostrom didn’t take ownership of his previously missing mood, and didn’t communicate why his subsequent development leads him to now repudiate what he said.
I don’t want to be unnecessarily punitive towards people who do shitty things. That’s not kindness. But I also want to be part of a community that promotes genuinely altruistic standards, including a fair sense of penance. With that in mind, I think it’s healthy for people to say: “we accept that you don’t endorse your earlier remark (Bostrom originally apologized within 24 hours, after all), but we still think your apology misses something important, and we’re a community that wants people who are currently involved to meet certain standards.”