Hey, Arden from 80k here -
It’d take more research and thinking to respond to the other points, but I wanted to comment on something quickly: thank you for pointing out that the philosophy PhD career review, and the competitiveness of the field, weren’t sufficiently highlighted on the GPR problem profile. We’ve now added a note about this in the “How to enter” section.
I wrote the career review when I’d first started at 80k, and for me it was just an oversight not to link to it and its points more prominently on the GPR problem profile.
I strongly agree with some parts of this post, in particular:
I think integrity is extremely important, and I like that this post reinforces that.
I think it’s a great point that EA could easily have become very bitterly divided, and appreciating that it hasn’t (despite our various different beliefs), as well as thinking about why, seems like a great exercise. It does seem like we should try to maintain those features.
On the other hand, I disagree with some of it—and thought I’d push back especially given that there isn’t much pushback in the comments here:
I think this is misleading in that I’d guess the strongest current we face is toward greater moderation and pluralism, rather than radicalism. As a community and as individuals, some sources of pressure in a ‘moderation’ direction include:
As individuals, the desire to be liked by and get along with others, including people inside and outside of EA
As individuals who have been raised in a mainstream ethical environment (most of us), a natural pluralism and strong attraction to common-sense morality
The desire to live a normal life full of the normal recreational, familial, and cultural stuff
As a community, wanting to seem less weird to the rest of the world in order to be able to attract and/or work with people who are (currently) unfamiliar with the EA community.
Implicit and explicit pressure from one another against weirdness, so that we don’t embarrass one another or hurt EA’s reputation
Fear of being badly wrong in a way that feels less excusable because it’s not the case that everyone else is also badly wrong in the same way
Whatever else is involved in the apparent phenomenon whereby, as a community gets bigger, it often becomes less distinctive
We do face some sources of pressure away from pluralism and moderation, but they seem fewer and weaker to me:
The desire to seem hardcore, which you mentioned
Something about a desire for interestingness/feeling interesting/specialness (possible overlap with the above)
Selection effects: EA tends to attract people who are really into consistency and following arguments wherever they lead (though I’d guess this is getting weaker over time because of the effects above).
Maybe other things?
I do agree that we should try hard to guard against bad maximising—but I think we also need to make sure we remember what is really important about maximising in the face of pressure not to.
Also, moral and empirical uncertainty strongly favour moderation and pluralism—so I agree that it’s good to have reservations about EA ideas (though primarily in the same way it’s good to have reservations about a lot of ideas). I do not want to think of those ideas as separate from or in tension with the core ideas of EA. I think it would be better to think of them as an important part of the ideas of EA.
Somewhat speculating: I also wonder if the two problems you cite at the top are actually sort of a problem and a solution:
Maybe EA is avoiding the dangers of maximisation (insofar as we are) exactly because we are trying to maximise something we’re confused about. Since we’re confused about what ‘the good’ is, we’re constantly hedging our bets; since we’re unsure how to achieve the good, we favour robust strategies, try a variety of approaches, and try not to alienate people who could help us figure out what the good is and how to make it happen. This uncertainty greatly reduces the risks of maximisation. Analogy: Stuart Russell’s strategy for making AI safe by making it unsure about its goals.