I strongly agree with some parts of this post, in particular:
I think integrity is extremely important, and I like that this post reinforces that.
I think it’s a great point that EA seems like it could be very bitterly divided indeed, and appreciating that we haven’t become so, as well as thinking about why (despite our various different beliefs), seems like a great exercise. It does seem like we should try to maintain those features.
On the other hand, I disagree with some of it—and thought I’d push back especially given that there isn’t much pushback in the comments here:
I think it’s a bad idea to embrace the core ideas of EA without limits or reservations; we as EAs need to constantly inject pluralism and moderation. That’s a deep challenge for a community to have—a constant current that we need to swim against.
I think this is misleading in that I’d guess the strongest current we face is toward greater moderation and pluralism, rather than radicalism. As a community and as individuals, some sources of pressure in a ‘moderation’ direction include:
As individuals, the desire to be liked by and get along with others, including people inside and outside of EA
As individuals who have been raised in a mainstream ethical environment (most of us), a natural pluralism and a strong attraction to common-sense morality
The desire to live a normal life full of the normal recreational, familial, and cultural stuff
As a community, wanting to seem less weird to the rest of the world in order to be able to attract and/or work with people who are (currently) unfamiliar with the EA community.
Implicit and explicit pressure from one another against weirdness, so that we don’t embarrass one another or hurt EA’s reputation
Fear of being badly wrong in a way that feels less excusable because everyone else isn’t also badly wrong in the same way
Whatever else is involved in the apparent phenomenon whereby, as a community gets bigger, it often becomes less distinctive
We do face some sources of pressure away from pluralism and moderation, but they seem fewer and weaker to me:
The desire to seem hardcore that you mentioned
Something like a desire for interestingness/feeling interesting/special (possibly overlapping with the above)
Selection effects—EA tends to attract people who are really into consistency and following arguments wherever they lead (though I’d guess this is getting weaker over time because of the effects above).
Maybe other things?
I do agree that we should try hard to guard against bad maximising—but I think we also need to make sure we remember what is really important about maximising in the face of pressure not to.
Also, moral and empirical uncertainty strongly favour moderation and pluralism—so I agree that it’s good to have reservations about EA ideas (though primarily in the same way it’s good to have reservations about a lot of ideas). I do not want to think of those ideas as separate from or in tension with the core ideas of EA. I think it would be better to think of them as an important part of the ideas of EA.
Somewhat speculating: I also wonder if the two problems you cite at the top are actually sort of a problem and a solution:
If you’re maximizing X, you’re asking for trouble by default. You risk breaking/downplaying/shortchanging lots of things that aren’t X, which may be important in ways you’re not seeing. Maximizing X conceptually means putting everything else aside for X—a terrible idea unless you’re really sure you have the right X. (This idea vaguely echoes some concerns about AI alignment, e.g., powerfully maximizing not-exactly-the-right-thing is something of a worst-case event.)
EA is about maximizing how much good we do. What does that mean? None of us really knows. EA is about maximizing a property of the world that we’re conceptually confused about, can’t reliably define or measure, and have massive disagreements about even within EA. By default, that seems like a recipe for trouble.
Maybe EA is avoiding the dangers of maximisation (insofar as we are) exactly because we are trying to maximize something we’re confused about. Since we’re confused about what ‘the good’ is, we’re constantly hedging our bets; since we’re unsure how to achieve the good, we go for robust strategies, try a variety of approaches, and try not to alienate people who can help us figure out what the good is and how to make it happen. This uncertainty greatly reduces the risks of maximisation. Analogy: Stuart Russell’s strategy of making AI safe by making it uncertain about its goals.
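Here’s a toy sketch of that analogy (in Python, with entirely made-up actions and utility numbers; none of this is from the post). An agent certain of one objective goes all-in on it, while an agent that spreads its credence across several candidate objectives tends to pick the action that does acceptably well under all of them:

```python
# Hypothetical actions and utilities, purely for illustration.
# Each candidate objective scores each action differently.
candidate_objectives = {
    "total_welfare": {"all_in_on_X": 10, "diversified": 6, "do_nothing": 0},
    "fairness":      {"all_in_on_X": -8, "diversified": 5, "do_nothing": 0},
    "common_sense":  {"all_in_on_X": -5, "diversified": 4, "do_nothing": 1},
}

def best_action(weights):
    """Maximise expected utility under `weights`, a probability
    distribution over which candidate objective is the right one."""
    actions = next(iter(candidate_objectives.values()))  # action names
    return max(
        actions,
        key=lambda a: sum(w * candidate_objectives[obj][a]
                          for obj, w in weights.items()),
    )

# Certain of the objective: maximise it single-mindedly.
print(best_action({"total_welfare": 1.0}))  # -> "all_in_on_X"

# Unsure what 'the good' is: the robust, hedged option wins.
print(best_action({"total_welfare": 1/3,
                   "fairness": 1/3,
                   "common_sense": 1/3}))   # -> "diversified"
```

The point isn’t the numbers, which I made up; it’s that the same maximising machinery, fed genuine uncertainty about its objective, stops going all-in and starts hedging.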