It’s kind of funny for me to hear about people arguing that weirdness is a necessary part of EA. To me, EA concepts are so blindingly straightforward (“we should try to do as much good with donations as possible”, “long-term impacts are more important than short-term impacts”, “even things that have a small probability of happening are worth tackling if they are impactful enough”) that you have to actively modify your rhetoric to make them seem weird.
Strongly agree with all of the points you brought up—especially on AI Safety. I was quite skeptical for a while until someone gave me an example of AI risk that didn’t sound like it was exaggerated for effect, to which my immediate reaction was “Yeah, that seems… really scarily plausible”.
It seems like there are certain principles that have a 'soft' and a 'hard' version, and you list a few here. The soft ones are slightly fuzzy concepts that aren't objectionable, and the hard ones are the trickier conclusions you reach if you push them. Taking a couple of your examples:
Soft: We should try to do as much good with donations as possible
Hard: We will sometimes direct time and money away from things that are really quite important, because they're not the most important
Soft: Long-term impacts are more important than short-term impacts
Hard: We may pass up interventions with known and highly visible short-term benefits in favour of those with long-term impacts that may not be immediately obvious
This may seem obvious, but with people who aren't familiar, leading with the soft ones, on the basis that the hard ones will come up soon enough if someone is interested or does their own research, will give a more positive impression than jumping straight to the hard stuff. Yet I see a lot more jumping than seems justified. I can see why, but if you were trying to persuade someone to join your political party, or just to think well of it, would you lead with 'we should invest in public services' or 'you should pay more taxes'?