Opposing view: I don’t think these are real concerns. The “Future of Animal Consciousness Research” citation boils down to “what if research in animal cognition is one day suppressed due to being labeled speciesist?”, which is not a realistic worry. The Vox think piece emphasizes that we are in fact efficiently saving lives; I see no critiques there that we haven’t also voiced internally, as a community. I don’t think it’s realistic to expect coverage of us to omit these critiques, regardless of political climate. Judging by a Google search, the only people even discussing that paper are longtermist EAs. And I don’t think AI alignment is politically polarized, except as a special case of “vague resentment towards Silicon Valley elites” in general.
Sensible people on every part of the political spectrum will agree that animal and human EA interventions are good, or at least neutral. The most controversial it gets is that people will disagree with the implication that these are the best ways to do good...and why not? We often disagree on that internally, too. Most people won’t understand AI alignment well enough to have an opinion beyond vague ideas about tech and tech people. Polarization is occurring, but none of this constitutes evidence regarding political polarization’s potential effect on EA.
I think this comment provides a useful perspective. And your second paragraph sounds to me like highlighting that EAs largely “pull the rope sideways”, in Robin Hanson’s terms:
The policy world can be thought of as consisting of a few Tug-O-War “ropes” set up in this high dimensional policy space. [...]
If, however, you actually want to improve policy, if you have a secure enough position to say what you like, and if you can find a relevant audience, then prefer to pull policy ropes sideways. Few will bother to resist such pulls, and since few will have considered such moves, you have a much better chance of identifying a move that improves policy.
(Relevant, more recent Hanson post: To Oppose Polarization, Tug Sideways.)
If I wanted to argue against your perspective, I’d say something like “We indeed don’t have strong evidence of political polarisation’s effect on EA. But it will necessarily be the case that we don’t have such evidence until the patterns we’re worried about have already started, and likely reached a point where it’s much harder to stop them than it would be to prevent them now. So even if we’re in a world where polarisation will be a real problem for EA, your critique could be raised for long enough to delay work on the problem. And it’s therefore worth at least scoping out the problem in advance, even if we must rely on analogies and speculative arguments.”
If I wanted to argue against that, I’d probably say something about the analogies and speculative arguments being relatively weak (even for analogies and speculative arguments). And something about how scoping out this problem with a post like this could itself pose risks of increasing partisanship/polarisation within EA, or of drawing a “culture wars spotlight” towards EA.
Overall, I feel fairly unsure which perspective I’d lean towards. Though I do very tentatively feel that this post received more support than I’d have expected, given the quality of the arguments and analogies made.