Here’s a VERY uncharitable idea (that I hope will not be removed, because it could be true, and if so might be useful for EAs to think about):
Others have pointed to the rationalist transplant versus EA native divide. I can’t help but feel that this is a big part of the issue we’re seeing here.
I would guess that the average “EA native” is motivated primarily by their desire to do good. They might have strong emotions regarding human happiness and suffering, which might bias them against a letter using prima facie hurtful language. They are also probably a high decoupler and value stuff like epistemic integrity—after all, EA breaks from intuitive morality a lot—but their first impulses are to consider consequences and goodness.
I would guess that the average “rationalist transplant” is motivated primarily by their love of epistemic integrity and the like. They might have a bias in favor of violating social norms, which might bias them in favor of a letter using hurtful language. They probably also value social welfare (they wouldn’t be here if they didn’t), but their first impulses favor finding a norm-breaking truth. It may even be a somewhat deontological impulse: it’s good to challenge social norms in search of truth, independent of whether it creates good consequences.
I believe the EA native impulse is more helpful to the EA cause than the rationalist impulse.
And I worry the rationalist impulse may even be actively harmful if it dilutes EA’s core values. For example, in this post a rationalist transplant describes themself as motivated by status instead of morality. This seems very bad to me.
Again, I recognize that this is a VERY uncharitable view. I hasten to add that there are probably a great many rationalist transplants whose commitment to advancing social welfare is equal to or greater than mine, as an EA native. My argument is about group averages, not individual characteristics.
...
Okay, yes, I found that last sentence really enjoyable to write. Guilty as charged.
This looks like retconning of history. EA and rationalism go way back, and the entire premise of EA is that determining what does the most good through a “rationalist”, or more precisely consequentialist, lens is itself the moral thing to do. There is no conflict of principles.
The quality of discussion on the value of tolerating Bostrom’s (or anyone else’s) opinions on race and IQ is incredibly low, and the discussion is informed by emotion rather than even trivial consequentialist analysis. The failure to approach this issue analytically is a failure both by rationalist and by old-school EA standards.
I’m arguing not for a “conflict of principles” but a conflict of impulses/biases. Anecdotally, I see a bias in rationalist communities toward believing that the truth is probably norm-violating. I worry that this biases some people such that their analysis fails to be sufficiently consequentialist, as you describe.