This looks like retconning of history. EA and rationalism go way back, and the entire premise of EA is that determining what does the most good through a “rationalist”, or more precisely consequentialist, lens is the moral approach. There is no conflict of principles.
The quality of discussion on the value of tolerating Bostrom’s (or anyone else’s) opinions on race and IQ is incredibly low, and the discussion is informed by emotion rather than even trivial consequentialist analysis. The failure to approach this issue analytically is a failure by both Rationalist and old-school EA standards.
I’m arguing not for a “conflict of principles” but for a conflict of impulses/biases. Anecdotally, I see a bias in rationalist communities toward believing that the truth is probably norm-violating. I worry that this biases some people such that their analysis fails to be sufficiently consequentialist, as you describe.