This might be less than perfectly charitable, but my subjective impression of the past year or so of EA work is something like:
~Neartermists focusing on global poverty: “Look at our efforts towards eradicating tuberculosis! While you’re here, don’t forget to take a look at what the Lead Exposure Elimination Project has been doing.”
~Neartermists focusing on animal welfare: “Here are the specific policy changes we’ve advocated for that will vastly reduce the amount of suffering necessary for eggs. In terms of more speculative things, we think shrimp might have moral value? Huge implications if true.”
~Longtermists focusing on existential risk: “so incidentally here’s some racist emails of ours”
“also we stole billions of dollars”
“actually there were two separate theft incidents”
“also we haven’t actually done anything about existential risk. you can’t hold that against us though because our plans that didn’t work still had positive EV”
I recognize that there are many longtermists and existential-risk-oriented people who are making genuine efforts to solve important problems, and I don’t want to discount that. But I also think that it’s important to make sure that as effective altruists we are actually doing things that make the world better, and separately, it (uncharitably) feels like some longtermists are doing unethical things and then dragging the rest of the movement down with them.
Here’s a VERY uncharitable idea (that I hope will not be removed because it could be true, and if so might be useful for EAs to think about):
Others have pointed to the rationalist transplant versus EA native divide. I can’t help but feel that this is a big part of the issue we’re seeing here.
I would guess that the average “EA native” is motivated primarily by their desire to do good. They might have strong emotions regarding human happiness and suffering, which might bias them against a letter using prima facie hurtful language. They are also probably a high decoupler and value stuff like epistemic integrity—after all, EA breaks from intuitive morality a lot—but their first impulses are to consider consequences and goodness.
I would guess that the average “rationalist transplant” is motivated primarily by their love of epistemic integrity and the like. They might have a bias in favor of violating social norms, which might bias them in favor of a letter using hurtful language. They probably also value social welfare (they wouldn’t be here if they didn’t), but their first impulses favor finding a norm-breaking truth. It may even be a somewhat deontological impulse: it’s good to challenge social norms in search of truth, independent of whether it creates good consequences.
I believe the EA native impulse is more helpful to the EA cause than the rationalist impulse.
And I worry the rationalist impulse may even be actively harmful if it dilutes EA’s core values. For example, in this post a rationalist transplant describes themself as motivated by status instead of morality. This seems very bad to me.
Again, I recognize that this is a VERY uncharitable view. I hasten to say that there are probably a great many rationalist transplants whose commitment to advancing social welfare is equal to or greater than mine, as an EA native. My argument is about group averages, not individual characteristics.
...
Okay, yes, I found that last sentence really enjoyable to write, guilty as charged
This looks like retconning of history. EA and rationalism go way back, and the entire premise of EA is that determining what does the most good through a “rationalist”, or more precisely consequentialist, lens is itself moral. There is no conflict of principles.
The quality of discussion on the value of tolerating Bostrom’s (or anyone else’s) opinions on race and IQ is incredibly low, and the discussion is informed by emotion rather than even trivial consequentialist analysis. The failure to approach this issue analytically is a failure by both Rationalist and old-school EA standards.
I’m arguing not for a “conflict of principles” but for a conflict of impulses/biases. Anecdotally, in rationalist communities I see a bias toward believing that the truth is probably norm-violating. I worry that this biases some people such that their analysis fails to be sufficiently consequentialist, as you describe.
I’m not aware of the two separate theft incidents (or forgot about one), can you tell me more about them?
SBF
Avraham Eisenberg (with the Mango Markets exploit, which he has now been arrested for)
Thanks; what has Avraham done that makes him a longtermist? Did he, or does he, identify as a longtermist?