[...]I noticed that most of my belief in AI risk was caused by biased thinking: self-aggrandizing motivated reasoning, misleading language, and anchoring on unjustified probability estimates.
Thank you so much for your reflection and honesty on this. Although I think concerns about the safe development of AI are very legitimate, I have long been concerned that the speculative, sci-fi nature of AI x-risks gives cover to a lot of bias. More cynically, I think grappling with AI risk and thinking about it from a longtermist perspective is a great way to show off how smart and abstract you are while (theoretically) also having the most moral impact possible.
I just think identifying with x-risk and hyperastronomical estimates of utility/disutility is meeting a suspicious number of emotional and intellectual needs. If we could see the impact of our actions to mitigate AI risk today, motivated reasoning might not be such a problem. But longtermist issues are exactly those where we can't afford self-serving biases, because their effects won't necessarily show. I'm really glad to see someone speaking up about this, particularly from their own experience.
I guess it depends on how narrowly you define EA. I think of evaluating states of pleasure/suffering, affective forecasting, and decision-making as common EA topics. My argument is related to a hedonistic utilitarian argument against preference utilitarianism, but I don’t often hear people taking on shortcomings of the remembering self the way they do preferences. Usually the remembering self is held out as a superior perspective on life because it’s out of the moment, when I argue it’s just as selfish as the experiencing self. In fact, it’s just another kind of experiencing self that wants different things.
Others have said this, but you're getting at whether the movement should prioritize growth and easy assimilation or maintain high fidelity to its values. So far most of the movement's core favors the high-fidelity model. Personally, I agree, because EA won't be as effective, and could even be destructive, if the movement is not anchored in its values. But we miss out on people who don't have that somewhat extreme, values-driven bent, which is a terrible loss for EA as a community.
Even at the level of organizing at Harvard, I feel torn between seeing our club's value as spreading some good values on campus (more watered-down outreach) or incubating the next generation of high-powered EAs (a few intense, targeted waves of outreach). I worry that we unintentionally select for a lot of baggage when we select the intense, highly values-driven people, and that the more the entire movement does that, the more blind we are to it.
Dealing with my own mental health issues has convinced me of just how much unhappiness it causes and just how complicated it can be to address. It's not appealing like the ~$3,500 to AMF = 1 counterfactual life equation is. I think the feeling that there aren't good intervention options, rather than longtermism vs. presentism, is the reason mental health doesn't rank higher as a cause in EA, at least not for donating. I'm kind of presentist, and personally I think mental health is up there for most important cause, but I have just never been confident enough in a mental health charity to donate to it. (I'm checking out StrongMinds, though—thanks for the suggestion.) I would donate to the CBT apps, if they were charities. They are the only intervention in this space that is scalable and tractable enough to really count for EA, imo. Or if someone started a campaign to add trace lithium to the water, I would help. Other than that, I think we just need to develop more scalable interventions, which is not exactly tractable!
I would love to see EA take on the challenge of incubating mental health charities and vaunt mental health as a cause more. Thanks for your role in promoting it :)
Do you know why 300 words per minute was chosen? I think I’m below that and I know I’m not a slow reader. I feel like estimates that help you decide how to spend your time should be a little more generous. (But maybe I *am* a slow reader, idk.)
Of all the academic, activist, and Silicon Valley-type communities I belong to, EA is the most inclusive to (US) conservative ideas. It's not a very high compliment, but still. The strong free-market bent of EA takes most of its members away from mainstream liberal economic policies, e.g. toward favoring globalization (though this issue keeps switching sides). And people tend not to feel any shame about supporting a "conservative" policy if they arrived at it through reason and evidence.
What I do notice is contempt for the culture of American conservatism, beyond even equating it with racism and sexism. Aesthetic horror at the use of guns and big trucks, derision at the idea that anyone could believe Fundamentalist Christianity, considering suburban or rural family-centered life to be lame, condescendingly asserting that the majority of conservatives vote outside of their interests (read: because they are too dumb and driven by fear and hatred to see that we know what’s best for them), everything to do with Trump...
I think the cultural stuff is a big blind spot in EA and the most significant way in which we lack needed diversity, but I'm very hopeful that with essays like this, EAs will be open to looking at conservatives differently. I hope so, because, stripped of culture-war baggage, we could use their perspective.
Thank you :)
Thanks :) Haha, yeah, when I hit the 5 post limit I realized maybe I shouldn’t be treating this like an archive… It honestly didn’t occur to me that the posts would spam people if I just got ’em up as quickly as possible! Still figuring out how the forum works, haha.