For people working on AI safety and other existential risks, what emotionally motivates you to work? What gets you up in the morning?
Here are some possibilities I’ve thought of:
1. Curiosity-driven, with curiosity directed towards endorsed sub-areas (e.g. likes coding, math, problem-solving)
2. Deeply internalized consequences (harnessing fear of death, or deeply wanting positive worlds)
3. Social motivations (wanting status or success, likes working as part of a collective, other)
Follow-up: Say I’m afraid of internalizing responsibility for working on important, large problems. Assuming you’ve solved this, what kind of narratives do you have or what strategies do you use?
My colleague Michelle wrote some related thoughts here.
https://forum.effectivealtruism.org/posts/3k4H3cyiHooTyLY6p/why-i-find-longtermism-hard-and-what-keeps-me-motivated
Personally:
5% internalized consequences
45% intellectual curiosity
50% status
I’m sort of joking. Really, I think the point is that “motivation” is at least a couple of different things. In the grand scheme of things, I tell myself “this research is important”. Then day to day, I think “I’ve decided to do this research, so now I should get to work”. Then every once in a while, I become very unmotivated, and I think about what’s actually at stake here, and also about the fact that some Very Important People I Respect tell me this is important.
Good question. I think I’m maybe a quarter of the way toward being internally/emotionally driven to do what I can to prevent the worst possible AI failures. But regarding this:
I’ve always thought it would be great if my emotional drives lined up more with the goals I’ve deliberately reasoned to be the most important. It would feel more coherent, give me more drive and focus on what matters, and down-regulate things like some social motivations that I don’t fully endorse. One might worry that this would be overwhelming, but so far that hasn’t been the case for me. My spontaneous impression is that humans mostly cope okay with great responsibilities.
(btw, I really enjoyed reading your PhD retrospective, nice to see your name pop up here! I’m doing a PhD in CogSci and could relate to a lot of it)
I’m mostly concerned with S-risks, i.e. risks of astronomical suffering. I view working on them as a more rational form of Pascal’s Wager, and as a form of extreme longtermist self-interest: since there is still a >0% chance that some form of afterlife or a bad form of quantum immortality exists, raising awareness of S-risks and donating to S-risk reduction organizations like the Center on Long-Term Risk and the Center for Reducing Suffering likely reduces my own risk of going to “hell”. See The Dilemma of Worse Than Death Scenarios.
I don’t directly work on x-risks at the moment, apart from spending some of my time sharing resources, information, and career advice related to them with people in EA Philippines.
But if I did, what would motivate me is the importance of making sure humanity doesn’t go extinct or end up in a dystopia. Also, all the work done by other EAs on other causes would be for naught if we go extinct or lock in a bad future this century.
I’ve seen this argument elsewhere, and still don’t find it convincing. “All” seems hyperbolic. Much longtermist work to improve the quality of posthumans’ lives does become irrelevant if there won’t be any posthumans. But animal welfare, poverty reduction, mental health, and probably some other causes I’m forgetting will still have made an important (if admittedly smaller-scale) difference by relieving their beneficiaries’ suffering.
What is your opinion of EFILism? It’s basically an extremist form of antinatalism that argues that the total extinction of the biosphere would be a good thing, since if all life went extinct, suffering would no longer exist.
I don’t have the time to read into it, but I think that the total extinction of the biosphere would very likely not be a good thing.