Personally, I think I had a sort of amplifying feedback loop between longtermism and assigning a “significant” credence to HoH (I’m not actually sure what credence I assign to it, but it probably at least sometimes feels >10%). It went something very roughly like the following:
1. I had a general inclination towards utilitarianism and a large moral circle, which got me into EA.
2. EA introduced me to arguments for longtermism, and to arguments that existential risk this century is high enough to be a global priority (where “high enough” could still be quite low by everyday standards).
3. I started to find those arguments convincing, so I learned more about them and began shifting my focus to x-risk reduction.
4. Learning and thinking more about x-risks made the potential scale and quality of the future, if we avoid those risks, more salient to me, which made longtermism more emotionally resonant. This then fed back into 2 and 3.
5. Learning and thinking more about x-risks and longtermism also exposed me to more arguments against concerns about x-risks. But by then I was positioned to respond to those arguments not with “OK, let’s shift some probability mass towards the best thing to work on being global poverty and/or animal welfare”, but instead with “OK, let’s shift some probability mass towards the best thing to work on being longtermist efforts other than current work on x-risks.” That led me to think more about the various ways longtermism could be acted on, and thus about more ways the future could be excellent or terrible, and thus about more reasons why longtermism feels important.
I’m not saying this is an ideal reasoning process. Some of it arguably looks a little like motivated reasoning or entering something of an echo chamber. But I think that’s roughly the process I went through.