Thanks for this post. I am always interested to hear why people are sceptical of longtermism.
If I were to try to summarise your view briefly (which is helpful for my response) I would say:
1. You have person-affecting tendencies which make you unconcerned with reducing extinction risks
2. You are suffering-focused
3. You don’t think humanity is very good now, nor that it is likely to be in the future under a sort of ‘business as usual’ path, which makes you unenthusiastic about making the future long or big
4. You don’t think the future will be long (unless we have totalitarianism), which reduces the scope for doing good by focusing on the future
5. You’re sceptical there are lock-in scenarios we can affect within the next few decades, and don’t think there is much point in trying to affect them beyond this time horizon
I’m going to accept points 1 and 2 as your personal values and won’t try to shift you on them. I don’t massively disagree on point 3.
I’m not sure I completely agree on point 4, but I can perhaps accept it as a reasonable view, with a caveat. Even if the future isn’t very long in expectation, surely it is kind of long in expectation? Like probably more than a few hundred years? If this is the case, might it be better to be some sort of “medium-termist” as opposed to a “traditional neartermist”? For example, might it be better to tackle climate change than to give out malaria bednets? I’m not sure if the answer is yes, but it’s something to think about.
Also, as has been mentioned, if we can only have long futures under totalitarianism, which would be terrible, might we want to reduce risks of totalitarianism?
Moving on to point 5 and lock-in scenarios. Firstly, I do realise that the constellation of your views means the only type of x-risk you are likely to care about is s-risks, so I will focus on lock-in events that involve vast amounts of suffering. With that in mind, why aren’t you interested in something like AI alignment? Misaligned AI could lock in vast amounts of suffering. We could also create loads of digital sentience that suffers vastly. And all this could happen this century. We can’t be sure of course, but it does seem reasonable to worry about this given how high the stakes are and the uncertainty over timelines. Do you not agree? There may also be other s-risks with potential lock-ins in the nearish future, but I’d have to read more.
My final question, still on point 5, is: why don’t you think we can affect the probabilities of lock-in events that may happen beyond the next few decades? What about growing the Effective Altruism/longtermist community, or saving/investing money for the future, or improving values? Many EAs think these are credible longtermist interventions, and they could reasonably affect the chances of lock-in (including of the s-risk kind) beyond the next few decades, as they essentially increase the number of thoughtful/good people in the future or the amount of resources such people have at their disposal. Do you disagree?
Thanks for trying to summarise my views! This is helpful for me to see where I got the communication right and where I did not. I’ll edit your summary accordingly where you are off:
1. You have person-affecting tendencies which make you less concerned with reducing extinction risks than longtermists, although you are still concerned about the near-term impacts and put at least some value on the loss of future generations (which also depends on how long/big we can expect the future to be)
2. You are suffering-focused [Edit: I would not have previously described my views that way, but I guess it is an accurate enough description]
3. You don’t think humanity is very good now, nor that it is likely to be in the future under a sort of ‘business as usual’ path, which makes you want to prioritise making the future good over making it long or big
4. You don’t think the future will be long (unless we have totalitarianism), which reduces the scope for doing good by focusing on the future
5. You’re clueless about whether there are lock-in scenarios we can affect within the next few decades, and don’t think there is much point in trying to affect them beyond this time horizon
Thanks for that. To be honest, I would say the inaccuracies are down to sloppiness on my part rather than any lack of clarity in your communication. Having said that, none of your corrections change my view on anything else I said in my original comment.