Thanks a lot for sharing this, Denise. Here are some thoughts on your points.
On your point about moral realism, I’m not sure how it can be doing much work in an argument against longtermism specifically, as opposed to against every other moral view. Moral anti-realism implies that longtermism isn’t true, but it equally implies that near-termism isn’t true. The thought seems to be that an argument could only give you reason to change your mind if moral realism were true; but if that were so, and moral realism is false, there would be no point in discussing arguments for and against longtermism at all, because they would have no justificatory force.
Your argument suggests that you find a person-affecting form of utilitarianism most plausible. But it seems to me that we should not reach conclusions about ethics on the basis of what we find intuitively appealing without considering the main arguments for and against these positions. Person-affecting views have lots of very counter-intuitive implications and are actually quite hard to define.
I don’t think it is true that the case for longtermism rests on the total view. As discussed in the Greaves and MacAskill paper, many theories imply longtermism.
Your view that humanity is not super-awesome seems to me compatible with longtermism. The ‘not super-awesome’ critique attacks a premise of one strand of longtermism, the strand especially focused on ensuring human survival. But other forms of longtermism do not rely on that premise. For example, if you don’t think that humanity is super-awesome, then focusing on values change looks like a good bet, as does reducing s-risks.
I’m not sure your point that ‘the future will not be better than today’ hits the mark. More precisely, I think you want to say that ‘today the world is net bad and the future will be as well’. It could be true that the future is no better than today and yet still extremely good. In that case, reducing extinction risks would still have astronomical expected value.
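To make that concrete, here is a minimal toy sketch (all numbers below are my own illustrative assumptions, not figures from your post or from the longtermist literature): even if no future century is better than the present one, the sheer number of centuries at stake can make a small reduction in extinction risk enormously valuable in expectation.

```python
# Toy expected-value sketch; every number here is an illustrative assumption.
value_per_century = 1.0            # assume each surviving century is about as good as today (net positive)
expected_centuries = 1_000_000     # assumed number of centuries the future lasts if we avoid extinction
risk_reduction = 0.001             # assumed 0.1 percentage-point cut in near-term extinction risk

# Expected gain = extra probability of reaching the long future * value of that future.
expected_gain = risk_reduction * expected_centuries * value_per_century
print(expected_gain)               # 1000.0 -> roughly a thousand 'centuries of today' in expectation
```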
Independently of point 5, I again don’t think one needs to hold that the future is good for longtermism to be true. Suffering-focused people are longtermists but don’t think that the future is good. Totalists could also think that the future is not good in expectation. Still, even if the future is bad in expectation, if the variance across possible realisable states of the future is high, then affecting the trajectory of the future is extremely important.
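Here is a minimal numerical sketch of that variance point (the probabilities and values are made-up assumptions, purely for illustration): even when the expected value of the future is negative, shifting a little probability mass from the worst trajectory to the best one produces a large gain in expected value, precisely because the spread between possible futures is so wide.

```python
# Toy model of trajectory change; all outcome values and probabilities are illustrative assumptions.
outcomes = {"very_bad": -100.0, "mediocre": 0.0, "very_good": 100.0}

def expected_value(probs):
    """Expected value of the future under a given probability distribution over trajectories."""
    return sum(probs[k] * outcomes[k] for k in outcomes)

baseline = {"very_bad": 0.45, "mediocre": 0.35, "very_good": 0.20}
print(expected_value(baseline))    # -25.0: the future is bad in expectation

# Suppose our efforts move 5 percentage points of probability from the worst to the best trajectory.
shifted = {"very_bad": 0.40, "mediocre": 0.35, "very_good": 0.25}
print(expected_value(shifted))     # -15.0: a gain of 10, driven entirely by the wide spread of outcomes
```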
On existential security, this is a good and underdiscussed point. I hadn’t thought about it much until recently, but after looking into it I became quite convinced that a period of existential security is very likely, provided that we survive catastrophic risks and avoid value lock-in. My thoughts are not original and owe a lot to discussions with Carl Shulman, Max Daniel, Toby Ord, Anders Sandberg and others.

One point is that biorisk and AI risk are transition risks, not state risks: the coordination problems involved in solving them are so hard that once they are solved, they stay solved. To ensure AI safety, one has to ensure stability in coordination dynamics across millions of subjective years of strategic interplay between advanced AI systems; once we can do that, we have effectively solved all coordination problems. Solving biorisk is also very hard if you think biotech will be strongly democratised; if you solve that and build digital minds to explore the galaxy, then you basically eliminate biorisk. If we go to the stars, we at least avoid earth-bound GCRs. In short, we would have huge technological power and have solved the hardest coordination problems. If you buy limits-to-growth arguments, you might also think that tech progress will slow down, and so catastrophic risks, which are driven by tech progress, will fall.

All of this suggests that, conditional on surviving the time of perils, the probability that the future is extremely long is >>10%. So the probability is not Pascalian.
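As a rough sketch of why a probability of that size is not Pascalian (again, the specific numbers are assumptions I am plugging in for illustration, not estimates from the discussions mentioned above): a 10% chance of an extremely long future already dominates the expected length of the future, in a way that a one-in-a-trillion Pascalian probability would not.

```python
# Toy calculation of expected future length conditional on surviving the time of perils.
# All inputs are illustrative assumptions.
p_long_future = 0.10               # assumed lower bound on P(extremely long future | survive time of perils)
long_centuries = 1_000_000         # assumed length of an 'extremely long' future, in centuries
short_centuries = 10               # assumed length of the future otherwise

expected_length = p_long_future * long_centuries + (1 - p_long_future) * short_centuries
print(expected_length)             # 100009.0 centuries in expectation, driven by a 10% chance rather than a Pascalian one
```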
You seem to suggest that we cannot influence the long-term future with the exception of what you call ‘lock-in events’ like extinction and permanent collapse, which are attractor states that could lock in a state of affairs in the next 100 years. I suppose another one would be AI-enabled permanent lock-in of bad values. But these are the main things that longtermists are working on, so I don’t see how this could be a criticism of longtermism.
I don’t think the inference from ‘I don’t know how to influence the future’ to ‘donate to AMF’ follows. If you buy these cluelessness arguments (I don’t personally), then two obvious things to do would be to prepare for when a potential lock-in event rears its head: you could invest in stocks and give later, or you could grow the movement so that it is ready to deal with a lock-in event. If you are very uncertain about how to affect the long-term future, but accept that the future is potentially extremely valuable, then this is the strongest argument ever for ‘more research needed’. If a big asteroid were heading our way and we felt very unsure about how to stop it, but some smart people thought they could, the correct answer seems to me to be “let’s put tonnes of resources into figuring out how to stop this asteroid”, not “let’s donate to AMF”.