Thanks for sharing your reaction! I actually agree with some of it:
I do think it’s good to retain some skepticism about our ability to understand the relevant constraints and opportunities that civilization would face in millions or billions of years. I’m not 100% confident in the claims from my previous comment.
In particular, I have non-zero credence in views that decouple moral value from physical matter. And on such views it would be very unclear what limits to growth we’re facing (if any).
But if ‘moral value’ is even roughly what I think it is (in particular, something that requires information processing), then decoupling it from physical matter seems about as unlikely as FTL travel being possible: I’m not a physicist, but my rough understanding is that there is only so much computation you can do with a given amount of energy or negentropy (or whatever the relevant quantity is).
It could still turn out that we’re wrong about how information processing relates to physics (relatedly, look what some current longtermists were interested in during their early days ;)), or about how value relates to information processing. But this also seems very unlikely to me.
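To give a rough sense of the kind of bound I have in mind, here is a back-of-the-envelope sketch. It assumes Landauer’s principle (a minimum energy cost of kT·ln 2 per irreversible bit operation) and room temperature; the numbers are purely illustrative, not a claim about what the actual limits on valuable computation are:

```python
import math

# Landauer's principle: erasing one bit costs at least k_B * T * ln(2) joules.
# (Illustrative only; the limits relevant to "value per unit of negentropy"
# depend on temperature, reversible computing, etc.)
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300              # assumed room temperature, K
energy_per_bit = k_B * T * math.log(2)   # ~2.9e-21 J per irreversible bit
bits_per_joule = 1 / energy_per_bit      # ~3.5e20 bit operations per joule

print(f"{energy_per_bit:.2e} J per bit, {bits_per_joule:.2e} bits per joule")
```

The exact figures don’t matter for the argument; the point is just that a finite amount of energy or negentropy implies a finite number of such operations, and hence (on my picture of value) a finite amount of value.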
However, for practical purposes my reaction to these points is interestingly somewhat symmetrical to yours. :)
I think these are considerations that actually raise worries about Pascal’s Mugging. The probability that we’re so wrong about fundamental physics, or that I’m so wrong about what I’d value if only I knew more, seems so small that I’m not sure what to do with it.
There is also the issue that if we were so wrong, I would expect that we’re very wrong about a number of other things as well. I think the modal scenarios on which the above “limits to growth” picture is wrong are not “how we expect the future to look, but with FTL travel” but very weird things like “we’re in a simulation”. Unknown unknowns rather than known unknowns. So my reaction to the possibility of being in such a world is not “let’s prioritize economic growth [or any other specific thing] instead”, but more like “??? I don’t know how to think about this, so to a first approximation I should ignore it”.
Taking a step back, the place I was coming from is this: in this century, everyone might well die (or something similarly bad might happen). And it seems like there are things we can do that significantly help us survive. There are all these reasons why this might not be as significant as it seems (aliens, intelligent life re-evolving on Earth, us being in a simulation, us being super confused about what we’d value if we understood the world better, infinite ethics, etc.), but ultimately I’m going to ask myself: am I sufficiently troubled by these possibilities to risk irrecoverable ruin? And currently I feel fairly comfortable answering this question with “no”.
Overall, this makes me think that disagreement about the limits to growth, and about how confident we can be in them or their significance, is probably not the crux here. Based on the whole discussion so far, I suspect the crux is more likely to be “Can sufficiently many people do sufficiently impactful things to reduce the risk of human extinction or similarly bad outcomes?”. [And at least for you specifically, perhaps “impartial altruism vs. ‘enlightened egoism’” might also play a role.]