Great piece. I really connected with the part about the vastness of the possibility of conscious experience.
That said, I’m inclined to think that Utopia, however weird, would also be, in a certain sense, recognizable — that if we really understood and experienced it, we would see in it the same thing that made us sit bolt upright, long ago, when we first touched love, joy, beauty; that we would feel, in front of the bonfire, the heat of the ember from which it was lit. There would be, I think, a kind of remembering. As Lewis puts it: “The gods are strange to mortal eyes, and yet they are not strange.” Utopia would be weird and alien and incomprehensible, yes; but it would still, I think, be our Utopia; still the Utopia that gives the fullest available expression to what we would actually seek, if we really understood.
It sounds a little bit like you’re saying that utopia would be recognisable to modern-day humans. If so, I’m not sure I agree. Can a great ape have the kind of revelatory experience a human can have when taking in a piece of art? There exists art that can create the relevant experience in a human, but I highly doubt that any great ape, shown every piece of art in existence, would have such an experience. So how can we expect the experiences available in utopia to be recognisable to a modern-day human?
The washing-out hypothesis is a different concern from the one we are discussing here. The idea I have been discussing is not that an intervention might become less significant as time goes on. An intervention could be extremely significant for the far future, or not significant at all; the problem is that predicting the impact of that intervention on the far future is beyond our abilities.
From the article:
Or perhaps the difficulty lies in the high number of causal possibilities the further we reach into the future.
In the article they compare the impact of an intervention (malaria bed nets) on the near future with the impact of an intervention (reducing x-risk from asteroids, global pandemics, or AI) on the far future. As I said earlier, this is not an adequate comparison.
If we compare the positive impact of an intervention on quadrillions of people to a positive impact of an intervention on only billions of people, should we be surprised that the intervention that considers the impact on more people has a greater effect? Put another way, should we be surprised the bed net intervention has a smaller impact when we reduce the time horizon of its impact to the near future?
To this you might say: well, interventions focused on malaria might be subject to this ‘washing out’ effect. But so might interventions for reducing existential risk. For example, the intervention discussed in the paper for reducing extinction-level pandemics is spending money on strengthening the healthcare system, something that could easily be subject to the same ‘washing out’ effect.
Nevertheless, the bed net intervention is only one intervention, and there are other interventions whose effects on the far future are more plausible and which would therefore make for more adequate comparisons (if such comparisons were feasible in the first place), for example, medical research.
If extinction and non-extinction are “attractor states” (which, from what I gather, means states expected to persist for an extremely long time), what exactly isn’t an attractor state?
Let me translate that sentence: focusing on existential risk is more beneficial for the far future than other cause areas because it increases the probability of humans being alive for an extremely long time. But if we are to claim it is more beneficial, we need the relevant comparison, and as argued above, the relevant comparison is lacking.