My concern with working to achieve universal eudaimonia is that, technologically, we are quite far from being able to achieve it. There is a possibility that if we focus solely on that goal, society in general will suffer, perhaps to the point where we collapse and are never able to achieve eudaimonia at all. We might also miss the chance to stop some x-risk because we put too many resources into eudaimonia. I also believe that by helping the world overcome poverty and working on short-term technology boosts, we get closer to a point where working on universal eudaimonia becomes more achievable.
I don’t have any figures or thought experiments to back this up; these are just my initial concerns. That’s not to say I don’t think the concept should be worked on. I met David Pearce this year, and he convinced me that it should be the ‘final goal’ of humanity and EA.
That’s certainly something worth worrying about. But there’s a parallel worry: even if we successfully eliminate x-risks, we still need to ensure that the far future has lots of happiness and minimal suffering, and this might not happen by default. It’s not clear which is more important. I lean a little toward x-risk reduction, but it’s hard to say.