Thank you for sharing your thoughts. What do you think of the following scenario?
In world A, the risk of an existential catastrophe is fairly low and most currently existing people are happy.
In world B, the existential risk is slightly lower than in A. In expectation, 100 billion additional people (compared to A) will live in the far future, whose lives are better than those of people today. However, this reduction of risk is so costly that most currently existing people have miserable lives.
Your theory probably favours option B. Is this intended?
Yes, my theory favours B, assuming that those 100 billion additional people have, in expectation, a welfare above the threshold; that the higher X-risk in world A does not, in expectation, decrease the welfare of existing people; and that the absolute value of the negative welfare of a miserable life is less than ten times the positive welfare of currently existing people in world A. In that case, the added welfare of those additional people exceeds the loss of welfare of the current people. In other words: if there are so many extra future people who are so happy, we really should sacrifice a lot in order to generate that outcome.
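To make the arithmetic concrete, here is a minimal sketch of one way the comparison could be formalized. All numbers (the threshold, the welfare levels, the population sizes in billions) are hypothetical illustrations, not part of the theory itself; the only structural assumption is that existing people's welfare counts fully, while additional future people contribute only their welfare above the chosen threshold.

```python
# Illustrative sketch with hypothetical numbers: comparing worlds A and B
# under a threshold theory. Existing people's welfare counts in full;
# additional (merely possible) future people contribute only the part of
# their welfare that exceeds the threshold.

THRESHOLD = 5  # assumed welfare level above which added lives contribute value

def world_value(existing_welfares, future_welfares, threshold=THRESHOLD):
    """Total value: full welfare of existing people plus the
    above-threshold surplus of additional future people."""
    existing_total = sum(existing_welfares)
    future_surplus = sum(max(w - threshold, 0) for w in future_welfares)
    return existing_total + future_surplus

# World A: 10 (billion) existing people, each happy at welfare 8; no extra future people.
value_a = world_value([8] * 10, [])

# World B: the same 10 (billion) existing people now miserable at welfare -20,
# plus 100 (billion) future people at welfare 9 (above the threshold).
value_b = world_value([-20] * 10, [9] * 100)

print(value_a, value_b)  # prints: 80 200
```

With these made-up numbers, B comes out ahead (200 vs. 80) because the above-threshold surplus of the many future people (100 × 4 = 400) outweighs the welfare loss of the current people (80 − (−200) = 280), in line with the condition that the misery of a current life not be too many times larger than their positive welfare in A.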
However, the question is whether we would set the threshold lower than the welfare of those future people. It is possible that most current people are die-hard person-affecting utilitarians who care only about making people happy instead of making happy people. In that case, when facing a choice between worlds A and B, people may democratically decide to set a very high threshold, which means they prefer world A.