For the far future to be a significant factor in moral decisions, it must lead to different decisions than those we would make by considering the near future alone. If the same decisions result in both cases, there is no need to consider the far future.
Given the vastness of the future compared to the present, focusing on the far future risks harming the present. Resources spent on the far future could instead be used to address immediate problems such as health crises, hunger, and conflict.
This seems a very strange view. If we knew the future would not last long—perhaps a black hole would swallow up humanity in 200 years—then the future would not be very vast, it would have less moral weight, and aiding it would be less demanding. Would this really make longtermism more palatable to its critics?
In the article, the authors are somewhat ambiguous about what ‘near future’ means. At one point they do refer to the present and the next few generations as their intended timeframe. But your point raises an interesting question for the longtermists: How long does the future need to be in order for future people to have moral weight?
We might want to qualify that slightly, though: the quantity of interest is not necessarily the number of years into the future but how many people (or beings) the future will contain. The question then becomes: How many people need to exist in the future in order for their lives to have moral weight?
Even if we knew a black hole would swallow humanity in 200 years, on some estimates there could still be ~15 billion human lives to come. If we knew that the future held only 15 billion more lives, would that justify not focusing on existential risks?
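For a rough sense of where a figure like that could come from (my own back-of-envelope, not taken from the article): if average global births over the next two centuries ran at roughly 75 million per year—a hypothetical figure, well below today's ~130 million, reflecting projected fertility decline—then 75 million births/year × 200 years ≈ 15 billion lives to come.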