Reason 1 [for disagreeing with longtermism]: You don’t believe that very large numbers of people in the far future add up to a very big moral priority. For instance, you may take a Rawlsian view, believing that we should always focus on helping the worst-off.
It’s not clear that, of all the people that will ever exist, the worst-off among them are currently alive. True, the future will likely be on average better than the present. But since the future potentially contains vastly more people, it’s also more likely to contain the worst-off people. Moreover, work on s-risks by Tomasik, Gloor, Baumann, and others provides additional reason for expecting such people (using ‘people’ in a broad sense) to be located in the future.
Also, if you’re trying to list the reasons exhaustively, it would be better to add the qualifier “possible people”, since there are different kinds of person-affecting views someone could hold.
Thanks, I have adjusted it to show the additional assumption required.