Thanks for sharing the link Josh, I enjoyed reading the post (and I agree with Nathan, it definitely seems worth sharing here in its entirety). I think it's a great example of good-faith, non-deferential criticism, and it's very clearly written :)
As for areas of slight disagreement, where I would welcome your perspective and/or corrections:
1: Future people count, but less than present people.
The first time I read the post I thought you were actually saying that future people just matter less intrinsically, which I think is about as plausible as finding people in a certain geographic region more morally worthwhile than others simply because of where they live. You point out in your 4th footnote that this is not what you believe, and I think that footnote should probably be part of the main post.
As for differing obligations, I do agree that from an individual perspective we have special moral obligations to those close to us, but I don't think this extends very far beyond that circle, and other differences have to be justified by people in the present being easier to causally affect (your counterargument 3) rather than by applying some discount rate over obligations in future years. Maybe these amount to the same thing: that closeness of causal connection is what you mean by the "connectedness" of your network. Otherwise you might open yourself up to some repugnant implications.[1]
2: There might not be that many future people.
I also think that the Thorstad paper you link here is an important one for longtermists to deal with (the blog posts you link to are also really good). As you point out, though, it does have weird, counter-intuitive conclusions. For example, in Thorstad's simple model, the less likely you think x-risk is, the higher the value you get from reducing it even further! As for your second point about how much value there is in the future, I think for the sake of clarity this probably deserves its own sub-section?[2] But in principle, even if the future is not a source of great value, as long as it is large enough this effect should cancel out on a totalist utilitarian axiology.[3]
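To make that point concrete, here is a minimal numerical sketch of what I understand the simple model to be (my own toy construction for illustration, not Thorstad's exact formulation or notation): a constant extinction risk per century, one unit of value for each century humanity survives, and an intervention that shaves the same absolute amount off this century's risk.

```python
# Toy model (my construction): extinction risk r each century, each century
# humanity survives is worth v = 1, so the expected value of the future is the
# geometric sum (1 - r) / r. An intervention that removes d from *this*
# century's risk only adds d / r in expectation.

def expected_future_value(risk_this_century: float, risk_per_century: float) -> float:
    """P(survive this century) * (1 + expected further centuries survived)."""
    r = risk_per_century
    return (1 - risk_this_century) * (1 + (1 - r) / r)

for r in (0.20, 0.02):  # pessimistic vs optimistic background risk
    d = 0.01            # the same 1-percentage-point reduction this century
    gain = expected_future_value(r - d, r) - expected_future_value(r, r)
    print(f"background risk {r:.0%}: gain = {gain:.2f} centuries of value")
```

On this toy version, the same one-percentage-point reduction is worth 0.05 centuries of value when background risk is 20%, but 0.50 centuries when it is 2%: the lower you think the background risk is, the more the same reduction is worth, which is the counter-intuitive pattern I had in mind.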
3: We might not be able to help future people much.
So I think this is actually where the rubber really hits the road in terms of objections to longtermism (along with your later point about trade-offs). I think one of the weaker parts of What We Owe the Future is showing concretely how longtermism differs from other philosophies of doing good in terms of being action-guiding. As you point out, in practice x-risk dominates here because, by definition, its effects are permanent and so will definitely affect the future.
I do think Thorstad overstates the strength of the "regression to the inscrutable" claim. We have to make choices and trade-offs even in the absence of clear and dispositive empirical evidence, though I do think that EA should act with humility in the face of Deep Uncertainty, and actions should be more exploratory than committed.[4]
4: Remember the trade-offs
I don't think I disagree much with you here on trade-offs being important, and the counterarguments you raise mean that the value longtermists place on the future should be interrogated more. I do want to take slight issue with:
Longtermism commits us to using these resources to help far-off humans, necessarily at the cost of those alive today.
Which reads like a knock-down case against longtermism. But such objections can be raised against any moral system, since they all commit us to some moral trade-off. The moral-network framework you introduce in section 1 commits you to using resources to help those close to you, necessarily at the expense of those further away in your network.
But even granting this framing, I also think the conclusion doesn't follow. To be a longtermist, I think you only really need to accept the first of MacAskill's premises. Someone who thinks that the future has great moral value, but who doesn't see a clear causal way to ensure or improve that value, would be justified in being a longtermist without being committed to ignoring the plight of those in the present for the sake of a possible future.
Overall, I agreed with a fair amount of what you wrote. As I wrote this comment up, I came to think that the best frame for longtermism actually isn't as a brand-new moral theory, but as a "default" assumption. In your moral network, it'd be the default setting where all weights and connections are set equally and impartially across space and time. One could disagree with the particular weights, but one could also disagree about what we can causally affect. Under this kind of framing, I think you have a lot more in common with "longtermism" than it might seem at the moment, but I'd welcome your thoughts.
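For what it's worth, here is a minimal sketch of how I am picturing that "default setting" (entirely my own toy construction, not anything from your post or from MacAskill; the function and numbers are hypothetical): moral weight and causal reach are separate dials, and the longtermist "default" just sets the weight dial flat across time while leaving the causal-reach dial open to dispute.

```python
# Toy framing (my own construction): the expected importance of helping someone
# is (moral weight) x (chance we can actually causally affect them). The
# "longtermist default" keeps the weight flat across time; a time-discounted
# view shrinks it; scepticism about causal reach is a separate dial.

def importance(years_from_now: float,
               annual_discount: float,   # 0.0 = impartial "default" weights
               causal_reach: float) -> float:
    """Moral weight (discounted per year) times probability of affecting them."""
    weight = (1 - annual_discount) ** years_from_now
    return weight * causal_reach

# The same person 500 years out, under three different disagreements:
print("impartial default:     ", importance(500, annual_discount=0.0,  causal_reach=0.5))
print("1%/year discount:      ", importance(500, annual_discount=0.01, causal_reach=0.5))
print("doubt the causal reach:", importance(500, annual_discount=0.0,  causal_reach=0.001))
```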
[1] Though this wouldn't necessarily be disqualifying for your theory, as it seems to affect every theory of population ethics!
[2] It is an interesting point though, and I agree it probably needs more concrete work from longtermists (or more publicity for the work where that case is already made!)
[3] For the record, I am not a (naïve) totalist utilitarian.
[4] There is a separate object-level discussion about whether AI/Bio-risks actually are inscrutable, but I don't particularly want to get into that debate here!
Thank you for the feedback on both the arguments and the writing (something I am aiming to improve through this writing). Sorry for being slow to respond, it's been a busy few weeks!
I don't think there's actually any disagreement here, except on this point:
But even granting this framing, I also think the conclusion doesn't follow. To be a longtermist, I think you only really need to accept the first of MacAskill's premises. Someone who thinks that the future has great moral value, but who doesn't see a clear causal way to ensure or improve that value, would be justified in being a longtermist without being committed to ignoring the plight of those in the present for the sake of a possible future.
I disagree, at least taking the MacAskill definition of longtermism as "the view that we should be doing much more to protect future generations". This is not just a moral conclusion but also a conclusion about how we should use marginal resources. If you do not think there is a causal way to affect future people, I think you must reject that conclusion.
However, I think sometimes longtermism is used to mean "we should value future people roughly as much as we value current people". Under this definition, I agree with you.