Are we in an Original Position regarding the interests of our descendants?
If you:
had to make a decision about the basic structure of a society where your distant descendants will live (in 200 or 2000 years), and
only care about their welfare, and
don’t know (almost) anything about who they will be, how many of them there will be, how their society will be structured, etc.,
Then you are under some sort of veil of ignorance, in a situation quite similar to Rawls’s Original Position… with one major difference: it’s not an abstract thought experiment for ideal political theory.
What led me to this is the suspicion that the welfare of my descendants will depend more on the basic structure of their society than on any amount of resources I try to transfer to them – but I’m not sure about that: there are some examples of great wealth being successfully transferred across many generations.
I’m not sure Rawls’s theory of justice would follow from this, but it’s quite possible: when I have the welfare of a subset of unidentified future individuals in mind, I feel tempted to prefer that their society abide by something like his two principles of justice. Following Harsanyi, it’s also tempting to prefer something like average utilitarianism (which, in this context, converges with sum-utilitarianism, because we are abstracting away variation in population size).
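For concreteness, here is a minimal sketch of the formal contrast between the two criteria, using standard textbook formulations rather than anything from Rawls’s or Harsanyi’s own texts; the welfare levels u_i and the fixed population size n are illustrative assumptions:

```latex
% Fixed population of n future individuals with welfare levels u_1, ..., u_n.
% A Rawls-style maximin criterion ranks basic structures by the welfare of
% the worst-off position:
\[
  W_{\mathrm{maximin}} \;=\; \min_{1 \le i \le n} u_i
\]
% Harsanyi's veil: with equal probability 1/n of occupying each position,
% a rational chooser maximizes expected welfare, i.e. average utility:
\[
  W_{\mathrm{avg}} \;=\; \frac{1}{n}\sum_{i=1}^{n} u_i
\]
% With n held fixed (population variation abstracted away), the average and
% the sum differ only by the constant factor 1/n, so they rank all options
% identically:
\[
  \arg\max W_{\mathrm{avg}} \;=\; \arg\max \sum_{i=1}^{n} u_i
\]
```

The two criteria diverge only when raising the average requires lowering the welfare of the worst-off position.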
After thinking this through, I didn’t find any of my major philosophical opinions changing, but I was surprised that I had never come across this argument in the literature.
Maybe that’s because it’s not such a good way of reasoning about future generations: there are more effective ways of improving future welfare than fostering political liberalism. But I’d guess this is the sort of reasoning we’d expect from something like a reciprocity-based theory of longtermism.
Two researchers at the RAND Corporation recently argued for a related idea. From our Future Matters summary:
Douglas Ligor and Luke Matthews’s Outer space and the veil of ignorance proposes a framework for thinking about space regulation. The authors credit John Rawls with an idea actually first developed by the utilitarian economist John Harsanyi: that to decide what rules should govern society, we must ask what each member would prefer if they ignored in advance their own position in it. The authors then note that, when it comes to space governance, humanity is currently behind a de facto veil of ignorance. As they write, “we still do not know who will shoulder the burden to clean up our space debris, or which nation or company will be the first to capitalize on mining extraterrestrial resources.” Since the passage of time will gradually lift this veil, and reveal which nations benefit from which rules, the authors argue that this is a unique time for the international community to agree on binding rules for space governance.