Nice post!

Realistically, we can only represent future moral agents, who may not adequately consider the interests of future moral patients (such as nonhuman animals or nonbiological beings).
Could you expand on what you mean by the first part of that sentence, and what makes you say that?
It seems true that only moral agents can "vote" in the sort of meaningful sense we typically associate with "voting". But it also seems like, in representing future beings, we're primarily representing their preferences, or something like that. And it seems like this doesn't really require them "voting", and thus could be done for future moral patients in ways that are analogous to how we could do it for future moral agents.

For example, you quote Paul Christiano's suggestion that we could:
Subsidize liquid prediction markets about the results of these surveys in all future years. For example, we can bet about people in 2045's answers to "Did we do too much or too little about climate change in 2015-2025?"
We will get to see market odds on what people in 10, 20, or 30 years will say about our current policy decisions. For example, people arguing against a policy can cite facts like "The market expects that in 20 years we will consider this policy to have been a mistake."
It seems we could analogously subsidize liquid prediction markets for things like the results in 2045, conditional on passing X or Y policy, of whatever our best metrics are for the welfare or preference-satisfaction of animals, or of AIs whose experiences matter but who aren't moral agents. And then people could say things like "The market expects that [proxy] will indicate that [group of moral patients] will be better off in 2045 if we pass [policy X] than if we pass [policy Y]."
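(As a toy illustration of how such conditional markets could be read, here's a minimal sketch. The welfare index, the contract buckets, and all the prices are invented for the example, and nothing here is part of the original post or Christiano's proposal; it just shows the arithmetic of comparing the two conditional markets' implied expectations.)

```python
# Toy sketch: two conditional prediction markets on a hypothetical
# "2045 animal-welfare index", one conditional on policy X passing,
# one conditional on policy Y passing. All numbers are made up.

def implied_expectation(contracts):
    """Implied expected value of the index, given contracts that each pay
    $1 if the index lands in a given bucket, priced between 0 and 1."""
    return sum(bucket_midpoint * price for bucket_midpoint, price in contracts)

# (bucket midpoint of the index, market price of that bucket's contract)
market_if_x_passes = [(20, 0.1), (50, 0.3), (80, 0.6)]  # prices sum to 1
market_if_y_passes = [(20, 0.4), (50, 0.4), (80, 0.2)]

expected_if_x = implied_expectation(market_if_x_passes)  # 65.0
expected_if_y = implied_expectation(market_if_y_passes)  # 44.0

if expected_if_x > expected_if_y:
    print(f"Market expects the proxy to be higher under X ({expected_if_x}) "
          f"than under Y ({expected_if_y}).")
```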
Of course, coming up with such metrics is hard, but that seems like a problem we'll want to fix anyway.
And perhaps, at the least, we could use a metric along the lines of "the views in 2045 of experts or the general public on the preference-satisfaction or welfare of those moral patients". Even if this still boils down to asking for the views of future moral agents, it's at least asking about their beliefs about this other thing that matters, rather than just what they want, so it might give additional and useful information. (I'd imagine this being done in addition to asking what those moral agents want, not instead of that.)
I should mention that I hadn't thought about this issue at all till I read your post, so those statements should all be taken as quite tentative. Relatedly, I don't really have a view on whether we should do anything like that; I'm just suggesting that it seems like we could do it.
Hi Michael, thanks for the comment!

Could you expand on what you mean by the first part of that sentence, and what makes you say that?
I just meant that proposals to represent future non-human animals will likely gain less traction than the idea of representing future humans. But I agree that it would be perfectly possible to do it (as you say). And of course I'd be strongly in favour of having a Parliamentary Committee for all Future Sentient Beings or something like that, but again, that's not politically feasible anytime soon. So we have to find a sweet spot where a proposal is both realistic and would be a significant improvement from our perspective.
It seems we could analogously subsidize liquid prediction markets for things like the results in 2045, conditional on passing X or Y policy, of whatever our best metrics are for the welfare or preference-satisfaction of animals, or of AIs whose experiences matter but who aren't moral agents. And then people could say things like "The market expects that [proxy] will indicate that [group of moral patients] will be better off in 2045 if we pass [policy X] than if we pass [policy Y]."
Of course, coming up with such metrics is hard, but that seems like a problem we'll want to fix anyway.
I agree, and I'd be really excited about such prediction markets! However, perhaps the case of nonhuman animals differs in that it is often quite clear what policies would be better for animals (e.g. better welfare standards), whether it's current or future animals, and the bottleneck is just the lack of political will to do it. (But it would be valuable to know more about which policies would be most important; e.g. perhaps such markets would say that funding cultivated meat research is 10x as important as other reforms.)
By contrast, it seems less clear what we could do now to benefit future moral agents (seeing as they'll be able to decide for themselves what to do), so perhaps there is more of a need for prediction markets.
I just meant that proposals to represent future non-human animals will likely gain less traction than the idea of representing future humans.
Ah, that makes sense, then.
However, perhaps the case of nonhuman animals differs in that it is often quite clear what policies would be better for animals (e.g. better welfare standards), whether it's current or future animals, and the bottleneck is just the lack of political will to do it. [...]
By contrast, it seems less clear what we could do now to benefit future moral agents (seeing as they'll be able to decide for themselves what to do), so perhaps there is more of a need for prediction markets.
This is an interesting point, and I think there's something to it. But I also tentatively think that the distinction might be less sharp than you suggest. (The following is again just quick thoughts.)
Firstly, it seems to me that we should currently have a lot of uncertainty about what would be better for animals. And it also seems that, in any case, much of the public is probably uncertain about a lot of relevant things (even if sufficient evidence to resolve those uncertainties does exist somewhere).
There are indeed some relatively obvious low-hanging fruit, but my guess would be that, for all the really big changes (e.g., phasing out factory farming, improving conditions for wild animals), it would be hard to say for sure what would be net-positive. For example, perhaps factory-farmed animals have net-positive lives, or could have net-positive lives given some changes in conditions, in which case developing clean meat, increasing rates of veganism, etc. could be net negative (from a non-suffering-focused perspective), as they would remove wellbeing from the world.
Of course, even if facing such uncertainties, expected value reasoning might strongly support one course of action. Relatedly, in reality, I'm quite strongly in favour of phasing out factory farming, and I'm personally a vegetarian-going-on-vegan. But I do think there's room for some uncertainty there. And even if there are already arguments and evidence that should resolve that uncertainty for people, it's possible that those arguments and bits of evidence would be more complex or less convincing than something like "In 2045, people/experts/some metric will be really really sure that animals would've been better off if we'd done X than if we'd done Y." (But that's just a hypothesis; I don't know how convincing people would find such judgements-from-the-future.)
Secondly, it seems that there are several key things where it's quite clear what policies would be better for future moral agents, and the bottleneck is just the lack of political will to do it. (Or at least, where what would be better is about as clear as it is for many animal-related things.) E.g., reducing emissions; doing more technical AI safety research; more pandemic preparedness (i.e., I would've said that last year; maybe now things are more where they should be). Perhaps the reason is that these policies relate to issues where future moral agents won't "be able to decide for themselves what to do", or at least where it'd be much harder for them to do X than it is for us to do X.
Perhaps the summary of these ideas is that:
This sort of prediction market might be useful both for generating information and for building political will / changing motivations.
That might apply somewhat similarly both for what future moral agents would want and for what future moral patients would want.
But that relies on getting the necessary support to set up such prediction markets and have people start paying attention, which might be harder in the case of future moral patients, as you note.