I think my crux with this argument is “actions are taken by individuals”. This is true, strictly speaking; but when e.g. a member of U.S. Congress votes on a bill, they’re taking an action on behalf of their constituents, and affecting the whole U.S. (and often world) population. I like to ground morality in questions with a political-philosophy flavor, such as: “What is the algorithm that we would like legislators to use to decide which legislation to support?” And as I see it, there’s no way around answering questions like this one when decisions involve significant trade-offs in terms of which people benefit.
And often these trade-offs need to deal with population ethics. Imagine, as a simplified example, that China is about to deploy an AI that has a 50% chance of killing everyone and a 50% chance of creating a flourishing future of many lives, like the one many longtermists like to imagine. The U.S. is considering deploying its own “conservative” AI, which we’re pretty confident is safe, and which will prevent any other AGIs from being built but won’t do much else (so humans might be destined for a future that looks like a moderately improved version of the present). Should the U.S. deploy this AI? It seems like we need to grapple with population ethics to answer this question.
(And so I also disagree with “I can’t imagine a reasonable scenario in which I would ever have the power to choose between such worlds”, insofar as you’ll have an effect on what we choose, either by voting or more directly than that.)
Maybe you’d dispute that this is a plausible scenario? I think that’s a reasonable position, though my example is meant to point at a cluster of scenarios involving AI development. (Abortion policy is a less fanciful example: I think any opinion on the question built on consequentialist grounds needs to either make an empirical claim about counterfactual worlds with different abortion laws, or else wrestle with difficult questions of population ethics.)
“What is the algorithm that we would like legislators to use to decide which legislation to support?”
I would like them to use an algorithm that is not based on some sort of global calculation about future world-states. That leads to paternalism in government and social engineering. Instead, I would like the algorithm to be based on something like protecting rights and preventing people from directly harming each other. Then, within that framework, people have the freedom to improve their own lives and their own world.
Re the China/US scenario: this does seem implausible; why would the US AI prevent almost all future progress, forever? Setting that aside, though, if this scenario did happen, it would be a very tough call. However, I wouldn’t make it on the basis of counting people and adding up happiness. I would make it on the basis of something like the value of progress vs. the value of survival.
Abortion policy is a good example. I don’t see how you can decide this on the basis of counting people. What matters here is the wishes of the parents, the rights of the mother, and your view on whether the fetus has rights.