It only preserves option value from your perspective to the extent that you think humanity will have a perspective similar to yours and will make reasonable choices. Matthew seems to think that humanity will use ~all of the resources on economic consumption, such that the main source of value (from a longtermist, scope-sensitive, utilitarian-ish perspective) will be from the minds of the laborers that produce the goods for this consumption.
I agree with your first sentence as a summary of my view.
The second sentence is also roughly accurate [ETA: see comment below for why I no longer endorse this], but I do not consider it a complete summary of the argument I gave in the post. I gave additional reasons for thinking that the values of the human species are not special from a total utilitarian perspective, including the point that humans are largely not utilitarians, and in fact frequently have intuitions that would cut against the recommendations of utilitarianism if their preferences were empowered. I elaborated substantially on this point in the post.
On second thought, regarding the second sentence, I want to take back my endorsement. I don’t necessarily think the main source of value will come from the minds of AIs who labor, although I find this idea plausible depending on the exact scenario. I don’t think I have a strong opinion on this question, and my argument did not rest on it. So I’d prefer it not be seen as part of my argument (and I did not generally try to argue for it in the post).
Really, my main point was that I don’t see much of a difference between AI consumption and human consumption, from a utilitarian perspective. And when thinking about what has moral value in the world, I think focusing on consumption in both cases is generally correct. This includes considerations related to incidental utility that comes as a byproduct of consumption, but the “incidental” part here is not a core part of what I’m arguing.