Hmm, maybe I’ll try to clarify what I think you’re arguing, as I predict it will be confusing to Caleb and bystanders. The way I would have put this is:
It only preserves option value from your perspective to the extent that you think humanity overall[1] will have a similar perspective as you and will make reasonable choices. Matthew seems to think that humanity will use ~all of the resources on (directly worthless?) economic consumption such that the main source of value (from a longtermist, scope sensitive, utilitarian-ish perspective) will be from the minds of the laborers that produce the goods for this consumption. Thus, there isn’t any option value as almost all the action is coming from indirect value rather than from people trying to produce value.
I disagree strongly with Matthew on this view about where the value will come from in expectation, insofar as that is an accurate interpretation. (I elaborate on why in this comment.) I’m not certain that this is a correct interpretation of Matthew’s views, but it at least seems heavily implied by:
Consequently, in a scenario where AIs are aligned with human preferences, the consciousness of AIs will likely be determined mainly by economic efficiency factors during production, rather than by moral considerations. To put it another way, the key factor influencing whether AIs are conscious in this scenario will be the relative efficiency of creating conscious AIs compared to unconscious ones for producing the goods and services demanded by future people. As these efficiency factors are likely to be similar in both aligned and unaligned scenarios, we are led to the conclusion that, from a total utilitarian standpoint, there is little moral difference between these two outcomes.
[1] Really, whoever controls resources under worlds where “humanity” keeps control.
It only preserves option value from your perspective to the extent that you think humanity will have a similar perspective as you and will make reasonable choices. Matthew seems to think that humanity will use ~all of the resources on economic consumption such that the main source of value (from a longtermist, scope sensitive, utilitarian-ish perspective) will be from the minds of the laborers that produce the goods for this consumption.
I agree with your first sentence as a summary of my view.
The second sentence is also roughly accurate [ETA: see comment below for why I am no longer endorsing this], but I do not consider it to be a complete summary of the argument I gave in the post. I gave additional reasons for thinking that the values of the human species are not special from a total utilitarian perspective. This included the point that humans are largely not utilitarians, and in fact frequently have intuitions that would run counter to the recommendations of utilitarianism if their preferences were empowered. I elaborated substantially on this point in the post.
On second thought, regarding the second sentence, I think I want to take back my endorsement. I don’t necessarily think the main source of value will come from the minds of AIs who labor, although I find this idea plausible depending on the exact scenario. I don’t really think I have a strong opinion about this question, and I didn’t see my argument as resting on it. And so I’d really prefer it not be seen as part of my argument (and I did not generally try to argue this in the post).
Really, my main point was that I don’t actually see much of a difference between AI consumption and human consumption, from a utilitarian perspective. Yet, when thinking about what has moral value in the world, I think focusing on consumption in both cases is generally correct. This includes considerations related to incidental utility that comes as a byproduct of consumption, but the “incidental” part here is not a core part of what I’m arguing.