Perhaps you think this view is worth dismissing because either:
You think humanity wouldn’t do things which are better than what AIs would do, so it’s unimportant. (E.g., because humanity is 99.9% selfish. I’m skeptical; I think this is going to be more like 50% selfish, and the naive billionaire extrapolation is more like 90% selfish.)
From an impartial (non-selfish) perspective, yes, I’m not particularly attached to human economic consumption relative to AI economic consumption. In general, my utilitarian intuitions are such that I don’t have a strong preference for humans over most “default” unaligned AIs, except insofar as this conflicts with my preferences for existing people (including myself, my family, friends etc.).
I’d additionally point out that AIs could be altruistic too. Indeed, it seems plausible to me they’ll be even more altruistic than humans, since the AI training process is likely to deliberately select for altruism, whereas human evolution directly selected for selfishness (at least on the gene level, if not the personal level too).
This is a topic we’ve touched on several times before, and I agree you’re conveying my views — and our disagreement — relatively accurately overall.
You think scope sensitivity (i.e., linear returns to resources) isn’t worth putting a huge amount of weight on.
I also think this, yes. For example, we could consider the following bets:
1. A 99% chance of 1% control over the universe, and a 1% chance of 0% control
2. A 10% chance of 90% control over the universe, and a 90% chance of 0% control
According to a scope-sensitive calculation, the second gamble is better than the first (an expected 9% of the universe versus roughly 1%). Yet, from a personal perspective, I’d prefer (1) under a wide variety of assumptions.
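To make the scope-sensitive comparison explicit, here is a minimal sketch of the expected-value arithmetic for the two bets above (the probabilities are the ones stated; equating utility with fraction of control is the linear-returns assumption under discussion):

```python
# Expected fraction of control over the universe under each gamble,
# assuming utility is linear in the fraction controlled.
gamble_1 = 0.99 * 0.01 + 0.01 * 0.00  # 99% chance of 1% control
gamble_2 = 0.10 * 0.90 + 0.90 * 0.00  # 10% chance of 90% control

print(f"Gamble 1 expected control: {gamble_1:.4f}")  # 0.0099
print(f"Gamble 2 expected control: {gamble_2:.4f}")  # 0.0900
```

On linear returns, gamble 2 dominates by nearly an order of magnitude; the disagreement is over whether that linearity is the right way to value outcomes in the first place.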