I generally agree that we should be more concerned about this. In particular, when people happily endorse Shut Up and Multiply sentiment but reject this consideration, I find their reasoning suspect.
A more extreme version of this is that, given the massively greater efficiency with which a digital consciousness could convert matter and energy to utilons (IIRC naively about 3 orders of magnitude according to Bostrom, before any increase from greater coordination), on strict expected value reasoning you have to be extremely confident that this won’t happen—or at least have a much stronger rebuttal than ‘AI won’t necessarily be conscious’.
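To make the expected-value point concrete, here is a minimal sketch with made-up numbers. The only figure taken from the comment above is the roughly 1,000x (three orders of magnitude) efficiency multiplier; the probability and the baseline are hypothetical placeholders, not anyone's actual estimates.

```python
# Illustrative only: toy expected-value comparison with made-up numbers.
# Assumes digital minds convert resources to utilons ~1,000x more
# efficiently than biological humans (the rough Bostrom-style figure
# cited above); every other number here is hypothetical.

human_utilons_per_unit_resource = 1.0
digital_efficiency_multiplier = 1_000  # ~3 orders of magnitude

p_digital_minds_conscious = 0.1  # deliberately low toy assumption

ev_humans_only = human_utilons_per_unit_resource
ev_digital_future = (
    p_digital_minds_conscious
    * digital_efficiency_multiplier
    * human_utilons_per_unit_resource
)

print(f"EV (humans only):    {ev_humans_only:.1f}")
print(f"EV (digital, p=0.1): {ev_digital_future:.1f}")
# Even at a 10% probability of digital minds being conscious, the
# digital scenario's expected utilons per unit resource are ~100x the
# human baseline, which is why 'AI won't necessarily be conscious' is
# a weak rebuttal under strict expected-value reasoning.
```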
Separately, I think there might be a case for accelerationism even if you think it increases the risk of AI takeover and that AI takeover is bad, on the grounds that in many scenarios advancing faster might still increase the probability of human descendants getting through the time of perils before some other threat destroys us (every year we remain in our current state is another year in which we run the risk of, for example, a global nuclear war or civilisation-ending pandemic).
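As a rough illustration of that trade-off, here is a toy survival-probability sketch. Every number below (the annual catastrophe risk, the lengths of the two windows, the takeover probability) is a hypothetical placeholder chosen only to show the shape of the argument.

```python
# Illustrative only: toy model of the time-of-perils trade-off with
# made-up numbers. Assumes a constant annual probability of a
# civilisation-ending catastrophe while we remain in our current state.

def p_survive(annual_risk: float, years: float) -> float:
    """Probability of getting through `years` of perils with a constant annual risk."""
    return (1 - annual_risk) ** years

annual_catastrophe_risk = 0.005  # hypothetical 0.5%/year background risk

# Slow path: 100 years in the time of perils, no extra takeover risk.
p_slow = p_survive(annual_catastrophe_risk, 100)

# Fast path: acceleration shortens the window to 30 years but adds a
# hypothetical 10% chance of a bad AI takeover.
p_takeover = 0.10
p_fast = (1 - p_takeover) * p_survive(annual_catastrophe_risk, 30)

print(f"P(survive) slow path: {p_slow:.3f}")  # ~0.606
print(f"P(survive) fast path: {p_fast:.3f}")  # ~0.774
# Under these toy numbers, accelerating raises overall survival
# probability despite increasing takeover risk; different inputs can
# easily flip the conclusion.
```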
A more extreme version of this is that, given the massively greater efficiency with which a digital consciousness could convert matter and energy to utilons
I have a post where I conclude the above may well apply not only to digital consciousness, but also to animals:
I calculated the welfare ranges per calorie consumption for a few species.
They vary a lot: the values for bees and pigs are 4.88 k and 0.473 times as high as the value for humans.
They are generally higher for non-human animals: 5 of the 6 species I analysed have values higher than that of humans.
The lower the calorie consumption, the higher the median welfare range per calorie consumption.
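The calculation behind those summary points is just welfare range divided by calorie consumption, normalised to the human value. Here is a minimal sketch of it; the welfare ranges and daily calorie figures below are hypothetical stand-ins, not the inputs used in the post.

```python
# Illustrative only: the welfare-range-per-calorie comparison described
# above, with placeholder inputs. The welfare ranges and calorie
# consumptions here are hypothetical stand-ins, not the post's figures.

species = {
    # name: (welfare range relative to humans, calories consumed per day)
    "human": (1.0, 2000.0),
    "pig":   (0.5, 2100.0),   # hypothetical values
    "bee":   (0.07, 0.03),    # hypothetical values
}

human_wr, human_cal = species["human"]
human_wr_per_cal = human_wr / human_cal

for name, (wr, cal) in species.items():
    wr_per_cal = wr / cal
    ratio = wr_per_cal / human_wr_per_cal
    print(f"{name:>5}: welfare range per calorie = {ratio:.3g} x human")
# With inputs like these, species with very low calorie consumption
# (e.g. bees) end up with a far higher welfare range per calorie than
# humans, which is the pattern the post reports: most species above the
# human value, and the ratio rising as calorie consumption falls.
```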