Plenty of people want wealth and power, which are “conducive to gaining control over [parts of] humanity”.
I agree, but I think very few people want to acquire e.g. 10 T$ of resources without the broad consent of others. In addition, if a single AI system expressed such a desire, humans would not want to scale up its capabilities.
I agree with Robin Hanson on this question. However, I think humans will likely become an increasingly small fraction of the world over time, as AIs become a larger part of it. Just as hunter-gatherers are threatened by industrial societies, so too may biological humans one day become threatened by future AIs. Such a situation may not be very morally bad (or deserving the title “existential risk”), because humans are not the only morally important beings in the world. Yet, it is still true that AI carries a great risk to humanity.
I agree biological humans will likely become an increasingly small fraction of the world, but it does not follow that AI carries a great risk to humans[1]. I would not say people born after 1960 carry a great risk to people born before 1960, even though the fraction of global resources controlled by the latter is becoming increasingly small. I would only consider that AI poses a great risk to humans if, in the process of losing control over resources, humans were expected to suffer significantly more than they do in their typical lives (which already involve some suffering).
I agree, but I think very few people want to acquire e.g. 10 T$ of resources without the broad consent of others.
I think I simply disagree with the claim here; I don’t think it’s true. I think many people would want to acquire $10T without the broad consent of others, if they had the ability to obtain such wealth and could actually spend it (here I’m assuming they actually control this quantity of resources and don’t get penalized for having acquired it without the broad consent of others, because that would change the scenario). It may be that fewer than 50% of people have such a desire, but I’d be very surprised if it were <1%, and I’d even be surprised if it were <10%.
I agree biological humans will likely become an increasingly small fraction of the world, but it does not follow that AI carries a great risk to humans[1]. I would not say people born after 1960 carry a great risk to people born before 1960, even though the fraction of global resources controlled by the latter is becoming increasingly small.
I think humans born after 1960 do pose a risk to humans born before 1960 in some ordinary senses. For example, the younger humans could vote to decrease medical spending, which could lead to early death for the older humans. They could also vote to increase taxes on people who have accumulated a lot of wealth, which very disproportionately hurts old people. This is not an implausible risk either; I think these things have broadly happened many times in the past.
That said, I suspect part of the disagreement here is about time scales. In the short and medium term, I agree: I’m not so much worried about AI posing a risk to humanity. I was really only talking about long-term scenarios in my above comment.
Thanks for following up, Matthew.
You said “risk to humanity” instead of “risk to humans”. I prefer the latter, because “humanity” is sometimes used to include other beings.