I agree, but I think very few people want to acquire, e.g., $10T of resources without the broad consent of others.
I think I simply disagree with the claim here. I think it’s not true. I think many people would want to acquire $10T without the broad consent of others, if they had the ability to obtain such wealth (and they could actually spend it; here I’m assuming they actually control this quantity of resources and don’t get penalized for the fact that it was acquired without the broad consent of others, because that would change the scenario). It may be that fewer than 50% of people have such a desire. I’d be very surprised if it were <1%, and I’d even be surprised if it were <10%.
I agree biological humans will likely become an increasingly small fraction of the world, but it does not follow that AI carries a great risk to humans[1]. I would not say people born after 1960 carry a great risk to people born before 1960, even though the fraction of the global resources controlled by the latter is becoming increasingly small.
I think humans born after 1960 do pose a risk to humans born before 1960 in some ordinary senses. For example, the younger humans could vote to decrease medical spending, which could lead to early death for the older humans. They could also vote to increase taxes on people who have accumulated a lot of wealth, which disproportionately hurts old people. Nor is this an implausible risk; I think these things have broadly happened many times in the past.
That said, I suspect part of the disagreement here is about time scales. In the short and medium term, I agree: I’m not so much worried about AI posing a risk to humanity. I was really only talking about long-term scenarios in my above comment.