I have no doubt that if one human became superintelligent that would also have a high risk of disaster, precisely because they would have preferences that I don’t share (probably selfish ones)
Thanks for the comment, Tristan.
I would worry if a single human had much more power than all other humans combined. Likewise, I would worry if an AI agent had more power than all other AI agents and humans combined. However, I think the probability of any of these scenarios becoming true in the next 10 years is lower than 0.001 %. For context, Elon Musk has a net worth of 765 billion $, which is 0.543 % (= 765*10^9/(141*10^12)) of the 141 T$ market cap of all publicly listed companies.
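The net-worth share above can be sanity-checked in a couple of lines (both figures are taken from the comment, not independently verified):

```python
# Figures from the comment (not independently verified).
net_worth = 765e9            # Elon Musk's net worth, $
global_market_cap = 141e12   # market cap of all publicly listed companies, $

share = net_worth / global_market_cap
print(f"{share:.3%}")  # → 0.543%
```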
Elon Musk has already used this power to take actions that will potentially kill millions, by funding the Trump campaign heavily enough to get it to shut down USAID. I think that should worry us, and the prospect of people amassing even more power should worry us even more.
Hi Guy. Elon Musk was not the only person responsible for the recent large cuts in foreign aid from the United States (US). In addition, I believe outcomes like human extinction are way less likely. I agree it makes sense to worry about concentration of power, but not about extreme outcomes like human extinction.
Extinction perhaps not, but I think eternal autocracy is definitely possible.
I think the evolution analogy becomes relevant again here: consider that the genus Homo was at first more intelligent than other species, but not more powerful than all of them combined… until one jump in intelligence suddenly let Homo sapiens wreak havoc across the globe. Similarly, there might be a tipping point in AI intelligence beyond which fighting back very suddenly becomes infeasible. I think this is a much better analogy than Elon Musk, because, like an evolving species, a superintelligent AI can multiply and self-improve.
I think a good point that Y&S make is that we shouldn't expect to know where the point of no return is, and should be prudent enough to stop well before it. I suppose you must have some source or reason for the 0.001 % confidence claim, but it seems pretty wild to me to be so confident in a field that is still evolving and, at least from my perspective, pretty hard to understand.
It is unclear to me whether all humans together are more powerful than all other organisms on Earth together. It depends on what is meant by powerful. The power consumption of humans is 19.6 TW (= 1.07 + 18.5), only 0.700 % (= 19.6/(2.8*10^3)) of the 2.8*10^3 TW of all organisms. In any case, all humans together being more powerful than all other organisms on Earth together is still way more likely than the most powerful human being much more powerful than all other organisms on Earth together.
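As a quick check of the share just mentioned (the 19.6 TW and 2.8*10^3 TW figures are the comment's own estimates, not independently verified):

```python
# Power consumption figures from the comment (not independently verified).
human_power = 19.6           # TW, = 1.07 + 18.5 (rounded)
all_organisms_power = 2.8e3  # TW, all organisms on Earth

share = human_power / all_organisms_power
print(f"{share:.3%}")  # → 0.700%
```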
My upper bound of 0.001 % is just a guess, but I do endorse it. One can have a best guess that an event is very unlikely while still being very uncertain about its probability. For example, one could believe an event has a probability of 10^-100 to 10^-10, which would imply it is super unlikely despite 90 (= -10 - (-100)) orders of magnitude (OOMs) of uncertainty about the probability.
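A tiny numerical restatement of this point, using the illustrative 10^-100 to 10^-10 interval from the example above:

```python
import math

# Illustrative credence interval from the comment: even with 90 orders of
# magnitude of uncertainty about the probability, the upper bound alone
# is enough to call the event very unlikely.
low, high = 1e-100, 1e-10

ooms = math.log10(high / low)  # orders of magnitude of uncertainty
print(ooms)   # ~90.0
print(high)   # even the most pessimistic end of the interval is tiny
```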
By power I mean the ability to change the world according to one's preferences. Humans clearly dominate today in terms of this kind of power. Our power is limited, but it is not the case that other organisms have power over us: while we may rely on them, they are not able to leverage that dependency. Rather, we use them as much as we can.
No human is currently so powerful as to have power over all other humans, and I think that's definitely a good thing. But it doesn't seem like it would take much more of an advantage for one intelligent being to dominate all the others.
Are you thinking about humans as an aligned collective in the 1st paragraph of your comment? I agree all humans coordinating their actions together would have more power than other groups of organisms with their actual levels of coordination. However, such a level of coordination among humans is not realistic. All 10^30 bacteria (see Table S1 of Bar-On et al. (2018)) coordinating their actions together would arguably also have more power than all humans with their actual level of coordination.
I agree it is good that no human has power over all humans. However, I still think one being dominating all others has a probability lower than 0.001 % over the next 10 years. I am open to bets against short AI timelines, or what they supposedly imply, up to 10 k$. Do you see any that we could make that is good for both of us under our own views?