I think the evolution analogy becomes relevant again here: consider that the genus Homo was at first more intelligent than other species but not more powerful than their numbers combined… until one jump in intelligence suddenly let Homo sapiens wreak havoc across the globe. Similarly, there might be a tipping point in AI intelligence beyond which fighting back very suddenly becomes infeasible. I think this is a much better analogy than Elon Musk, because, like an evolving species, a superintelligent AI can multiply and self-improve.
I think a good point that Y&S make is that we shouldn’t expect to know where the point of no return is, and should be prudent enough to stop well before it. I suppose you must have some source/reason for the 0.001 % confidence claim, but it seems pretty wild to me to be so confident in a field like this that is evolving and—at least from my perspective—pretty hard to understand.
It is unclear to me whether all humans together are more powerful than all other organisms on Earth together. It depends on what is meant by powerful. The power consumption of humans is 19.6 TW (= 1.07 + 18.5), only 0.700 % (= 19.6/(2.8*10^3)) of that of all organisms. In any case, all humans together being more powerful than all other organisms on Earth together is still way more likely than the most powerful human being much more powerful than all other organisms on Earth together.
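For concreteness, here is a minimal sketch of that arithmetic in Python, using only the figures quoted above (the two components 1.07 TW and 18.5 TW for humans, and 2.8*10^3 TW for all organisms); the variable names are just illustrative:

```python
# Sanity check of the power-consumption figures quoted above.
# Inputs are just the numbers in the comment; the 2.8e3 TW total for all
# organisms and the 1.07 + 18.5 TW breakdown for humans are taken as given.
human_components_tw = [1.07, 18.5]   # TW (the two terms quoted for humans)
all_organisms_tw = 2.8e3             # TW (quoted total for all organisms)

human_total_tw = sum(human_components_tw)
human_fraction = human_total_tw / all_organisms_tw

print(f"Human power consumption: {human_total_tw:.1f} TW")   # 19.6 TW
print(f"Fraction of all organisms: {human_fraction:.3%}")    # ~0.699 %
```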
My upper bound of 0.001 % is just a guess, but I do endorse it. You can have a best guess that an event is very unlikely, but still be super uncertain about its probability. For example, one could believe an event has a probability of 10^-100 to 10^-10, which would imply it is super unlikely despite 90 (= −10 - (-100)) orders of magnitude (OOMs) of uncertainty in the probability.
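To illustrate, here is a minimal sketch of that toy example in Python, assuming (just for illustration) that the probability is log-uniform between 10^-100 and 10^-10:

```python
# Toy illustration: huge uncertainty about a probability (90 OOMs here)
# is compatible with the event being extremely unlikely under any draw.
# Assumption for the sketch: the probability is log-uniform in [1e-100, 1e-10].
import numpy as np

rng = np.random.default_rng(0)
log10_p = rng.uniform(-100, -10, size=1_000_000)  # spans 90 orders of magnitude
p_samples = 10.0 ** log10_p

print(f"Span of uncertainty: {(-10) - (-100)} OOMs")
print(f"Largest sampled probability: {p_samples.max():.1e}")       # still only ~1e-10
print(f"Median sampled probability:  {np.median(p_samples):.1e}")  # ~1e-55
```

Even at the very top of that range, the event stays at or below 10^-10, so the best guess "very unlikely" survives the 90 OOMs of uncertainty.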
By power I mean the ability to change the world according to one’s preferences. Humans clearly dominate today in terms of this kind of power. Our power is limited, but it is not the case that other organisms have power over us, because, while we might rely on them, they are not able to leverage that dependency. Rather, we use them as much as we can.
No human is currently so powerful as to have power over all other humans, and I think that’s definitely a good thing. But it doesn’t seem like it would take much more of an advantage for one intelligent being to dominate all the others.
Are you thinking about humans as an aligned collective in the 1st paragraph of your comment? I agree all humans coordinating their actions together would have more power than other groups of organisms with their actual levels of coordination. However, such a level of coordination among humans is not realistic. All 10^30 bacteria (see Table S1 of Bar-On et al. (2018)) coordinating their actions together would arguably also have more power than all humans with their actual level of coordination.
I agree it is good that no human has power over all humans. However, I still think one being dominating all others has a probability lower than 0.001 % over the next 10 years. I am open to bets against short AI timelines, or what they supposedly imply, up to 10 k$. Do you see any that we could make that is good for both of us under our own views?