AIs could help us achieve what we want. We could become extremely wealthy, solve aging and disease, find ways of elevating well-being, maybe even solve wild animal suffering, and accelerate alternatives to meat. I’m concerned about s-risks and the possibility of severe misalignment, but I don’t think either are default outcomes. I just haven’t seen a good argument for why we’d expect these catastrophic scenarios under standard incentives for businesses. Unless you think that these risks are probable, why would you think AI is an exception to the general trend of technology being good?
Without getting into whether or not it’s reasonable to expect catastrophe as the default under standard incentives for businesses, I think it’s reasonable to hold the view that AI is probably going to be good while still thinking that the risks are unacceptably high.
If you think the odds of catastrophe are 10% — but otherwise think the remaining 90% is going to lead to amazing and abundant worlds for humans — you might still conclude that AI doesn’t challenge the general trend of technology being good.
But I think it’s also reasonable to conclude that 10% is still way too high given the massive stakes and the difficulty of reversing or changing course, which is disanalogous with most other technologies. IMO, the high stakes plus the difficulty of changing course are sufficient to override the “tech is generally good” heuristic.
I also think existential risk from AI is way too high. That’s why I strongly support AI safety research, careful regulation and AI governance. I’m objecting to the point about whether AI should be seen as an exception to the rule that technology is good. In the most probable scenario, it may well be the best technology ever!
100% agree, that’s my view: an 80+% chance of being good (on a spectrum of good too, not just utopia-good), but an unacceptably high risk of being bad. And within that remaining ~20% (or whatever it is) of possible bad outcomes, most of the bad in my mind is far from existential (a bad actor controls AI, AI drives inequality to the point of serious unrest and war for a time, etc.).
It’s interesting to me that even within this AI safety discussion, a decent number of comments don’t seem to have a bell curve of outcomes in mind — many still seem to be looking at a binary between techno-utopia and doom. I do recognise that it’s reasonable to think those two are by far the most likely options, though.
If this debate were about whether we should do anything to reduce AI risk, then I would strongly be on the side of doing something. I’m not an effective accelerationist. I think AI will probably be good, but that doesn’t mean I think we should simply wait around until it happens. I’m objecting to a narrower point about whether we should view AI as an exception to the general rule that technology is good.
I think the answer to that question depends on how catastrophically bad sufficiently capable tech could be, on the negative externalities of tech, and on whether you include tech designed to cause harm, like weapons. I have a very positive view of most technology, but I’m not sure how a category that included all of those would look in the end, given the tail risks.
I agree with most of the above, but I’m left more confused as to why you don’t already see AI as an exception to tech progress being generally good.