100% agree, that’s my view. 80+% chance of being good (on a spectrum of good, too, not just utopia-good), but an unacceptably high risk of being bad. And within that remaining 20ish percent (or whatever it is) of possible bad outcomes, most of the bad in my mind is far from existential (a bad actor controls AI, AI drives inequality to the point of serious unrest and war for a time, etc.).
It’s interesting to me that even within this AI safety discussion, a decent number of comments don’t seem to have a bell curve of outcomes in mind; many still seem to be looking at a binary between techno-utopia and doom. I do recognise that it’s reasonable to think those two are by far the most likely options, though.
If this debate were about whether we should do anything to reduce AI risk, then I would strongly be on the side of doing something. I’m not an effective accelerationist. I think AI will probably be good, but that doesn’t mean I think we should simply wait around until it happens. I’m objecting to a narrower point about whether we should view AI as an exception to the general rule that technology is good.
I think the answer to that question depends on how catastrophically bad sufficiently capable tech could be, on the negative externalities of tech, and on whether you include tech designed to cause harm, like weapons. I have a very positive view of most technology, but I’m not sure how a category that included all of those would look in the end, due to the tail risks.