If this debate were about whether we should do anything to reduce AI risk, then I would strongly be on the side of doing something. I’m not an effective accelerationist. I think AI will probably be good, but that doesn’t mean I think we should simply wait around until it happens. I’m objecting to a narrower point about whether we should view AI as an exception to the general rule that technology is good.
I think the answer to that question depends on how catastrophically bad technology of sufficiently high capability could be, on the negative externalities of technology, and on whether you include technology designed to cause harm, like weapons. I have a very positive view of most technology, but I'm not sure how a category that included all of those would look in the end, given the tail risks.