(Clarification about my views in the context of the AI pause debate)
I’m finding it hard to communicate my views on AI risk. I feel like some people are responding to the general vibe they think I’m giving off rather than the actual content. Other times, it seems like people will focus on a narrow snippet of my comments/post and respond to it without recognizing the context. For example, one person interpreted me as saying that I’m against literally any AI safety regulation. I’m not.
For full disclosure, my views on AI risk can be loosely summarized as follows:
I think AI will probably be very beneficial for humanity.
Nonetheless, I think that there are credible, foreseeable risks from AI that could do vast harm, and we should invest heavily to ensure these outcomes don’t happen.
I also don’t think technology is uniformly harmless. Plenty of technologies have caused net harm. Factory farming is a giant net harm that might have even made our entire industrial civilization a mistake!
I’m not blindly against regulation. I think all laws can and should be viewed as forms of regulation, and I don’t think it’s feasible for society to exist without laws.
That said, I’m also not blindly in favor of regulation, even for AI risk. You have to show me that the benefits outweigh the harms.
I am generally in favor of thoughtful, targeted AI regulations that align incentives well, and reduce downside risks without completely stifling innovation.
I’m open to extreme regulations and policies if or when an AI catastrophe seems imminent, but I don’t think we’re in such a world right now. I’m not persuaded by the arguments that people have given for this thesis, such as Eliezer Yudkowsky’s AGI ruin post.
Thanks, that seems like a pretty useful summary.