I think you’ve summarized the general state of EA views on AI x-risk well, thanks! My views* are considered extreme around here, but it’s worth noting that there also seems to be a vocal contingent of us who take AI x-risk less seriously, at least on the forum, and I wonder whether this reflects a broader trend. (epistemic status: low; I have no hard data beyond the impression that pro-AI posts seem more common here)
*I think any substantial (>0.01%) risk of extinction due to AI action in the next century warrants a total and “permanent” (>50 years) pause on all AI development, enforced through international law