I want to note that both Philosophy Tube's and Sabine Hossenfelder's scepticism about AI risk stems from AGI's reliance on extraordinary hardware capacity. They both believe it will be very difficult for an AGI to copy itself because suitable hardware won't exist in the world; an AGI would therefore be physically bound, limited in number, and easier to deal with. I think introductory resources should address this criticism more often. For example, there is no mention of it in 80,000 Hours' problem profile on this topic.