I want to note that both Philosophy Tube's and Sabine Hossenfelder's scepticism about AI risk stems from AGI's reliance on extraordinary hardware capacities. They both believe it will be very difficult for an AGI to copy itself because there won't be suitable hardware in the world. Therefore AGI would be physically bound, limited in number, and easier to deal with. I think introductory resources should address this criticism more often. For example, there isn't a mention of it in 80,000 Hours' problem profile on this topic.