Just my anecdotal experience, but when I ask a lot of EAs working in or interested in AGI risk why they think it’s a hugely important x-risk, one of the first arguments that comes to people’s minds is some variation on “a lot of smart people [working on AGI risk] are very worried about it”. My model of many people in EA interested in AI safety is that they use this heuristic as a dominant factor in their reasoning — which is perfectly understandable! After all, formulating a view of the magnitude of risk from transformative AI without relying on any such heuristics is extremely hard. But I think this post is a valuable reminder that it’s not particularly good epistemics for lots of people to think like this.
when I ask a lot of EAs working in or interested in AGI risk
Can I ask roughly what work they’re doing? Again I think it makes more sense if you’re earning-to-give or doing engineering work, and less if you’re doing conceptual or strategic research. It also makes sense if you’re interested in it as an avenue to learn more.