If, like me, you think that EA as a whole is ridiculously inflating the risk of AI, then you also have to think that there is some flaw in EA culture and decision-making behavior that is causing these incorrect beliefs or bad prioritization. It seems reasonable, when opposing these beliefs, to critique both the object-level flaws and the wider EA issues that allowed them to go unnoticed.
This seems very reasonable.
For example, I don’t think your criticism applies to the vulture post, because the differing financial incentives of being an AGI risk believer vs. a skeptic are probably a contributor to AI risk overestimation, which is a valuable thing to point out.
I don’t think this makes sense as a retroactive explanation (though it seems very plausible as a prospective prediction going forwards). I think the leaders of longtermist orgs are mostly selected from a) people who already cared a lot about AI risk or longtermism before EA was much of a thing, b) people (like Will MacAskill) who updated fairly early on in the movement’s trajectory (back when much more money was put into neartermist community building/research than longtermist community building/research), or c) funders.
So I think it is very much not the case that “The vultures are circling” is an accurate diagnosis of the epistemics of EA community leaders.
(To be clear, I was one of the people who updated towards AI risk and related topics fairly late (late 2017ish?), so I don’t have any strong claims to epistemic virtue myself in this domain.)