I think that instead of talking about potential failures in the way the EA community prioritized AI risk, it might be better to talk about something more concrete, e.g.
- The views of the average EA
- How much money was given to AI
- How many EAs shifted their careers to be AI-focused as opposed to something else that deserved more EA attention
If we believe there were mistakes in the concrete actions people have taken, e.g. mistaken funding decisions or mistaken career changes (I'm not sure that there were), we should look at the process that led to those decisions and address that process directly.
Targeting ‘the views of the average EA’ seems pretty hard. I do think it might be important, because it has downstream effects on things like recruitment, external perception, funding, etc. But then I think we need to have a story for how we affect the views of the average EA (as Ben mentions). My guess is that we don’t have a story like that, and that’s a big part of ‘what went wrong’—the movement is growing in a chaotic way that no individual is responsible for, and that can lead to collectively bad epistemics.
‘Encouraging EAs to defer less’ and ‘expressing more public uncertainty’ could be part of the story for helping the average EA have better views. It also seems possible to me that we want some kind of centralized official source for presenting EA beliefs, one that maintains an up-to-date best case for and against certain views (though this obviously has its own issues). Then we could be more confident that people have come to their views after being exposed to alternatives, and we would have something concrete to point to when we worry that there hasn't been enough criticism.