Thanks for your thoughtful response, James—I really appreciate it.
This is an interesting point and one I hadn’t considered. I find it slightly hard to believe, though, as I imagine EA is quite esoteric (e.g. full of weird moral views), so I struggle to imagine many people clamouring to work for an organisation focused on wild animal welfare or AI safety when they could work on an issue they cared about more (e.g. climate change) for a similar salary.
My impression is that there are a fair number of people who apply to EA jobs who, while of course being positively disposed towards EA, have a fairly shallow understanding of it—and who would be sceptical of the aspects of EA they find “weird”. I also think a decent share of them aren’t put off by a salary that isn’t very high (especially since their alternative employment may be in the non-EA non-profit sphere).
Posts such as Vultures Are Circling highlight people trying to “game” the system in order to access EA funding, and I think this problem will only grow.
I am not that well-informed, but fwiw—as I wrote in the thread—I think that people engaging in motivated reasoning, fooling themselves that their projects are actually effective, is a bigger problem. And as discussed, I think the tendency to do that isn’t much correlated with willingness to accept a lower salary.
Maybe I’m overplaying the problem that EA recruiters face and it’s actually extremely easy to discern values using various recruitment processes, but I think this is unlikely.
Sorry, no, I didn’t mean to suggest that. I think it’s in fact quite hard. I was just talking about which strategies are relatively more or less promising, not about how hard it is to determine value-alignment in general.