Hey Stefan, thanks again for this response! I’ll try to reply with the attention it deserves.
I think there are non-trivial numbers of highly committed effective altruists—who would make very careful decisions regarding what research questions to prioritise and tackle, and who would be very careful about hiring decisions—who would not be willing to work for a low salary.
I definitely agree, and I talk about this in my piece as well; e.g. in the introduction I say: “There are clear benefits e.g. attracting high-calibre individuals that would otherwise be pursuing less altruistic jobs, which is obviously great.” So I don’t think we’re in disagreement about this; rather, I’m questioning where the line should be drawn, as there must be some considerations that stop us from raising salaries indefinitely. Furthermore, in my diagrams you can see that there are similarly altruistic people who would only be willing to work at higher salaries (the shaded area below).
Conversely, I think there are many people who, e.g., come from the larger non-profit or do-gooding world, who would be willing to work for a low salary but who wouldn’t be very committed to effective altruist principles.
This is an interesting point and one I hadn’t considered. That said, I find it slightly hard to believe: I imagine EA as being quite esoteric (e.g. full of weird moral views), so I struggle to imagine many people clamouring to work for an organisation focused on wild animal welfare or AI safety when they could work on an issue they cared about more (e.g. climate change) for a similar salary.
So I don’t think we have any particular reason to expect that lower salaries would be the most effective way of ensuring that decisions about, e.g. research prioritisation or hiring are value-aligned. That is particularly so since, as you notice in the introduction, lower salaries have other downsides.
Again, I would agree that it’s not the most effective way of ensuring value alignment within organisations, but I would say it’s an important factor.
For instance, in research on the general population led by Lucius Caviola, we found a relatively weak correlation between what we call “expansive altruism” (willingness to give resources to others, including distant others) and “effectiveness-focus” (willingness to choose the most effective ways of helping others). Expansive altruism isn’t precisely the same thing as willingness to work for a low salary, and things may look a bit different among potential applicants to effective altruist jobs—but it nevertheless suggests that willingness to work for a low salary need not be as useful a costly signal as it may seem.
This was actually really useful for me, and I would definitely say I had been conflating “willingness to work for a lower salary” with “value alignment”. I’ve updated towards your view: “effectiveness-focus” is a crucial component of EA that wouldn’t be selected for simply by willingness to take a lower salary, which might more accurately map to “expansive altruism”.
For these reasons, I think it’s better for EA recruiters to try to gauge, e.g. inclinations towards cause-neutrality, willingness to overcome motivated reasoning, and other important effective altruist traits, directly, rather than to try to infer them via their willingness to accept a low salary—since those inferences will typically not have a high degree of accuracy.
I agree this is probably the best outcome and certainly what I would like to happen, but I also think it’s challenging. Posts such as Vultures Are Circling highlight people trying to “game” the system in order to access EA funding, and I think this problem will only grow. Therefore I think EA recruiters might find it difficult to discern between someone who is 7/10 EA-aligned and someone who is 8/10 EA-aligned, which I think could matter at a community level. Maybe I’m overplaying the problem that EA recruiters face and it’s actually extremely easy to discern values using various recruitment processes, but I think this is unlikely.
Thanks for your thoughtful response, James—I much appreciate it.
This is an interesting point and one I hadn’t considered. That said, I find it slightly hard to believe: I imagine EA as being quite esoteric (e.g. full of weird moral views), so I struggle to imagine many people clamouring to work for an organisation focused on wild animal welfare or AI safety when they could work on an issue they cared about more (e.g. climate change) for a similar salary.
My impression is that a fair number of people who apply to EA jobs, while of course being positive towards EA, have a fairly shallow understanding of it, and would be sceptical of the aspects of EA they find “weird”. I also think a decent share of them aren’t put off by a salary that isn’t very high (especially since their alternative employment may be in the non-EA non-profit sphere).
Posts such as Vultures Are Circling highlight people trying to “game” the system in order to access EA funding, and I think this problem will only grow.
I am not that well-informed, but fwiw, as I wrote in the thread, I think a bigger problem is people engaging in motivated reasoning and fooling themselves that their projects are actually effective. And, as discussed, I think the tendency to do that isn’t much correlated with willingness to accept a lower salary.
Maybe I’m overplaying the problem that EA recruiters face and it’s actually extremely easy to discern values using various recruitment processes, but I think this is unlikely.
Sorry, no, I didn’t mean to suggest that. I think it’s in fact quite hard. I was just talking about which strategies are relatively more and less promising, not about how hard it is to determine value alignment in general.