One possibility is that, because the EA organizations you hire for focus on causes that also have a lot of representation in the non-profit sector outside the EA movement, like global health and animal welfare, it’s easier for them to attract talent that is both very skilled and very dedicated. Since a focus on the far future is more limited to EA and adjacent communities, there is just a smaller talent pool of both extremely skilled and dedicated potential employees to draw from.
Far-future-focused EA orgs could be constantly suffering from this problem of a limited talent pool, to the point that they’d be willing to pay hundreds of thousands of dollars to find an extremely talented hire. In AI safety/alignment this wouldn’t be strange, since AI researchers can easily command salaries of hundreds of thousands of dollars at companies like OpenAI or Google. But this should only apply to orgs like MIRI or maybe FHI, which are far from the only orgs 80k surveyed.
So the data seems to imply that leaders at EA orgs which already have a dozen staff would pay 20%+ of their budget for the next single marginal hire. It still doesn’t make sense that, year after year, a lot of EA orgs apparently need talent so badly they’ll spend money they don’t have to get it.
there is just a smaller talent pool of both extremely skilled and dedicated potential employees to draw from
We have been screening fairly selectively on having an EA mindset, though, so I’m not sure how much larger our pool is compared to other EA orgs. In fact, you could maybe argue the opposite—given the prevalence of long-termism among the most involved EAs, it may be harder to convince them to work for us.
So the data seems to imply that leaders at EA orgs which already have a dozen staff would pay 20%+ of their budget for the next single marginal hire.
From my vantage point, though, their actions don’t seem consistent with this view.
Yeah, I’m still left with more questions than answers.