Those may seem like the wrong metrics given that the proportion of people doing direct work in EA is small compared to all the people engaging with EA. The organizations you listed are also highly selective, so only a few people will end up working at them. I think the bias reveals itself when opportunities such as MLAB come up and the number of applicants is overwhelming compared to the number of positions available, not to mention the additional people who may want to work in these areas but don't apply for various reasons. If one used engagement with things like forum posts as a proxy for the total time and energy people put into engaging with EA, I think it would turn out that people engage disproportionately more with the topics the OP listed. Though maybe that's just my bias, given that's the content I engage with the most!
The overwhelming number of applicants to MLAB is not indicative of a surplus of theoretical AI alignment researchers. Redwood Research seems to be solving problems today that are analogous to future AI alignment problems, so Redwood's work actually has decent feedback loops, as far as AI safety goes.
… But the number of people we need working on them should probably be more limited than the current trajectory …
I’ll therefore ask much more specifically, what are the most intellectually interesting topics in Effective Altruism, and then I’ll suggest that we should be doing less work on them—and list a few concrete suggestions for how to do that.
I feel like the OP was mostly talking about direct work. Even if they weren't, I think most of the impact EA will have will eventually cash out as direct work, so it would be a bit surprising if 'EA attention' and direct work were not very correlated AND we were losing a lot of impact because of problems in the attention bit rather than the direct work bit.
Agreed—neither Redwood nor MLAB were the type of alignment work that was being referenced in the post.
“I feel like the OP was mostly talking about direct work.”
No; see the various other comment threads.
I noticed the same when attending GPI conferences, which are well attended by EA-adjacent academics; that's why I picked infinite ethics as an example.
Which organisations? I think I only mentioned CFAR, which I'm not sure is very selective right now (since it isn't running hiring rounds).