Meta note: I believe this is now the most upvoted EA forum post of all time by a wide margin. Seems like it struck a chord with a lot of people. It’s probably worthwhile for people to write follow-up posts exploring issues related to human capital allocation, since it is a pretty central challenge for the movement. Example prompts:
Brainstorming Task Y for someone with an absurdly impressive resume.
Does the high difficulty of getting a job at an EA organization mean we should stop promoting EA? (What are the EA movement’s current bottlenecks?)
Consequentialist cluelessness and how it relates to funding speculative projects and early-stage organizations (some previous discussion here).
A meta point: A lot of the discussion here has focused on reducing the time spent applying. I think a more fundamental and important problem, based on the replies here and my own experiences, is that many, many EAs feel that either they’re working at a top EA org or they’re not contributing much. Since only a fraction of EAs can currently work at a top EA org due to supply vastly exceeding demand, even if the time spent applying goes down a lot, many EAs will end up feeling negatively about themselves and/or EA when they get rejected. See e.g. this post by Scott Alexander on the message he feels he gets from the community. A couple of excerpts below:
It just really sucks to constantly have one lobe of my brain thinking “You have to do [direct work/research], everybody is so desperate for your help and you’ll be letting them down if you don’t”, and the other lobe thinking “If you try to do the thing, you’ll be in an uphill competition against 2,000 other people who want to do it, which ends either in time wasted for no reason, or in you having an immense obligation to perform at 110% all the time to justify why you were chosen over a thousand almost-equally-good candidates”.
So instead I earn-to-give, and am constantly hit with messages (see above caveat! messages may not be real!) of “Why are you doing this? Nobody’s funding-constrained! Money isn’t real! Only talent constraints matter!” while knowing that if I tried to help with talent constraints, I would get “Sorry, we have 2,000 applicants per position, you’re imposing a huge cost on us by even making us evaluate you”.
I agree that if it’s true that “many EAs feel that either they’re working at a top EA org or they’re not contributing much,” then that is much worse than anything about application time cost and urgently needs to be fixed. I’ve never felt that way about EA org work vs. alternatives, so I may have just missed that this is a message many people are getting.
E.g. Scott’s post also says:
Should also acknowledge the possibility that “talent-constrained” means the world needs more clean meat researchers, malaria vaccine scientists, and AI programmers, and not just generic high-qualification people applying to EA organizations. This wasn’t how I understood the term but it would make sense.
…and my reply is “Yes, talent-constrained also means those other things, and it’s a big problem if that was unclear to a noticeable fraction of the community.”
FWIW I suspect there’s also something a bit more subtle going on than overly narrow misunderstandings of “talent-constrained,” e.g. something like Max Daniel’s hypothesis.
I think the votes for the old posts are not directly comparable with those in the new forum, since previously individuals could not give more than one upvote to a post. It may still be that this post would have been the most upvoted of all time even if the new voting system had been used for those old posts, however.
Good point, but this one has still received the most upvotes, assuming a negligible number of people downvoted it. At the time of writing, it has received 100 votes. According to https://ea.greaterwrong.com/archive, the only previous posts that received more than 100 points have fewer than 50 votes each. As far as I can tell, the second and third most voted-on posts are Empirical data on value drift at 75 and Effective altruism is a question at 68.
At time of writing, this post has 100 unique votes. Most are probably upvotes given its current karma (193): 100 votes summing to 193 karma means the average vote was worth nearly +2.
Not 100% sure, but I don’t recall any post on the old Forum having 100 votes.
Does the high difficulty of getting a job at an EA organization mean we should stop promoting EA? (What are the EA movement’s current bottlenecks?)
Promoting donations or Earning to Give seems fine. I think we should stop promoting ‘EA is talent constrained’. There is a sense in which EA is ‘talent constrained’, but the current messaging around it consistently misleads people, even very informed people such as the OP and some of the experts who gave him advice. On the other hand, EA can certainly absorb much more money. Many smaller orgs are certainly funding constrained. And at a minimum, people can donate to GiveDirectly if the other giving opportunities are filled.
Couldn’t agree more!
+1, thank you for highlighting this.
I’d love to collaborate with folks on the cluelessness aspect of this.
I believe GPI is doing work on further specifying what we mean by cluelessness & developing a taxonomy of it.
I’m personally interested in better understanding on-the-ground implications of cluelessness, e.g. what does it imply about which areas to focus on presently? Some preliminary work in that direction here.
I’ve thought a lot about cluelessness, and I could give you feedback on something you’re thinking of writing.
Nice. I’ve already written a sequence on it (first post here) – curious for your thoughts on it!
Also, I think Richard Ngo’s working on a piece on the topic, building off my sequence & the academic work that Hilary Greaves has done.
I wrote some comments on your sequence:
Most near-term interventions likely won’t be pivotal for the far future, so we can ignore their long-term effects to cooperate with near-term focused value systems.
Fight ambiguity aversion.
Fight status quo bias.
Balance steering capacity with object-level action.
Unexpected outcomes will largely fall into two categories: those we think we should have anticipated, and those we don’t think we reasonably could have anticipated. For the first category, I think we could do better at brainstorming unusual reasons why our plans might fail; I have a draft post on how to do this. For the second category, I don’t think there is much to do: if there is a blizzard all over California this midsummer, I will hold Californian authorities blameless for their failure to prepare for it.
I stumbled across this today; haven’t had a chance to read it but it looks relevant.