During the FTX era, it lost that for a while. Getting your project funded basically required that you had a pulse and sounded like an EA.
Do you mean just in AI safety/​meta? Because the FTXFF was only funding long-term projects, so I know there were still huge funding gaps in global poverty and animal welfare. And even in bio and nuclear security, there were still very large funding gaps.
I have been offered jobs that are billed as impactful and that pay vastly more money than I need while not providing any clear benefit for the world.
That’s great that you are so successful. But there are many well-qualified EAs/​rationalists (probably thousands) who still have not gotten an EA job. I think part of the disconnect between the people saying there is lots of money and the people saying they can’t get a job is that EAs are so well-qualified that they are used to applying for only a few jobs per offer. The number of applicants per EA job is something like 30-300, so it makes sense that the average person needs to apply to that many jobs to get one. But when EAs apply to 10 to 30 EA jobs and don’t get one, they tend to get frustrated. Also, there are just not that many EA jobs relevant to any individual EA’s qualifications.
Furthermore, there are enormous potential effective uses of money that don’t require EA labor, such as GiveDirectly, stockpiling PPE, etc.
So overall, I do think we should be careful to limit grift, but there is enormous room for effective use of funding in EA; one estimate put it at ~$1 trillion.
I definitely agree to some extent about FTX, though money did flow into some other spaces as well. But agree that I’m painting with too broad a brush there.
And definitely — I think I meant this post partially as a lamentation of what feels inevitable with funding. Our ability to make the world better might massively grow, but at the same time, it feels like something essential to EA’s past success (or at least, the culture and community that pushed it to do weird, fringe-y things that I think hold much of EA’s greatest promise) might be lost.