I think I’m sympathetic to the criticism, but I still feel like EA has sufficiently high hurdles to stop the grifters.
a) It’s not like you get a lot of money just by saying the right words. You might be able to secure early funds or funds for a local group, but at some point you will have to show results to get more money.
b) EA funding mechanisms are fast but not loose. I think the meme that you can get money for everything now is massively overblown. A lot of people who are EA-aligned didn’t get funding from the FTX Foundation, OpenPhil, or the LTFF. The internal bars for funders still seem to be hard to cross, and I expect this to hold for a while.
c) I’m not sure how the grifters would accumulate power and steer the movement off the rails. Either they start as grifters but actually get good results and then rise to power (at that point they might not be grifters anymore) or they don’t get any results and don’t rise to power. Overall, I don’t see a strong mechanism by which grifters rise to power without either ceasing to be grifters or blowing their cover. Maybe you could expand on that. I think the company analogy you’re making is less plausible in an EA context because (I believe) people update more strongly on negative evidence. It’s not just some random manager position that you’re putting at risk; there are lives at stake. But maybe I’m too naive here.
Either they start as grifters but actually get good results and then rise to power (at that point they might not be grifters anymore) or they don’t get any results and don’t rise to power.
I largely agree with this, but I think it’s important to keep in mind that “grifter” is not a binary trait. My biggest worry is not that people who are completely unaligned with EA would capture wealth and steer it into the void, but rather that, of 10 EAs, the one most prone to “grifting” would end up with more influence than the rest.
What makes this so difficult is that the line between ‘grifter’ and ‘skilled at navigating complicated social environments’ is pretty thin, and the latter is generally a desirable trait.
Generally I’m still not too worried about this, but I do think it’s a shame if we end up undervaluing talented people who are less good at ‘grifting’, resulting in an inefficient allocation of our human capital.
An example from my own life to illustrate the point: Someone jokingly pointed out to me that if I were to spend a few weeks in Oxford mingling with people, arguing for the importance of EU policy, that would potentially do more to change people’s minds than if I were to spend that time writing on the forum.
If this were true (I hope it’s not!), I don’t think that is how people should make up their minds about the importance of cause areas, and I will not participate in such a system. Someone more prone to grifting would, and would end up with more influence.
if I were to spend a few weeks in Oxford mingling with people, arguing for the importance of EU policy, that would potentially do more to change people’s minds than if I were to spend that time writing on the forum.
I also don’t know whether this is true, but the general idea that talking to people in person, individually, would be more persuasive than over text isn’t surprising. There’s a lower barrier to ideas flowing, you can better see how the other person is responding, and you don’t have to consider how people not in the conversation might misinterpret you.
This matches my personal experience as well.
Can you give any examples of AI safety organizations that became less able to get funding due to lack of results?
CSER is the obvious example in my mind, and there are other non-public examples.
Also RAISE: https://www.lesswrong.com/posts/oW6mbA3XHzcfJTwNq/raise-post-mortem