But this update is an important reminder that such a high degree of reliance on one funder (especially on the GCR side) represents a structural risk.
I assume that basically all senior EAs agree, and that they disprefer the current situation.
I’d flag though that I’m really not sure how well-equipped these nonprofits are to change this, especially without more dedicated resources.
Most nonprofit projects I see in EA seem to be funded approximately at cost. An organization gets a $200k grant to do work that costs it approximately $200k to do.
Compare this to fast-growing businesses. These businesses often have significant sales and marketing budgets; sometimes this makes up 20-60% of the cost of the business. This is how these businesses expand. If these businesses charged at cost, they would never grow.
I feel like it’s assumed that the nonprofits have the responsibility to find ways of growing, despite not getting much money from donors to actually do so. Maybe it’s assumed that they’ll do this in their spare time, and that it will be very easy?
It seems very reasonable to me that if we think growth is possible and valuable, potentially 20-40% of OP money should go to fund this growth, either directly (OP directly spending to find other opportunities) or indirectly (OP gives nonprofits extra overhead, which they use for fundraising work). I’m curious what you and others at OP think this rough number should be, and what the corresponding strategies should be.
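To make the intuition concrete, here’s a minimal toy model of the difference between at-cost funding and funding that includes a growth overhead. Every number and parameter here (`growth_overhead`, `fundraising_roi`, the assumption that fundraising spend this year compounds into new donations next year) is a made-up illustration, not an estimate:

```python
# Toy sketch: an org funded exactly at cost vs. one that also gets a "growth" overhead
# it can spend on fundraising. Assumes (hypothetically) that each dollar spent on
# fundraising this year brings in some multiple in new donations next year.

def project_budget(years, initial_budget, growth_overhead, fundraising_roi):
    """Return the org's budget for each year.

    growth_overhead: fraction of the budget granted on top of program costs, for fundraising.
    fundraising_roi: new donations next year per dollar spent on fundraising this year.
    """
    budgets = [initial_budget]
    for _ in range(years - 1):
        fundraising_spend = budgets[-1] * growth_overhead
        budgets.append(budgets[-1] + fundraising_spend * fundraising_roi)
    return budgets

# Funded at cost: no fundraising budget, so the org stays flat at $200k.
print(project_budget(5, 200_000, growth_overhead=0.0, fundraising_roi=3.0))
# With a 30% growth overhead and a (made-up) 3x return on fundraising spend,
# the budget roughly doubles each year.
print(project_budget(5, 200_000, growth_overhead=0.3, fundraising_roi=3.0))
```

Whether anything like that fundraising ROI is actually achievable is the real question, and finding out is itself the kind of work a dedicated growth budget would pay for.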
I agree, but I want to be clear that I don’t think senior EAs are innocent here. I agree with Habryka that this is a situation made in large part by senior EAs themselves, who actively went all in on only two funders (now down to one) and discouraged a lot of attempts to diversify philanthropy.
I discouraged people from earning to give in the past (though I updated sharply after 2022), and I largely regret that move. (I don’t think of myself as senior, especially at the time, but I’m unusually vocal online, so I wouldn’t be surprised if I had a disproportionate influence.)
Due to my borderline forum addiction, you probably have a disproportionate influence on me, haha. I will probably never earn to give though, so no harm done here ;).
I don’t know about that, your other work sounds pretty great!
Yep, this also makes sense.
I imagine responsibility is shared, as is the opportunity to improve things from here.
I don’t feel like I’ve witnessed too many cases of organizations discouraging attempts to diversify funding, but I trust that you have.
I’m thinking not just of organizations discouraging attempts to diversify their own funding, but also of discouraging earning to give, discouraging projects to bring in more donors, etc.
Yeah, that seems bad. For example, it felt like there was a big push a few years ago to build a huge AI safety researcher pipeline, and now I’m nervous that we don’t actually have the funding to handle all of that pipeline.
For sure. It’s not only the lack of funding to handle the pipeline; there also seems to be increasing concern around the benefit-to-harm tradeoff of technical AI research.
Perhaps, as with the heavy correction against earning to give a few years ago (which now looks like it was likely a mistake), there’s a lesson here about not overcorrecting against the status quo in any direction too quickly...
One obvious way that EA researchers could help improve the situation is to use comments like these to highlight where things are lacking, and to discuss where to improve. :)
is… that rot13'd for a reason? (it seemed innocuous to me)
I also hope this doesn’t come off as me being upset with OP / EA Funders. I think they’re overall great (compared to most alternatives), but also that they are very important—so when they do get something wrong, that’s a big deal and we should think through it.
Yeah, Cari and Dustin especially are a large part of what made a lot of this ecosystem possible in the first place, they seem like sincere people, and ultimately it’s “their money” and their choice to do with their money what they genuinely believe can achieve the most good.
I generally agree, but with reservations.
I think that Cari and Dustin’s funding has obviously created a lot of value. Maybe even ~60% of all EA value, in Shapley terms.
I personally don’t feel like I know much about what Cari and Dustin actually believe, other than that they funded OP. They both seem to have been fairly private.
At this point, I’m hesitant to trust any authority figure that much. “Billionaires in tech, focused on AI safety issues” currently has a disappointingly mixed track record.
‘ultimately it’s “their money” and their choice to do with their money what they genuinely believe can achieve the most good.’ → Sure, but this is also true of many[1] big donors. I think that all big donors should probably get more evaluation and criticism. I’m not sure how many specific actions I’d take differently upon knowing that “it’s their choice to do with their money what they genuinely believe can achieve the most good.”

“genuinely believe can achieve the most good” → Small point, but I’m sure some of this is political. “Achieve the most good” often means “makes the funder look better, arguably so that they could do more good later on.” Some funders pay for local arts museums as a way of gaining favor; EA funders sometimes pay for causes with particularly good optics for similar reasons. My guess is that the EA funders generally do this for ultimately altruistic reasons, but would admit that this set of incentives is pretty gnarly.
[1] Edited after someone flagged this. I had a specific reference class in mind; “all” is inaccurate.
Thinking about this more:
I think there’s some frame behind this, like,
“There are good guys and bad guys. We need to continue to make it clear that the good guys are good, and should be reluctant to draw attention to their downsides.”
In contrast, I try to think about it more like,
“There’s a bunch of humans, doing human things with human motivations. Some wind up producing more value than others. There’s expected value to be had by understanding that value, and understanding the positives/negatives of the most important human institutions. Often more attention should be spent trying to find mistakes being made than highlighting things going well.”
Thanks for elucidating your thoughts more here.
I agree that framing would be a (very) bad epistemic move. One thing I want to avoid is disincentivizing broadly good moves because their costs are more obvious/sharp to us. There are, of course, genuinely good reasons to criticize people who make mostly good but flawed decisions (such people are more amenable to criticism, so criticism of them is more useful, and their decisions are more consequential). And of course there are alternative framings where critical feedback is more clearly a gift, which I would want us to move more towards.
That said, all of this is hard to navigate well in practice.
Agreed! Ideally, “getting a lot of attention and criticism, but with people generally favorable” should be looked at far more favorably than “just not getting attention”. I think VCs get this, but many people online don’t.