This is year two, not year one.
See the new Q&A item addressing the need to build capacity; they could give to GiveWell, GiveDirectly, or via Coeff’s funds specific to their goals. They could also give via the Gates Foundation, etc. They can do this while building up their internal capacity, so they really don’t need to delay additional years.
They have incredibly short AGI timelines, so per their own views they can’t afford to move slowly. If they are still giving less than 5% of assets after they already claim to have AGI, that’s a huge failure. So in my view, your proposed 2028 target of giving so little that they keep more than doubling assets yearly is insanely conservative, not at all “aggressive, public ramp-up targets.”
That said, yes, I already agreed that actually ambitious public ramp-up commitments could be sufficient; as I said in the post, “if it’s done in the next two years, I will admit they are doing their jobs.” But they didn’t announce any such plans, and as noted in the post, the total giving commitment is a cash total certainly worth less than 1/6th of their (current, and rapidly growing) funds; that’s insanely low given that it is their total eventual commitment!
“What Exactly Would An International AI Treaty Say?” Is a Bad Objection
No, it would not. Per the frame that makes the argument more compelling, as I said: “Secondly, they may be even more successful in building significantly more powerful AI, transforming the world. Obviously, the nonprofit would become far wealthier, but given OpenAI’s mandate, it also becomes irrelevant.”
But within the first option, if they are actually more than doubling their value yearly (as implied by 100x in 6 years, which matches their current revenue growth continuing at its current rate), then giving away $20 billion per year, starting from their current valuation of $150 billion, means giving about 13% of current assets in the first year, and a rapidly shrinking share each year after that, so cumulatively they give away only a small fraction of their eventual endowment. And given that it’s hard to spend even 13% of $150b effectively, it’s going to be far harder to spend any large percentage of their $15 trillion endowment in later years!
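For what it’s worth, here is a back-of-the-envelope sketch of that arithmetic, using only the illustrative figures from the paragraph above (a hypothetical $20 billion/year of giving, a ~$150 billion current stake, and 100x growth over 6 years), not any real OpenAI projection:

```python
# Back-of-the-envelope sketch of the growth-vs-giving arithmetic above.
# All figures are the comment's illustrative assumptions, not real projections.
start_value = 150e9          # current stake, ~$150B
growth = 100 ** (1 / 6)      # ~2.15x per year, i.e. 100x over six years
annual_gift = 20e9           # hypothetical giving of $20B per year

value, total_given = start_value, 0.0
for _ in range(6):
    total_given += annual_gift
    value = (value - annual_gift) * growth

print(f"first-year gift / current assets: {annual_gift / start_value:.0%}")  # ~13%
print(f"endowment after 6 years: ${value / 1e12:.1f}T")                      # ~$11T even after giving
print(f"total given / final endowment: {total_given / value:.1%}")           # ~1%
```

On those assumptions, even $20 billion per year leaves the cumulative giving at roughly 1% of the endowment six years out, which is the point being made.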
To forestall an obvious objection, I do not endorse the decision of OpenAI to use this structure, and there are many other problems. However, the above arguments should apply according to the views they profess, which seems important.
$1 billion is not enough; OpenAI Foundation must start spending tens of billions each year
To be clear about my views, I do support spending on local community orgs, but as I said: “local organizations or those where I have personal affiliations or feel responsibilities towards are also important to me—but… this is conceptually separate from giving charity effectively, and as I mentioned, I donate separately from the 10% dedicated to charity.”
I am not saying everyone is malicious, nor that no one cares—but belief fixation can happen when a moderate, non-majority proportion of a population is incentivized to believe something is true. That isn’t incompatible with good motivations, and once people claim it’s true, it is about as hard to refute as it would have been to establish in the first place.
Very briefly, it’s unclear to me how much of the claimed impact of meta and community-building orgs is counterfactual. The incentives here push quite solidly against any impartial analysis. Also, as I’ve argued before, as an almost deontological point, I’m uncomfortable with people funding their social circle and community, and counting what would otherwise be considered dues for community organizations as their 10% giving to effective charity.
pretty well established though in the activist world that it is often effective to pick one specific thing to get a “win” on, at the right time.
It may be well established, but given the incentives in that world, the belief wouldn’t need to correlate with truth to have become well established.
Strong agree that absent new approaches the tailwind isn’t enough—but it seems unclear that pretraining scaling doesn’t have farther to go, and it seems that current approaches with synthetic data and training via RL to enhance one-shot performance have room left for significant improvement.
I also don’t know how much room there is left until we hit genius-level AGI or beyond, and at that point, even if we hit a wall, more scaling isn’t required, as the timeline basically ends.
the extinction scenario that Eliezer Yudkowsky has described. His scenario depends on the premise that AI systems could quickly develop advanced molecular nanotechnology capable of matching or even surpassing the sophistication of biological systems.
But that’s not the claim he makes!
To quote:
The concrete example I usually use here is nanotech, because there’s been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point.
Mostly agree. I’ve been involved in local orgs a bit more than most people in EA: I grew up in a house where my parents were often serving terms on various synagogue and school boards, and my wife has continued her family’s similar tradition. So I strongly agree that passionate alignment changes things—but even that rarely leads to boards setting the strategic direction.
I think a large part of this is that strategy is hard, as you note, and it’s very high-context for orgs. I still wonder who is best placed to track priority drift, and how much we want boards to own the strategic direction; it would be easy, but I think very unhelpful, for the board to basically just do what Holden suggests and only be in charge of the CEO—because a lot of the value from a board is, or can be, its broader strategic views and different knowledge. And for local orgs, that happens much more: the leaders need to convince board members to do things or make changes, rather than acting on their own and getting vague approval from the board. But, as a last point, it seems hard to do much of this for small orgs. Overhead from the board is costly, and I don’t know how much effort we should expect.
My board isn’t the reason for the lack of clarity—and it certainly is my job to set the direction. I don’t think any of them are particularly dissatisfied with how I’ve set the org’s agenda. But my conclusion is that I now disagree somewhat with Holden’s post, which partly guided me over the past couple of years: it’s more situational than that, and there are additional useful roles for the board.
Who sets my org’s agenda?
I’d find a breakdown informative, since the distribution, both across different frontier firms and between safety and non-safety roles, seems really critical, at least for my view of the program’s net impacts. (Of course, none of this tells us counterfactual impact, which might be moving people on net either way.)
The biggest unanswered, and I think critical, question:
What proportion are working for frontier labs (not “for-profit” generally, but the ones creating the risks), in which roles (how many are now in capabilities work?), and at which labs?
ALTER Israel Semiannual Update—End of 2025
I don’t think it’s that much of a sacrifice.
I don’t understand how this is an argument applicable to anyone other than yourself; other people clearly feel differently.
I also think that for many, the only difference in practice would be slightly lower savings for retirement.
If that is something they care about or worry about, it’s a difference—adding the word “only” doesn’t change that!
I’ve run very successful group brainstorming sessions with experts just to require them to actually think about a topic enough to realize what seems obvious to me. Getting people to talk through what the next decade of AI progress will look like didn’t make them experts, or even get them to the basic level I could have presented in a 15-minute talk—but it gave me a chance to push them beyond their cached thoughts, without them rejecting views they see as extreme, since they are the ones thinking them!
That’s a really good point, thanks! Though if they don’t have short timelines, it seems like they are being quite irresponsible as board members by not preventing Sam from making increasingly large bets on scaling. Of course, they might not be willing to cross him; the current board presumably learned the lesson from Ilya’s ill-fated decision.
Also, you’d need what are currently considered almost implausibly long timelines to think that spending more quickly doesn’t make sense.