Continuing my efforts to annoy everyone who will listen with this genre of question, what value of X would make this proposition seem true to you?
It would be better in expectation to have $X of additional funding available in the field in the year 2028 than an additional full-time AI safety researcher starting today.
Feel free to answer based on concrete example researchers if desired. Earlier respondents have based their answer on people like Paul Christiano.
I’d also be interested in hearing answers for a distribution of different years or different levels of research impact.
(This is a pretty difficult and high-variance forecast, so don’t worry, I won’t put irresponsible weight on the specifics of any particular answer! Noisy, shrug-filled answers are better than none for my purposes.)
Annoy away – it’s a good question! Of course, standard caveats to my answer apply, but there are a few caveats in particular that I want to flag:
It’s possible that by 2028 there will be one (or more) further longtermist billionaires who really open up the spigot, significantly decreasing the value of marginal longtermist money at that time
It’s possible that by 2028, AI would have gotten “weird” in ways that affect the value of money at that time, even if we haven’t reached AGI (e.g., certain tech stocks might have skyrocketed by then, or it might be possible to turn money into valuable research labor via AI)
You might be considering donation opportunities that significantly differ in value from other large funders in the field
This is all pretty opinionated and I’m writing it on the fly, so others on the LTFF may disagree with me (or I might disagree with myself if I thought about it at another time).
In principle, we could try to assign probability distributions to all the important cruxes and Monte Carlo this out. Instead, I’m just going to give my answer based on simplifying assumptions: that we still have one major longtermist donor who prioritizes AI safety to a similar degree as today, that things haven’t gotten particularly weird, that your donation opportunities don’t look that different from others’ and roughly match donation opportunities now,[1] etc.
One marginal funding opportunity to benchmark the value of donations against would be funding the marginal AI alignment researcher, which probably costs ~$100k/yr. Assuming a 10% yearly discount rate (in line with the long-term, inflation-adjusted returns to equities within the US), funding this in perpetuity is equivalent to a lump-sum donation now of $100k/0.10 = $1M, or a donation in 2028 of $1M × 1.1^5 ≈ $1.6M.[2]
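For concreteness, here’s a minimal sketch of that arithmetic, using only the ~$100k/yr cost and 10% discount rate assumed above:

```python
# Perpetuity / discounting arithmetic for the benchmark above.
annual_cost = 100_000   # ~$100k/yr to fund the marginal alignment researcher
discount_rate = 0.10    # ~long-run, inflation-adjusted US equity returns

# Present value of funding the researcher in perpetuity: cost / rate
pv_now = annual_cost / discount_rate             # $1,000,000

# Equivalent donation in 2028: compound the present value forward 5 years
value_2028 = pv_now * (1 + discount_rate) ** 5   # ~$1,610,000

print(f"Lump-sum equivalent today: ${pv_now:,.0f}")
print(f"Equivalent 2028 donation:  ${value_2028:,.0f}")
```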
Then the question becomes: how valuable is the marginal researcher (and how would you expect to compare against them)? Borrowing from Linch’s piece on the value of marginal grants to the LTFF, the marginal alignment grant is often a bit better than something like the following:
a late undergraduate or recent graduate from an Ivy League university or a comparable institution requests a grant to conduct independent research or comparable work in a high-impact field, but we don’t find the specific proposal particularly compelling. For example, the mentee of a fairly prominent AI safety… researcher may request 6-12 months’ stipend to explore a particular research project that their mentor(s) are excited about, but LTFF fund managers and some of our advisors are unexcited about. Alternatively, they may want to take an AGISF course, or to read and think enough to form a detailed world model about which global catastrophic risks are the most pressing, in the hopes of then transitioning their career towards combating existential risk.
In these cases, the applicant often shows some evidence of interest and focus (e.g., participation in EA local groups/EA Global or existential risk reading groups) and some indications of above-average competence or related experience, but nothing exceptional.
[Note: I think this sort of grant is well below the current funding threshold for the LTFF, given that we’re currently in a funding crunch. But I would generally expect, for the longtermist community as a whole over longer timespans, the marginal grant would be only a bit higher than that.]
Note that for many of the people funded on that kind of grant, the main expected benefit would come not from the direct work of the initial grant, but from the chance that the researcher winds up being surprisingly good at alignment research; so, in considering the value of the “marginally-funded researcher,” note that it would be someone with stronger signals for alignment research than described above.
So from this, if you think you’d be roughly equivalent to the marginally-funded alignment researcher (where it’s worth it to keep funding them based on their research output), I’d think your labor would be worth about $1–2M in donations in 2028.[3] It’s harder to estimate the value for people doing substantially better work than that. I think values 10x that would probably apply to a decent number of people, and numbers 100x higher would be rare but not totally unimaginable.
Or that current donors are appropriately weighing how much they should spend now vs. invest, so that even if the nature of donation opportunities differs, the (investment-adjusted) value of the donations is comparable. Note that I’m not trying to claim they are actually doing this, but the assumption makes the analysis easier.
There are a few simplifying assumptions here: I’m neglecting how growth in the cost of living/wages may raise this cost, and I’m also neglecting that the labor wouldn’t last in perpetuity but only for the remainder of the person’s career (presumably either until they reach retirement, or until AI forces them into retirement or takes over).
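As a rough sanity check on the “in perpetuity” simplification, here’s a sketch comparing the perpetuity value against a finite remaining career; the 30-year career length is purely an illustrative assumption of mine, not a figure from the text:

```python
# How much does assuming perpetual (rather than finite-career) labor overstate the value?
annual_cost = 100_000   # ~$100k/yr (from the benchmark above)
r = 0.10                # 10% discount rate (from the benchmark above)
years = 30              # hypothetical remaining career length, for illustration only

perpetuity_pv = annual_cost / r                        # $1,000,000
finite_pv = annual_cost * (1 - (1 + r) ** -years) / r  # ~$943,000

print(f"Perpetuity PV:        ${perpetuity_pv:,.0f}")
print(f"{years}-year career PV: ${finite_pv:,.0f}")
```

At a 10% discount rate the two differ by less than 6%, which is why ignoring career length doesn’t change the headline numbers much.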
In principle, a person who is only very marginally worth funding may be worth much less than this on net, since the value of their research is largely offset by the cost of funding them. In practice, I think this rough calculation still gives a good general ballpark for people to index on, as very few people are presumably almost exactly at the point of indifference.
This is a hard question to answer, in part because it depends a lot on the researcher. My wild guess for a 90% interval is $500k–$10M.
Thanks for breaking down the details! That’s very helpful. (And thanks to Lauro too!)