There are very expensive interventions that are financially constrained and could use up ~all EA funds, and their cost-benefit calculations take the probability of powerful AGI in a given time period as an input: e.g. twice the probability of AGI in the next 10 years doubles the chance the result gets to be applied, which justifies spending twice as much for a given result. That can make the difference between doing an intervention or not, or produce drastic differences in intervention size.
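To make that scaling concrete, here is a minimal sketch with made-up numbers (the `expected_value` helper and all figures are purely illustrative, not anyone's actual model):

```python
# Toy illustration: the expected value of a fixed-cost intervention scales
# roughly linearly with the probability that powerful AGI arrives while the
# result can still be applied. Numbers are made up for illustration only.

def expected_value(p_agi_in_window: float, value_if_applied: float) -> float:
    """Payoff if the result gets applied, weighted by the chance AGI arrives
    within the window where the result is still relevant."""
    return p_agi_in_window * value_if_applied

value_if_applied = 100.0  # arbitrary units of impact

# Doubling the probability of AGI in the next 10 years doubles the expected
# value, justifying spending up to twice as much for the same result.
print(expected_value(0.1, value_if_applied))  # 10.0
print(expected_value(0.2, value_if_applied))  # 20.0
```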
Could you give an example or two? I tend to think of “~all of EA funds”-level interventions as more like timeline-shifting interventions than things that would be premised on a given timeline (though there is a fine line between the two), and am skeptical of most that I can think of, but I agree that if such things exist it would count against what I’m saying.
The funding scale of AI labs/research, AI chip production, and US political spending could absorb billions per year, tens of billions or more for the first two. Philanthropic funding of a preferred AI lab at the cutting edge as model sizes inflate could take all EA funds and more on its own.
There are also many expensive biosecurity interventions that are being compared against an AI intervention benchmark. Things like developing PPE, better sequencing/detection, and countermeasures funded philanthropically rather than hoping to leverage cheaper government funding.
Thanks for elaborating—I haven’t thought much about the bio comparison and political spending things, but on funding a preferred lab/compute stuff, I agree that could be more sensitive to timelines than the AI policy things I mentioned.
FWIW I don’t think it’s as sensitive to timelines as it may first appear: doing something like that could still make sense even with longer timelines, given the potential value in shaping norms, policies, and public attitudes on AI, particularly if one expects sub-AGI progress to help replenish EA coffers. And if such an idea were misguided, I think it’d probably be for non-timeline-related reasons, like accelerating competition or speeding things up too much even for a favored lab to handle.
But if I were rewriting I’d probably mention it as a prominent counterexample justifying some further work along with some of the alignment agenda stuff mentioned below.
Oh, one more thing: AI timelines put a discount on other interventions. Developing a technology that will take 30 years to have its effect is less than half as important if your median AGI timeline is 20 years.
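A rough sketch of that discount, assuming (purely for illustration) an exponential distribution over AGI arrival with a median of 20 years:

```python
# Toy illustration of the timeline discount. An exponential arrival
# distribution with a 20-year median is an assumption for illustration only.
import math

median_agi_years = 20.0
rate = math.log(2) / median_agi_years  # exponential rate giving that median

def p_no_agi_by(years: float) -> float:
    """Probability AGI has not arrived within the given number of years."""
    return math.exp(-rate * years)

# A technology whose effect only lands 30 years out is (roughly) only valuable
# in worlds where AGI hasn't arrived by then:
print(p_no_agi_by(30))  # ~0.35, i.e. less than half as important as something
                        # whose effect arrives before the median timeline
```

More generally, for any arrival distribution with a 20-year median, the chance that AGI has not arrived by year 30 is below one half, which is where the “less than half as important” comes from.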
I assume Carl is thinking of something along the lines of “try and buy most new high-end chips”. See e.g. Sam interviewed by Rob.