This is cool, thanks for posting :) How do you think this generalises to a situation where labor is the key resource rather than money?
I’m a bit more interested in the question ‘how much longtermist labor should be directed towards capacity-building vs. “direct” work (e.g. technical AIS research)?’ than the question ‘how much longtermist money should be directed towards spending now vs. investing to save later?’
I think this is mainly because longtermism, x-risk, and AIS seem to be bumping up against the labor constraint much more than the money constraint. (Or put another way, I think OpenPhil doesn’t pick their savings rate based on their timelines, but based on whether they can find good projects. As individuals, our resource allocation problem is to either try to give OpenPhil marginally better direct projects to fund or marginally better capacity-building projects to fund.)
[Also aware that you were just building this model to test whether the claim about AI timelines affecting the savings rate makes sense, and you weren’t trying to capture labor-related dynamics.]
That’s an interesting question, and I agree with your reasoning on why it’s important. My off-the-cuff thoughts:
Labor tradeoffs don’t work in the same way as capital tradeoffs because there’s no temporal element. With capital, you can spend it now or later, and if you spend later, you get to spend more of it. But there’s no way to save up labor to be used later, except in the sense that you can convert labor into capital and then back into labor (although these conversions might not be efficient, e.g., if you can’t find enough talented people to do the work you want). So the tradeoff with labor is that you have to choose what to prioritize. This question is more about traditional cause prioritization than about giving now vs. later. This is something EAs have already written a lot about, and it’s probably worth more attention overall than the question of giving (money) now vs. later, but I believe the latter question is more neglected and has more low-hanging fruit.
The question of the optimal giving rate might be irrelevant if, say, we’re confident that the optimal rate is somewhere above 1% (though we don’t know where), but it’s impossible to spend more than 1% due to a lack of funding opportunities. But I don’t think we can be that confident that the optimal spending rate is that high. And even if we are, knowing the optimal rate still matters if you expect that we can scale up work capacity in the future.
I’d guess there’s a >50% chance that the optimal spending rate is higher than the rate at which the longtermist community[1] is currently spending, but I also expect the longtermist spending rate to increase a lot in the future due to increasing work capacity plus capital becoming more liquid (according to Ben Todd’s estimate, about half of EA capital is currently too illiquid to spend).
[1] I’m talking about longtermism specifically and not all EA because the optimal spending rate for neartermist causes could be pretty different.
Nice, thanks for these thoughts.

But there’s no way to save up labor to be used later, except in the sense that you can convert labor into capital and then back into labor (although these conversions might not be efficient, e.g., if you can’t find enough talented people to do the work you want). So the tradeoff with labor is that you have to choose what to prioritize. This question is more about traditional cause prioritization than about giving now vs. later.
Ah, sorry, I think I was unclear. I meant ‘capacity-building’ in the narrow sense of ‘getting more people to work on AI’, e.g. by building the EA community, rather than building civilisation’s capacity, e.g. by improving institutional decision-making. Did you think I meant the second one? I think the first one is more analogous to capital, as building the EA community looks a bit more like investing (you use some of the resource to make more later).
I think we are falling for the double illusion of transparency: I misunderstood you, and the thing I thought you were saying was even further off than what you thought I thought you were saying. I wasn’t even thinking about capacity-building labor as analogous to investment. But now I think I see what you’re saying, and the question of laboring on capacity vs. direct value does seem analogous to spending vs. investing money.
At a high level, you can probably model labor in the same way as I describe in OP: you spend some amount of labor on direct research, and the rest on capacity-building efforts that increase the capacity for doing labor in the future. So you can take the model as is and just change some numbers.
Example: If you take the model in OP and assume we currently have an expected (median) 1% of required labor capacity, a rate of return on capacity-building of 20%, and a median AGI date of 2050, then the model recommends exclusively capacity-building until 2050, then spending about 30% of each decade’s labor on direct research.
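As a quick sanity check on why these numbers imply front-loading capacity-building, here’s a minimal sketch (assuming the 20% return compounds annually, starting from 1% of required capacity today):

import math

# Starting at 1% of required labor capacity and compounding at 20%/year,
# how long until capacity reaches 100% of what's required?
current_capacity = 0.01  # assumed fraction of required capacity
growth_rate = 0.20       # assumed annual return on capacity-building
years = math.log(1 / current_capacity) / math.log(1 + growth_rate)
print(round(years, 1))  # ~25.3 years, i.e. roughly 2050 starting now

On these numbers, capacity only catches up to what’s required around the median AGI date, which matches the model’s recommendation to build capacity exclusively until then.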
One complication is that this super-easy model treats labor as something that only exists in the present. But in reality, if you have one laborer, that person can work now and can also continue working for some number of decades. The super-easy model assumes that any labor spent on research immediately disappears, when it would be more accurate to say that research labor earns a 0% return (or let’s say a −3% return, to account for people retiring or quitting) while capacity-building labor earns a 20% return (or whatever the number is).
This complication is kind of hard to wrap my head around, but I think I can model it with a small change to my program, changing the line in run_agi_spending that reads

capital *= (1 - spending_schedule[y]) * (1 + self.investment_return)**10

so that labor spent on research earns a small negative return instead of disappearing entirely.
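Something like this, as a rough sketch (the −3% is the attrition guess from above; I haven’t tested that this exact form is right):

# Split the decade update: capacity-building labor compounds at the
# investment return, while research labor persists at a -3%/year
# return (people retiring or quitting) instead of disappearing.
capital = (
    capital * (1 - spending_schedule[y]) * (1 + self.investment_return)**10
    + capital * spending_schedule[y] * (1 - 0.03)**10
)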
In that case, the model recommends spending 100% on capacity-building for the next three decades, then about 30% per decade on research from 2050 through 2080, and then spending almost entirely on capacity-building for the rest of time.
But I’m not sure I’m modeling this concept correctly.