I don’t have any well-formed opinions about what the post-AGI world will look like, so I don’t think it’s obvious that logarithmic utility of capital is more appropriate than simply trying to maximize the probability of a good outcome. The way you describe it is how my model worked originally, but I changed it because I believe the new model gives a stronger result even if the model is not necessarily more accurate. I wrote in a paragraph buried in Appendix B:
In an earlier draft of this essay, my model did not assign value to any capital left over after AGI emerges. It simply tried to minimize the probability of extinction. This older model came to the same basic conclusion—namely, shorter timelines mean we should spend faster. (The difference was that it spent a much larger percentage of the budget each decade, and under some conditions it would spend 100% of the budget at a certain point.[5]) But I was concerned that the older model trivialized the question by assuming we could not spend our money on anything but AI safety research—obviously if that’s the only thing we can spend money on, then we should spend lots of money on it. The new model allows for spending money on other things but still reaches the same qualitative conclusion, which is a stronger result.
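To make the contrast between the two objectives concrete, here is a minimal toy sketch in Python. The growth rate, the returns-to-spending curve, and the weight on leftover capital are all made-up illustrative assumptions, not the parameters or functional forms from the essay’s model.

```python
# Toy sketch of the two objectives, under assumed functional forms:
# capital grows each decade, a fraction is spent on AI safety, and AGI
# arrives after a known number of decades. None of these numbers come
# from the essay's model.
import math

GROWTH = 1.5  # assumed capital growth factor per decade

def p_good(total_safety_spending):
    # Assumed diminishing-returns curve: P(good outcome) rises with
    # cumulative safety spending from a baseline of 0.5 toward 1.
    return 1 - 0.5 * math.exp(-total_safety_spending)

def run(spend_fraction, decades_until_agi, capital=1.0):
    """Spend a fixed fraction of capital on safety each decade until AGI;
    return (P(good outcome), capital left over when AGI arrives)."""
    total_spent = 0.0
    for _ in range(decades_until_agi):
        spending = spend_fraction * capital
        total_spent += spending
        capital = (capital - spending) * GROWTH
    return p_good(total_spent), capital

def old_objective(p, leftover):
    # Older model: only the probability of a good outcome counts.
    return p

def new_objective(p, leftover):
    # Newer model: leftover capital also has (log) utility;
    # the 0.1 weight is arbitrary.
    return p + 0.1 * math.log(leftover)

# Compare a fast and a slow spending schedule under a short timeline.
for label, frac in [("spend fast", 0.5), ("spend slow", 0.1)]:
    p, leftover = run(frac, decades_until_agi=2)
    print(f"{label}: old={old_objective(p, leftover):.3f} "
          f"new={new_objective(p, leftover):.3f}")
```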
Thanks, I only read through Appendix A.

It seems to me that your concern “that the older model trivialized the question by assuming we could not spend our money on anything but AI safety research” could be addressed by dividing existing longtermist or EA capital into one portion to be spent on AI safety and one portion to be spent on other causes. Each capital stock could then be spent at its own rate, according to the value of available giving opportunities in its respective cause area.
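A minimal sketch of that two-stock setup, assuming an arbitrary 50/50 split, arbitrary per-decade spending rates, and a placeholder growth rate (none of these numbers come from the post):

```python
# Sketch of two earmarked capital stocks spent at independent rates.
# The 50/50 split, the spending rates, and the growth rate are arbitrary
# placeholders, not numbers from the essay.
GROWTH = 1.5  # assumed growth factor per decade

safety_capital, other_capital = 0.5, 0.5  # assumed split of the budget
safety_rate, other_rate = 0.4, 0.05       # assumed per-decade spending rates

for decade in range(3):  # assumed three decades until AGI
    safety_spending = safety_rate * safety_capital
    other_spending = other_rate * other_capital
    safety_capital = (safety_capital - safety_spending) * GROWTH
    other_capital = (other_capital - other_spending) * GROWTH
    print(f"decade {decade}: safety spends {safety_spending:.3f}, "
          f"other causes spend {other_spending:.3f}")
```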
Your model already makes the assumption:
Prior to the emergence of AGI, we don’t want to spend money on anything other than AI safety research.
And then:
The new model allows for spending money on other things [but only after AGI]
It just seems like a weird constraint to say that, with a single stock of capital, you only want to spend it on one cause (AI safety) before some event but are willing to spend it on any cause after that event.
I’m not sure that I can articulate a specific reason this doesn’t make sense right now, but intuitively I think your older model is more reasonable.
The reason I made the model only have one thing to spend on pre-AGI is not because it’s realistic (which it isn’t), but because it makes the model more tractable. I was primarily interested in answering a simple question: do AI timelines affect giving now vs. later?