It would be good to flag in the main text that the justification for this is in Appendix 2 (initially I thought it was a bare assertion). Also, it is interesting that in @kokotajlod’s scenario the ‘wildly superintelligent’ AI maxes out at a 1 million-fold AI R&D speedup; I commented to them on a draft that this seemed implausibly high to me. I have no particular take on whether 100x is too low or too high as the theoretical max, but it would be interesting to work out why there is this Forethought vs. AI Futures difference.
Interesting. Some thoughts:
First of all, the argument they give in Appendix 2 seems to be more of an argument for a lower bound than for an upper bound / theoretical limit! E.g. they give examples of fruit fly doubling times and so forth as proof of concept that you could double in a day, at least in principle.
Secondly, I’m not sure the thing they mean by the speed of progress is the same as the thing we mean. They are talking about doubling times for effective compute (where 100x would mean effective compute is doubling in a day), whereas we are talking about the ratio between how fast progress goes with AIs helping/driving it and how fast it would go without them. That’s why we call it a “Progress Multiplier.” To put it another way, they are comparing the pace of progress in the future intelligence explosion scenario to the pace of progress today, whereas we are comparing it to a hypothetical future scenario in which the powers that be say “OK, we are banning AIs from assisting with AI research now; the intelligence explosion can continue, but from now on humans have to be doing all the research.”