Just a comment on growth functions: I think a common prior here is that once we switch to computer consciousnesses, progress will track Moore's law, which is exponential with a doubling time of roughly 18 months (Ray Kurzweil argues it is actually slightly superexponential, with slow exponential growth in the exponent itself). Hanson sees the transition producing a much shorter doubling time, something around one month. Others have noted that if the computer consciousnesses are the ones making the progress, and they are themselves speeding up with Moore's law, you actually get a hyperbolic shape that goes to infinity in finite time (around three years); the growth rate is then proportional to the square of the technology level, as sketched below. Then you get to recursive self-improvement of AI, which could have a doubling time of days or weeks; I think this is roughly the Yudkowsky position (though he does recognize that progress could get harder). This scenario strikes me as the most difficult to manage.

Going in the other direction from the Moore's-law prior, many economists see continued exponential growth with a doubling time of decades. Then we have historical economists who think the economic growth rate will go back to zero. Next come the resource (or climate) doomsters, who expect slow negative economic growth. Further down, you have faster catastrophes, which we might recover from. Finally, you have sudden catastrophes with no recovery. Quite the diversity of opinion: it would be an interesting project (or paper?) to try to plot this all out.
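To make the hyperbolic case concrete: if the rate of progress is proportional to the technology level times the speed of the researchers, and the researchers' speed itself tracks the technology level, the growth law is quadratic in the level. A minimal version of that derivation, assuming a single proportionality constant k and initial level x_0 (both illustrative):

```latex
\frac{dx}{dt} = k x^2
\quad\Longrightarrow\quad
x(t) = \frac{x_0}{1 - k x_0 t}
```

This diverges at the finite time t* = 1/(k x_0); choosing k so that t* is about three years reproduces the figure quoted above.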
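And since the comment ends by suggesting someone plot this out, here is a minimal sketch of what such a plot might look like, using numpy and matplotlib. The 18-month and 1-month doubling times and the 3-year singularity come from the comment; the 20-year "decades" doubling, the 50-year halving time for slow decline, and the year-10 collapse are arbitrary placeholders, not anyone's actual forecast.

```python
import numpy as np
import matplotlib.pyplot as plt

# Time axis in years from the hypothetical transition.
years = np.linspace(0.0, 30.0, 1000)

def exponential(t, doubling_time):
    """Exponential growth x(t) = 2**(t / T); a negative T gives decay."""
    return 2.0 ** (t / doubling_time)

def hyperbolic(t, t_singularity):
    """Solution of dx/dt = k*x**2 with x(0) = 1, diverging at t_singularity."""
    denom = 1.0 - t / t_singularity
    x = np.full_like(t, np.nan)            # undefined past the singularity
    x[denom > 0] = 1.0 / denom[denom > 0]
    return x

plt.figure(figsize=(8, 5))
plt.plot(years, exponential(years, 1.5), label="Moore's law (18-month doubling)")
plt.plot(years, exponential(years, 1.0 / 12.0), label="Hanson (~1-month doubling)")
plt.plot(years, hyperbolic(years, 3.0), label="hyperbolic (singularity at ~3 yr)")
plt.plot(years, exponential(years, 20.0), label="economists (doubling in decades)")
plt.plot(years, np.ones_like(years), label="historical economists (zero growth)")
plt.plot(years, exponential(years, -50.0), label="doomsters (slow decline)")
# A sudden unrecoverable catastrophe: ordinary growth, then collapse
# (the year-10 timing and the post-collapse level are arbitrary).
plt.plot(years, np.where(years < 10.0, exponential(years, 20.0), 1e-3),
         label="sudden catastrophe, no recovery")
plt.yscale("log")
plt.ylim(1e-4, 1e9)
plt.xlabel("years from transition")
plt.ylabel("output relative to today (log scale)")
plt.legend(fontsize=8)
plt.tight_layout()
plt.show()
```

On a log output axis the exponential cases become straight lines with different slopes, which makes the hyperbolic curve's finite-time divergence and the catastrophe scenarios' drops easy to tell apart by eye.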