I disagree with the following: “very strong evidence against ‘the world in 100 years will look kind of similar to what it looks like today’.”
Growth is an important kind of change. Arguing against the possibility of some kind of extreme growth makes it more difficult to argue that the future will be very different. Let me frame it this way:
Scenario → Technological “progress” under scenario
1. AI singularity → Extreme progress within this century
2. AI doom → Extreme “progress” within this century
3. Constant growth → Moderate progress within this century, very extreme progress in 8200 years
4. Collapse through climate change, political instability, or war → Technological decline
5. Stagnation/slowdown → Weak progress within this century
Most of the mainstream audience gives credence mainly to scenarios 3, 4, and 5. Scenario 3 is the one with the highest technological progress. The blog post spends most of its effort refuting scenario 3 by explaining how difficult and rare growth and technological change are. This argument makes people give more credence to scenarios 4 and especially 5 rather than 1 and 2, since scenarios 1 and 2 also involve a great deal of technological progress.
For these reasons, I’m more inclined to believe that an introductory blog post should focus on assessing the possibility of scenarios 4 and 5 rather than scenario 3.
Arguing against scenario 3 is still important, since it is decision-relevant to the question of whether philanthropic resources should be spent now or later. But this topic doesn’t make a good intro blog post for AI risk.