Thanks for the great post, Matthew. I broadly agree.
If we struggle to forecast impacts over mere decades in a data-rich field, then claiming to know what effects a policy will have over billions of years is simply not credible.
I very much agree. I also think what ultimately matters for the uncertainty at a given time in the future is not the time from now until then, but the amount of change from now until then. As a first approximation, I would say the horizon of predictability is inversely proportional to the annual growth rate of gross world product (GWP). If this becomes 10 times as fast, as some predict, I would expect the horizon of predictability (regarding a given topic) to shorten, for instance, from a few decades to a few years.
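As a rough sketch of that proportionality (the baseline growth rate and horizon below are made-up numbers, just to make the arithmetic concrete):

```python
# Toy illustration of the claim that the horizon of predictability H
# scales inversely with the annual GWP growth rate g:
#   H(g) = H_baseline * (g_baseline / g)
# The baseline values are hypothetical, chosen only for illustration.

def predictability_horizon(growth_rate, baseline_growth=0.03, baseline_horizon_years=30):
    """Horizon of predictability in years, assuming H is inversely proportional to g."""
    return baseline_horizon_years * baseline_growth / growth_rate

print(predictability_horizon(0.03))  # baseline growth: ~30 years (a few decades)
print(predictability_horizon(0.30))  # 10x faster growth: ~3 years
```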
To demonstrate that delaying AI would have predictable and meaningful consequences on an astronomical scale, you would need to show that those consequences will not simply wash out and become irrelevant over the long run.
Right. I would just say “after significant change (regardless of when it happens)” instead of “over the long run”, in light of my point above.