(I think if we’d gotten to human-level algorithmic efficiency at the Dartmouth conference, that would have been good, as compute build-out is intrinsically slower and more controllable than software progress (until we get nanotech). And if we’d scaled up compute + AI to 10% of the global economy decades ago, and maintained it at that level, that also would have been good, as then the frontier pace would be at the rate of compute-constrained algorithmic progress, rather than the rate we’re getting at the moment from both algorithmic progress AND compute scale-up.)
This is an interesting thought experiment. I think it probably would’ve been bad, because it would’ve initiated an intelligence explosion. Sure, it would’ve started off very slow, but it would’ve gathered steam inexorably, speeding tech development, including compute scaling. And all this before anyone had even considered the alignment problem. After a couple of decades perhaps humanity would already have been gradually disempowered past the point of no return.