> There’s a flipside to the above, which is that ASI can be expected to excel in situations where we already have extremely accurate predictive theories; the contingencies are already known and incorporated into the theory...
Even Michael Nielsen seems to have a blind spot here, despite all the frankly brilliant and well-reasoned arguments prior.
Why would “an ASI” be limited to only reprocessing existing data?
Humans will, once they have ASI-grade tools, use some of those tools for the kinds of tool-use tasks that manufacture more robots and chips.
This is exponential.
With a realistic pool of billions of specialized robots, it is a straightforward task to design a prompt that calls an ASI instance to analyze existing experiments and rank possible new experiments by a heuristic of predicted knowledge gain per cost, with respect to some end goal (“best n experiments for increasing rat longevity”).
Then loop it: perform the n highest-value experiments across your robotics pool, update your models based on the results, and so on.
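To make the loop concrete, here is a minimal Python sketch. The callables for the ASI scoring call, the robot-pool dispatcher, the model update, and the candidate generator are hypothetical placeholders introduced for illustration, not any real API.

```python
from typing import Any, Callable

def experiment_loop(
    candidates: list[dict],                    # each candidate has at least a "cost" field
    world_model: Any,
    score: Callable[[dict, Any], float],       # hypothetical ASI call: predicted knowledge gain toward the goal
    run_batch: Callable[[list[dict]], list],   # hypothetical dispatcher to the robotics pool
    update: Callable[[Any, list], Any],        # folds experimental results back into the model
    propose: Callable[[Any], list[dict]],      # generates the next candidate set from the updated model
    n: int,
    iterations: int,
) -> Any:
    """Rank candidates by predicted knowledge gain per unit cost,
    run the top n, update the model, and repeat."""
    for _ in range(iterations):
        # Rank candidate experiments by gain-per-cost for the stated end goal.
        ranked = sorted(
            candidates,
            key=lambda e: score(e, world_model) / e["cost"],
            reverse=True,
        )
        # Perform the n highest-value experiments across the robot pool.
        results = run_batch(ranked[:n])
        # Update the model on the new data, then generate fresh candidates.
        world_model = update(world_model, results)
        candidates = propose(world_model)
    return world_model
```

The point of the sketch is only that the loop itself is simple; all of the difficulty lives in the scoring and model-update steps, which is exactly where an ASI would be doing the work.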
Hopefully the “cheap doomsday” routes Michael is concerned about are too expensive in energy to be practical, because if they are not, an experimental loop like the one above could find them.