You have a list of “learn to learn” methods, and then you said “Can we haz nice thingss? Futureburger n real organk lief maybs?” I’m not sure I’m interpreting you correctly, but it sounds like you’re saying something like:
If we biological humans get sufficiently good at learning to learn, using methods such as the Doman method, mnemonics, etc., then perhaps we can keep up with the rate at which ASI learns things, and thus avoid bad outcomes where humans get completely dominated by ASI.
If that’s what you mean, then I disagree: I don’t think our current understanding of the science of learning is remotely near where it would need to be to keep up with ASI, and in fact I would guess that even a perfect-learner human brain would still never keep up with ASI, no matter how good a job it does. Human brains still have physical limits. An ASI need not have such limits, because it can (e.g.) add more transistors to its brain.
What I mean is that it would be super nice to be able to enjoy these human learning techniques, and to have decades of life in which to enjoy them.
But because of the concerns about human political economy in the footnote, which Will MacAskill mentions super obliquely and quietly in his latest post, I don’t think ASI is going to get the chance to kill off the first 4 billion of humanity. ASI might overrun the globe and finish off the next 4 billion, but we’re going to get in the first punch 👊!
Please upload this humble cultivator; this one so totally upvoted your comment! 🙇‍♂️😅
Here’s a more straightforward presentation; hope it helps: https://forum.effectivealtruism.org/posts/PWYQh6uhxKCswrJLy/on-selectorate-theory-and-the-narrowing-window