Natural selection is the ultimate selector. Any meta-system that faces this problem will be continually outcompeted and replaced by meta-systems that either have an architecture that can better mitigate this failure mode, or that hold beliefs that are minimally detrimental to their unfolding when this failure mode occurs (in my evaluation, the first is much more likely than the second).
Indeed, better programs will outcompete worse ones, so I expect AI to improve gradually over time, which is the opposite of foom. I don’t quite understand what you mean by a “meta-system that faces this problem”. Do you mean the problem of having imperfect beliefs, or of not being perfectly rational? That’s literally every system!
In this case, “improve gradually over time” could take place over the course of a few days or even a few hours, so it’s not actually antithetical to foom.
So, for natural selection to function, there has to be a selection pressure towards the outcome in question, and sufficient time for the selection to occur. Can you explain how a flat-earther AI designed to solve mathematical equations would lose that belief in a matter of hours? Where is the selection pressure?