I’m not saying this is the only option, but since the 1800s we have let the market choose which ideas thrive: which services and products get rewarded.
The hard problem of forecasting AI’s future is that we have no precedent for it. Once an AGI is released into the world, we cannot foresee where it will lead us. This is a black swan event in the sense Nassim Taleb described: something world-altering, yet we cannot know for now how transformative it will prove to be.
These are hard questions, which is why alignment must be achieved, so that we need not worry about how AGI will act and respond in the real world, or about who controls the governance of its code base and infrastructure.