I think I broadly agree that takeoff is likely to be slow and that there is no slam-dunk argument for trying to make safe superintelligent agents.
However, I think there is room for all sorts of work: anything that can reduce the uncertainty about where AGI is going.
I think AI, as it currently stands, is on slightly the wrong track. If we get on the right track, we will get somewhere a lot more quickly than the decades referenced above.
Computers as they stand are designed with the idea that a human looks after them and understands their inner workings, at least somewhat. Animals, from the lowly nematode to humans, carry no such assumption. Current deep learning assumes a human will define the input and output spaces and assign resources to the learning process.
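To make that concrete, here is a minimal sketch (assuming PyTorch; the dimensions, epoch count and learning rate are illustrative values I have made up) of how the human, not the learning system, fixes the input space, the output space and the resource budget before any learning happens:

```python
import torch
import torch.nn as nn

# All of these choices are made by a human, outside the learning process.
INPUT_DIM = 784      # human-chosen input space (e.g. 28x28 images, flattened)
OUTPUT_DIM = 10      # human-chosen output space (e.g. 10 class labels)
EPOCHS = 5           # human-assigned resource budget for training
LEARNING_RATE = 1e-3

model = nn.Sequential(
    nn.Linear(INPUT_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, OUTPUT_DIM),
)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
loss_fn = nn.CrossEntropyLoss()

# Dummy data standing in for a human-curated dataset.
inputs = torch.randn(64, INPUT_DIM)
targets = torch.randint(0, OUTPUT_DIM, (64,))

for _ in range(EPOCHS):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
```

Nothing in that loop lets the system renegotiate those choices for itself; that is the sort of administration I mean below.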
If we could offload the administration of a computer to the computer itself, administration would become cheaper and computer systems could grow more complex. Computer systems are limited in complexity by whatever has to debug them.
I have an idea of what this might look like, and if my current paradigm plays out, I think humanity will get the choice between creating separate agents or creating external lobes of our brains. Most likely humanity will pick creating external lobes. The external lobes may act in a more economical fashion, but I think they still might have the capability of going bad. Minimising the probability of this is very important.
I think there is also probably a network effect: if we could get altruistically minded people to be the first to have external brains, then we might influence the future by preferentially helping other altruists to get external brains. This could create social norms among people with external brains.
So I think technical work towards understanding administratively autonomous computers (no matter how intelligent they are) can reduce uncertainty and help us see what choices face us.