There is radical uncertainty about the technological paths opened by AI, whether those paths end in AGI, what preferences an AGI would have at the beginning, and how those preferences would evolve. Any mathematical modelling at this stage would be a pure “pretense of knowledge” — an exercise even more sterile than the numbers war over whether the probability of AI doom is 1%, 10%, or 99%.
It is time to explore the technology and to make researchers sensitive to its risks. In fact, I think AI safety does not yet exist as an independent field of knowledge, and the mathematization of (almost) nothing is even worse than nothing.
While I don’t have a very high opinion of AI risk research, premature mathematization is the last thing it needs.