(ii) trying to map the Yudkowsky/Bostrom arguments, which were made before the deep learning paradigm, onto actual progress in machine learning, and finding it hard to make them fit well. Going into this properly would require a lot more discussion, though!)
I’d be happy to read more about this point.
If we end up with powerful deep learning models that optimize a given objective extremely well, the main arguments in Superintelligence seem to go through.
(If we end up with powerful deep learning models that do NOT optimize a given objective, it seems to me plausible that x-risks from AI are more severe, rather than less.)
[EDIT: replaced “a specified objective function” with “a given objective”]