I would broadly agree: this is an important post, and I agree with most of the suggested ways to prepare. I also think we are not there yet for large-scale AI policy/strategy.
There are a few things I would highlight as additions.
1) We need to cultivate the skill of disentanglement. Different people may be differently suited to it, but like all skills it improves with practice and with people to practice alongside. LessWrong is trying to position itself as that kind of place, and it is having a small resurgence with the new website www.lesserwrong.com. For example, there has been a lot of interesting discussion there on the problem of Goodhart's law, which we will need to at least partially solve if we are to get AI safety groups that actually do AI safety research rather than just optimising some research-output metric to get funding (a toy sketch of the problem follows below).
I am not sure whether LessWrong is the right place, but we do need places for disentanglers to grow.
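To make the Goodhart worry concrete, here is a toy sketch, entirely my own and purely illustrative (the "publication count" proxy and all the numbers are assumptions): the harder a funder selects on a noisy metric, the more that metric overstates the thing actually cared about.

```python
# Toy regressional-Goodhart sketch (illustrative assumptions throughout):
# a funder ranks research groups by a noisy proxy (publication count)
# that only partly tracks the true objective (safety progress).
import random

random.seed(0)

def make_group():
    true_progress = random.gauss(0, 1)  # actual safety progress
    gaming = random.gauss(0, 1)         # effort that inflates the metric only
    proxy = true_progress + gaming      # the publication-count proxy
    return true_progress, proxy

groups = [make_group() for _ in range(100_000)]

# Select ever more aggressively on the proxy and watch the gap persist.
for top_fraction in (0.5, 0.1, 0.01):
    k = int(len(groups) * top_fraction)
    selected = sorted(groups, key=lambda g: g[1], reverse=True)[:k]
    avg_true = sum(g[0] for g in selected) / k
    avg_proxy = sum(g[1] for g in selected) / k
    print(f"top {top_fraction:.0%}: avg proxy = {avg_proxy:.2f}, avg true = {avg_true:.2f}")
```

Under these assumptions the selected groups' true progress is only about half what the proxy suggests, and tightening the selection does not close the gap; that gap is the thing an AI safety funding ecosystem would need to manage.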
2) I would also like to highlight that we don't understand intelligence, and that many people (psychologists, for example) have been studying it for a long time; I don't think we do enough to bring them into discussions of artificial versions of the thing they have studied.
Much of the work on the policy side of AI safety models an AI as a utility-maximising agent in the economic style. I am pretty skeptical that this is a good model of humans or of the AIs we will create. Figuring out what better models might be is at the top of my personal priority list.
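For concreteness, the economic-style model I mean is roughly the following few lines (a minimal sketch; the actions, outcomes, and utility numbers are all invented):

```python
# Minimal sketch of the economic-style agent model (invented example):
# a fixed utility function over outcomes, plus expected-utility maximisation.

# Each action is a lottery: a list of (probability, outcome) pairs.
lotteries = {
    "cautious": [(1.0, "small_gain")],
    "risky":    [(0.5, "big_gain"), (0.5, "loss")],
}

utility = {"small_gain": 1.0, "big_gain": 3.0, "loss": -2.0}

def expected_utility(action):
    return sum(p * utility[outcome] for p, outcome in lotteries[action])

# The agent simply picks the action with the highest expected utility.
best = max(lotteries, key=expected_utility)
print(best, {a: expected_utility(a) for a in lotteries})
```

The entire model fits in those few lines; my skepticism is that neither humans nor the AI systems we will actually build are well described by a fixed utility function plus expected-value maximisation.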
Edited to add 3) A sensible policy would be to fund a forecasting competition, in the style of the superforecasting tournaments, aimed at AI and related technologies. This should give some idea of how accurate people's views on technology development and forecasting actually are.
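As a sketch of how entries to such a competition could be scored: the superforecasting tournaments used the Brier score, the mean squared error of probability forecasts against 0/1 outcomes (the forecasts and questions below are invented):

```python
# Brier score: mean squared error of probability forecasts vs 0/1 outcomes.
# Lower is better; a constant 50% "no information" entry scores 0.25.

def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# e.g. three invented yes/no questions about AI milestones (1 = happened)
forecaster_a = [0.9, 0.2, 0.7]   # a reasonably calibrated entrant
forecaster_b = [0.5, 0.5, 0.5]   # a maximally uninformative entrant
outcomes     = [1,   0,   1]

print(brier_score(forecaster_a, outcomes))  # ~0.047
print(brier_score(forecaster_b, outcomes))  # 0.25
```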
I would caution that we are also in the space of wicked problems, so there may never be complete certainty about the way we should move.