Would the AI Governance & Policy group consider hiring someone in AI policy who disagreed with various policies that organizations you’ve funded have promoted?
For instance, multiple organizations you’ve funded have released papers or otherwise advocated for strong restrictions on open source AI—would you consider hiring someone who disagrees substantially with their recommendations, or with many of the specific points they raise?
1a3orn
I think you’ve made a mistake in understanding what Quintin means.
Most of the examples you give of inability to control are of the form “how an AI could escape, given that it wants to escape.”
Quintin’s examples of ease of control, however, are of the form “how easy will it be to get the AI to want what we want it to want.” The arguments he gives are to that effect, and the points you bring up are orthogonal to them.