Would the AI Governance & Policy group consider hiring someone in AI policy who disagreed with various policies that organizations you’ve funded have promoted?
For instance, multiple organizations you’ve funded have released papers or otherwise advocated for strong restrictions on open source AI—would you consider hiring someone who disagrees substantially with their recommendations, or with many of the specific points they raise?
We fund a lot of groups and individuals and they have a lot of different (and sometimes contradicting) policy opinions, so the short answer is “yes.” In general, I really did mean the “tentative” in my 12 tentative ideas for US AI policy, and the other caveats near the top are also genuine.
That said, we hold some policy intuitions more confidently than others, and if someone disagreed pretty thoroughly with our overall approach and they also weren’t very persuasive that their alternate approach would be better for x-risk reduction, then they might not be a good fit for the team.