Can you write about cross-pollination between technical safety and AI governance and policy? In the case of the new governance mechanisms role (zeroing in on proof-of-learning and other monitoring schemes), it seems like bridging or straddling the two teams is important.
Indeed. There aren’t hard boundaries between the various OP teams that work on AI, and people whose reporting line is on one team often do projects for or with a different team, or in another team’s “jurisdiction.” We just try to communicate about it a lot, and our team leads aren’t very possessive about their territory — we just want to get the best stuff done!
I’ll just add that in a lot of cases, I fund technical research that I think is likely to help with policy goals (for example, work in the space of model organisms of misalignment can feed into policy goals).