No, I’m talking somewhat narrowly about intent alignment, i.e. ensuring that our AI system is “trying” to do what we want. We are a relatively focused technical team, and a minority of the organization’s investment in safety and preparedness.
The policy team works on identifying misuses and developing countermeasures, and the applied team thinks about those issues as they arise today.