This seems helpful, though I’d guess another team that’s in more frequent contact with AI safety orgs could do this for significantly lower cost, since they’ll be starting off with more of the needed info and contacts.
Agreed! Other groups will be better placed. But I’m not categorically ruling this out: if nobody else appears to be on track for doing this when we’re next in prioritization mode, we might revisit this issue and see whether it makes sense to prioritize it anyway.