I’m curating this post.
I wrote a draft for a feature on a Politico piece for the EA newsletter, exploring this same question: are big labs following through on verbal commitments to share their models with external evaluators? Despite taking "several months" to speak with experts, the Politico piece didn't have as much useful information as this blog post. I cut the feature because I couldn't find enough information in the time I had.
I think work like this is really valuable, as it fills a serious gap in our understanding of AI safety. Thanks for writing this, Zach!