I’m a little unclear on what you are asking.
How strictly do you mean when you say “provably safe”? That seems like an area where all AI safety researchers are hesitant to say how high they’re aiming.
And by “have it implemented”, do you mean fully develop it on their own, or do you include scenarios where they convey key insights to Google, and thereby cause Google to do something safer?