Thanks for this comment. I couldn’t agree more.
As you say, the overwhelming perspective in Australia up until a few months ago favoured acceleration for economic gain, with only passing interest in near-term ethical concerns. It’s probably only a marginal overstatement to say that the political environment was actively hostile to AI safety considerations.
There was a real risk that Australia was going to ‘drag the chain’ on any international AI safety work. Anyone who has been involved in international negotiations will know how easily one or two countries can damage consensus and slow progress.
It’s obviously hard to calculate the counterfactual, but it seems clear that the work in the community you refer to has helped normalise AI safety considerations in the minds of senior decision-makers, and so there’s a real possibility that that work shaped how Australia acted at the Summit.