I’m glad that Australia has signed this statement.
It’s worth noting that until quite recently, the idea of catastrophic misuse or misalignment risks from AI was dismissed or mocked in Australian policy discourse. Australia’s delegate, Ed Husic, the Minister for Industry, Science, and Resources, wrote an opinion piece in a national newspaper in June 2023 that dismissed concerns about catastrophic risk.
In an August 2023 public Town Hall discussion that formed part of the Australian Government’s consultation on ‘Safe and Responsible AI’, a senior advisor in Husic’s department said that trying to regulate risks from advanced AI was like the Wright Brothers trying to plan regulations for a Mars colony, and another key figure dismissed the dual-use risks of AI by likening it to a ‘kitchen knife’ that could be used for good or for harm.
So it was never certain that somewhere like Australia would sign on to a declaration like this, and I’m relieved and happy that we’ve done so.
I’d like to think that the work that’s been happening in the Australian AI Safety community had an impact on Australia’s decision to agree to the declaration, including:
organising Australian experts to call for serious consideration of catastrophic risks from AI and for plans to address those risks, and
arranging more than 70 well-researched community submissions to the ‘Safe and Responsible AI’ consultation that called for better institutions to govern risks and concrete action to address them.
Good Ancestors, a leading long-term-focused policy development and advocacy organisation in Australia, also created a rigorous submission for the process.
The declaration needs to be followed by action, but the combination of this declaration and Australia’s endorsement of the US executive order on AI Safety has made me more hopeful about things going well.
Thanks for this comment. I couldn’t agree more.
As you say, until a few months ago the overwhelming perspective in Australia favoured acceleration for economic gain, with only modest interest in near-term ethical concerns. It’s probably only a marginal overstatement to say that the political environment was actively hostile to AI safety considerations.
There was a real risk that Australia was going to ‘drag the chain’ on any international AI safety work. Anyone who has been involved in international negotiations will know how easily one or two countries can damage consensus and slow progress.
It’s obviously hard to assess the counterfactual, but it seems clear that the work in the community you refer to has helped normalise AI safety considerations in the minds of senior decision-makers, and so there’s a real possibility that that work shaped how Australia acted at the Summit.