Executive summary: A survey of Australians found high levels of concern about risks from AI, especially catastrophic risks, and strong support for government action to regulate AI and prevent dangerous outcomes.
Key points:
Australians are most concerned about AI systems acting in unsafe, untrustworthy ways not aligned with human values. Other priority risks include job loss, cyber attacks, autonomous weapons, and infrastructure failures.
Australians are skeptical of AI development overall, with opinions divided on whether it will be net positive or negative.
Preventing dangerous and catastrophic outcomes from AI is seen as the #1 priority for Australian government action on AI. Other priorities include mandatory safety audits, corporate liability for harms, and preventing human extinction.
90% support a national government body to regulate AI, and 80% think Australia should lead international AI governance.
AI is seen as a major existential risk, judged as the 3rd most likely cause of human extinction after nuclear war and climate change. 1 in 3 think AI-caused extinction is at least moderately likely in the next 50 years.
The findings suggest the Australian government should broaden its AI risk considerations, establish a national AI regulator, require safety audits and corporate liability, and prioritize preventing catastrophic risks from frontier AI systems.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.