Creating superintelligent artificial agents without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring such technology into existence, a global moratorium is required (n.b. we already have AGI).
yanni kyriacos
Hypothesis: The Naturalistic Fallacy has leapt from Animal Welfare into AI Capability Assessment
The naturalistic fallacy, IMO, influences the way many people evaluate artificial intelligence capabilities, causing systematic underestimation of technological progress. In animal welfare discussions, this fallacy manifests when people justify consumption practices by arguing “humans have always eaten animals” or “it’s natural to eat meat,” improperly deriving an ethical “ought” from a historical “is.”
Similarly, in AI capability assessment, this fallacy operates through several key mechanisms:
Historical cognitive dominance bias: Assuming humans must remain the dominant cognitive species simply because we’ve always occupied that position throughout evolutionary history.
Biological exceptionalism: Believing intelligence must be biological in nature because it has only emerged through natural evolution previously.
Anthropomorphic benchmarking: Judging AI capabilities exclusively against human-centric metrics while dismissing alternative forms of intelligence that may surpass humans in different domains.
Status quo preservation: Psychologically resisting evidence of AI advancement because it threatens humanity’s position as the most intelligent entities on Earth.
IMO this manifestation of the naturalistic fallacy blinds observers to exponential progress in AI capabilities by conflating what has been “natural” with what should continue to be, making the objective assessment of technological advancement particularly challenging.
*I wrote this up as dot points and Claude built it out for me. I don’t even know why I added this point; it just feels honest to do so.
You don’t need EAs, Greg—you’ve got the general public!
If you could set a hiring manager a work task for an hour or two, what would you ask them to do? In this situation you’re applying for a job with them.
ooft, good point.
If antinatal advocacy were effective, wouldn’t it make sense to pursue it on animal welfare grounds? Aren’t most new humans extremely net negative?
I have a 3YO so hold fire!
Most new humans will likely consume hundreds (thousands?) of factory farmed animals over their lifetime, creating a substantial negative impact that might outweigh the positive contributions of that human life.
Probably of far less consequence, the environmental footprint of each new human also indirectly harms wild animals through habitat destruction, pollution, and climate change (TBH I am being very speculative on this point).
Mange is spreading in wombats in Australia. I saw a severely debilitated (dying) wombat on my parents’ farm. WIRES animal rescue couldn’t help, so I was left wondering whether to kill it or not. I didn’t, because I worried about making the suffering worse. Kind of wish I owned a gun in that moment.
AI Safety Monthly Meetup—Brief Impact Analysis
For the past 8 months, we’ve (AIS ANZ) been running consistent community meetups across 5 cities (Sydney, Melbourne, Brisbane, Wellington and Canberra). Each meetup averages about 10 attendees, with roughly a 50% new-participant rate, driven primarily through LinkedIn and email outreach. I estimate we’re driving unique AI Safety related connections for around $6 each.
Volunteer Meetup Coordinators organise the bookings, pay for the Food & Beverage (I reimburse them after the fact) and greet attendees. This initiative would literally be impossible without them.
Key Metrics:
Total Unique New Members: 200
5 cities × 5 new people per month × 8 months
Consistent 50% new attendance rate maintained
Network Growth: 600 new connections
Each new member makes 3 new connections
Only counting initial meetup connections, actual number likely higher
Cost Analysis:
Events: $3,000 (40 meetups × $75 Food & Beverage per meetup)
Marketing: $600
Total Cost: $3,600
Cost Efficiency: $6 per new connection ($3,600/600)
ROI: We’re creating unique AI Safety related connections at $6 per connection, with additional network effects as members continue to attend and connect beyond their initial meetup.
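For anyone who wants to sanity-check the numbers, here’s a minimal sketch (in Python) that just reproduces the back-of-envelope arithmetic above; the per-city and per-member figures are the estimates stated in the post, not measured data.

```python
# Rough cost-per-connection estimate, using the figures quoted above.
cities = 5
months = 8
new_members_per_city_per_month = 5   # estimated
connections_per_new_member = 3       # only counting initial meetup connections
fnb_per_meetup = 75                  # food & beverage cost per meetup
marketing_total = 600

meetups = cities * months                                        # 40
new_members = cities * new_members_per_city_per_month * months   # 200
connections = new_members * connections_per_new_member           # 600
total_cost = meetups * fnb_per_meetup + marketing_total          # 3,600

print(f"${total_cost / connections:.2f} per new connection")     # $6.00
```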
Great image selection!
Thanks for asking! So you’re saying I can use the bot to summarise any post just by tagging it in the comments?
I didn’t want to read all of @LintzA’s post “The Game Board has been Flipped” and all 43+ comments, so I copy/pasted the entire webpage into Claude with the following prompt: “Please give me a summary of the author’s argument (dot points, explained simply) and then give me a summary of the kinds of support and push back they got (dot points, explained simply, thematised, giving me a sense of the concentration/popularity of themes in the push back)”
Below is the result (the Forum team might want to consider how posts with large numbers of comments can be read quickly):
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Main Arguments:
Recent developments require a complete rethink of AI safety strategy, particularly:
AI timelines are getting much shorter (leaders like Sam Altman expecting AGI within 3 years)
Trump’s likely presidency changes the political landscape for regulation
New technical developments (like Deepseek and inference scaling) mean capabilities are advancing faster than expected
China is closer to US capabilities than previously thought
AI labs are becoming more secretive about their work
Key implications according to the author:
Safety strategies that take several years may be too slow to matter
Need to completely rethink how to communicate AI risk to appeal to conservative audiences
Working inside AI labs may be more important as capabilities become more hidden
The US has less of an advantage over China than previously thought
International agreements may be more important than previously believed
Common Themes in Response (ordered by apparent prominence in comments):
Strong Agreement/Supporting Points:
Many commenters appreciated the comprehensive overview of recent developments
Several agreed with the need to rethink strategies given shorter timelines
Major Points of Disagreement:
Working at AI Labs
Multiple prominent commenters (including Buck and Holly Elmore) pushed back strongly against the author’s suggestion that working at AI labs is increasingly important
They argued that lab workers have limited influence on safety and risk being “captured” by lab interests
Some suggested external pressure might be more effective
Strategy and Movement Focus:
Several commenters argued for more emphasis on trying to pause AI development
Some questioned whether shifting focus away from EU/UK engagement was wise
Discussion about whether mass movement building should be prioritized
Technical/Factual Corrections:
Some corrections on specific claims about timeline estimates
Discussion about terminology (e.g., “open source” vs “open weights”)
Other Notable Points:
Questions about the US vs China framing
Debate about whether compute advantages still matter given recent developments
Discussion about the value of different political strategies (bipartisan vs partisan approaches)
Overall Tone of Reception: The piece appears to have been well-received as a useful overview of recent developments, but with significant pushback on specific strategic recommendations, particularly around working at AI labs and political strategy.
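If you’d rather script this workflow than copy/paste into the Claude web UI, a minimal sketch along these lines should work (assuming the anthropic Python SDK and an API key; the model name and file path are placeholder assumptions, not part of the original workflow):

```python
import anthropic

PROMPT = (
    "Please give me a summary of the author's argument (dot points, explained simply) "
    "and then give me a summary of the kinds of support and push back they got "
    "(dot points, explained simply, thematised, giving me a sense of the "
    "concentration/popularity of themes in the push back)"
)

# Paste the post plus comments into this file first (placeholder path).
with open("post_and_comments.txt") as f:
    page_text = f.read()

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model alias; swap for whichever you use
    max_tokens=2000,
    messages=[{"role": "user", "content": page_text + "\n\n" + PROMPT}],
)
print(message.content[0].text)
```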
One axis where Capabilities and Safety people pull apart the most, with high consequences, is “asking for forgiveness instead of permission.”
1) Safety people need to get out there and start making stuff without their high-prestige ally nodding first
2) Capabilities people need to consider more seriously that they’re building something many people simply do not want
If they’re from ANZ (or coming to ANZ) I’m happy to chat with them :)
Larry Ellison, who will invest tens of billions in Stargate, said uberveillance via AGI will be great because then police and the populace would always have to be on their best behaviour. It is best to assume the people pushing 8 billion of us into the singularity have psychopathy (or similar disorders). This matters because we need to know who we’re going up against: there is no rationalising with these people. They aren’t counting the QALYs!
Footage of Larry’s point of view starts around the 12:00 mark in Matt Wolf’s video.
In the last 48 hours Dario said 3-5 years until AGI!
You should consider posting this to LessWrong :)
If I wasn’t working on AI Safety I’d work on near term (< 5 years) animal welfare interventions.
Fwiw www.aisafetyanz.com.au was a pretty easy setup using Wix. Maybe 10 hours of work (initially).
AI Safety has less money, talent, political capital, tech and time. We have only one distinct advantage: support from the general public. We need to start working that advantage immediately.
This comment is in no man’s land: not funny enough to be a good joke, not relevant enough to add value.
Consider asking an LLM for feedback before posting. Unless the goal is to troll?
I find it slightly concerning that many EAs come from privileged backgrounds, and the default community building strategy for acquiring new members is to target people from … privileged backgrounds.
Whenever you’re a Hammer and the solution you’ve arrived at is to look for Nails, I think an extra layer of scepticism should be applied.