Creating superintelligent artificial agents without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring into existence such technology, a global moratorium is required (n.b. we already have AGI).
yanni kyriacos
It is time to start war gaming for AGI
NotebookLM is basically magic. Just take whatever Forum post you can’t be bothered reading but know you should and use NotebookLM to convert it into a podcast.
It seems reasonable that in 6–12 months there will be a button inside each Forum post that converts said post into a podcast (i.e. you won’t need to visit NotebookLM to do it).
thanks!
If I found a place that raised cows with predictably net-positive lives, what would be the harm in eating beef from that farm?
I’ve been ostrovegan for ~7 years but am open to changing my mind with new information.
I’m going to leave the most excruciatingly annoying comment, but in doing so, prove my point: it is possible to take positive and negative feedback without it affecting you much, if at all.
If you view yourself as unconditionally lovable (as I do), then one of two things happens:
- Someone gives me a compliment, and I absorb it like “duh, I know I’m extremely lovable.”
- Someone gives me criticism, and I’m like “yeah, that’s a point; also, I’m extremely lovable.”
I think the reason it can feel painful is that what our minds hear during public criticism, from an evo psych perspective, is:
‘this community hates me’ → ‘I might get kicked out of this community’ → ‘when I get kicked out of the community, I die’
And I think self love / esteem is a buttress for fear of death.
The reason this is an annoying comment is that I’m not pointing at a problem the community has (which could also be true!), but suggesting that the information an individual receives passes through an interpretative matrix in their mind before landing as “harmful”, and that need not be the case.
As the Buddhists like to say: the reality we experience is the one our minds construct.
This can be an extremely hard path, but it is transformational if successful.
Shantideva: “You can’t cover the whole world with leather to make it smooth, but you can wear sandals.”
Shunryu Suzuki: “Each of you is perfect the way you are … and you can use a little improvement.”
I expect that over the next couple of years GenAI products will continue to improve, but concern about AI risk won’t grow with them. For some reason we can’t “feel” the improvements. Then (e.g. around 2026) we will have pretty capable digital agents and there will be a surge of concern (similar to when ChatGPT was released, maybe bigger), followed possibly by another perceived (but not real) plateau.
I am 90% sure that most AI Safety talent aren’t thinking hard enough about Neglectedness. The industry is so nascent that you could look at 10 analogous industries, see which processes or institutions are valuable and missing, and build an organisation around the highest-impact one.
The highest impact job ≠ the highest impact opportunity for you!
AI Safety (in the broadest possible sense, i.e. including ethics & bias) is going to be taken very seriously soon by Government decision-makers in many countries. But without high-quality talent staying in their home countries (i.e. not moving to the UK or US), there is a reasonable chance that x/c-risk won’t be considered a problem worth trying to solve. X/c-risk sympathisers need seats at the table. IMO local AIS movement builders should be thinking hard about how to either keep talent local (if they’re experiencing brain drain) OR increase the amount of local talent coming into x/c-risk Safety, such that outgoing talent leakage isn’t a problem.
I find it weird anyone is disagreeing with Peter’s comment. I’d be interested to hear a disagreer explain their position.
Big AIS news imo: “The initial members of the International Network of AI Safety Institutes are Australia, Canada, the European Union, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States.”
H/T @shakeel
We’ve invited the Voice Actors association to our next advocacy event :)
I beta tested a new movement building format last night: online networking. It seems to have legs.
V quick theory of change:
> problem it solves: not enough people in AIS across Australia (especially) and New Zealand are meeting each other (this is bad for the movement and people’s impact).
> we need to brute force serendipity to create collabs.
> this initiative has v low cost
quantitative results:
> I purposefully didn’t market it hard because it was a beta. I literally got more people than I hoped for
> 22 RSVPs and 18 attendees
> this says to me I could easily get 40+
> average score for the feedback question was 7.27, which is very good for a beta test
I used Zoom, which was extremely clunky. These results suggest I should:
> invest in software designed for this use case, not Zoom
> segment by career stream (governance vs technical) and/or experience (beginner vs advanced)
> run it every second month
I have heaps of qualitative feedback from participants but don’t have time to share it here.
Email me if interested: yanni@aisafetyanz.com.au
I’d like to add that I’ve dealt with Dušan on a few occasions and always come away thinking he is very competent.
Is anyone in the AI Governance-Comms space working on what public outreach should look like if lots of jobs start getting automated in < 3 years?
I point to Travel Agents a lot not to pick on them, but because they’re salient and there are lots of them. I think there is a reasonable chance that within 3 years the industry loses 50% of its workers (3 million globally).
People are going to start freaking out about this. Which means we’re in “December 2019” all over again, and we all remember how bad Government Comms were during COVID.
Now is the time to start working on the messaging!
That sucks :(
But hammers do like nails :/
My 2 cents, Holly: while you’re pointing at something specific to PauseAI, this is affecting AI Safety in general.
The majority of people entering the Safety community space in Australia & New Zealand now are NOT coming from EA.
Potentially ~ 75/25!
And honestly, I think this is a good thing.
lol someone has to write a post “How to make an upvoted joke on the forum that isn’t cringe”
Literally never even considered it. Would you mind sharing an example of this being done well?
Good to know:
Can you share more about these efforts?
What makes you think it isn’t neglected? I.e. why does the existence of two efforts mean it isn’t neglected? Part of me wonders whether many national governments should consider such exercises (though I wouldn’t want to take it to the military, only to have them become excited by capabilities).