Currently doing local AI safety movement building in Australia and NZ.
Chris Leong
Your AI timelines would likely be an important factor here.
This is amazing. I expect this to noticeably increase the number of links included in articles.
Does scaling make sense with a principles-first strategy? My intuition is that, with such a strategy, it makes more sense to focus on quality over quantity.
Must have been a bug showing it twice then.
This was posted twice.
Linkpost: “Imagining and building wise machines: The centrality of AI metacognition” by Johnson, Karimi, Bengio, et al.
I think a key crux here is whether you think AI timelines are short or long. If they’re short, there’s more pressure to focus on immediately applicable work. If they’re long, then there’s more benefit to having philosophers develop ideas which gradually trickle down.
Someone really needs to make Asterisk meetup groups a thing.
Okay, that makes more sense then.
One thing that is very confusing to me here: the experiment comparing entrepreneurs in charity entrepreneurship with random folk in Kenya.
It seems pretty obvious to me that the value of treating a charity entrepreneur is at least a hundred or a thousand times greater than that of treating a random person. So I don’t know why you would compare the two, given that if it works for the entrepreneurs at all, it’d clearly be higher impact. Assuming it works for the entrepreneurs, you’re not going to get an effect a hundred or a thousand times greater for the Kenyans.
Ironically, I think one of the best ways to address this is more movement building. Lots of groups provide professional training to their movement builders, and more of this (in terms of AI/AI safety knowledge) would reduce the chance that someone who could do technical work, and wants to, gets stuck in a community-building role.
Did something happen?
I guess the main thing to be aware of is how hiring non-value-aligned people can lead to drift which isn’t significant at first, but becomes significant over time. That said, I also agree that a certain level of professionalism within organisations becomes more important as they scale.
Just wanted to mention that if anyone liked my submissions (3rd prize: An Overview of “Obvious” Approaches to Training Wise AI Advisors and Some Preliminary Notes on the Promise of a Wisdom Explosion),
I’ll be running a project related to this work as part of AI Safety Camp. Join me if you want to help innovate a new paradigm in AI safety.
This appears to have been double-posted.
My position (to be articulated in an upcoming sequence) is the exact opposite of this, but fascinating post anyway and congrats on winning a prize!
Would you consider organising an AI safety mentorship program? Selecting participants and matching them to mentors sounds like it would match up well with your skills. It wouldn’t take a huge amount of work, and there’s a decent chance it would counterfactually shift a few people’s careers into AIS and accelerate other people’s careers. PMs open if interested.
Yep, vs. Mailchimp.
What’s the main advantage of Substack?
Maybe I’m missing something, but I think it’s a negative sign that mirror bacteria seem to have gone pretty much undiscussed within the EA community until now (that said, what really matters is the percentage of biosecurity folk in the community who have heard of this issue).