This was posted twice.
I think a key crux here is whether you think AI timelines are short or long. If they’re short, there’s more pressure to focus on immediately applicable work. If they’re long, then there’s more benefit to having philosophers develop ideas which gradually trickle down.
Someone really needs to make Asterisk meetup groups a thing.
Okay, that makes more sense then.
One thing that is very confusing to me here: the experiment comparing entrepreneurs in Charity Entrepreneurship with random folk in Kenya.
It seems pretty obvious to me that the value of treating a charity entrepreneur is at least a hundred or a thousand times that of treating a random person. So I don’t know why you would compare the two: if the treatment works for the entrepreneurs at all, it’s clearly higher impact, since you’re not going to get an effect a hundred or a thousand times greater for the Kenyans.
Ironically, I think one of the best ways to address this is more movement building. Lots of groups provide professional training to their movement builders, and more of this (covering AI and AI safety knowledge) would reduce the chance that someone who both wants to do technical work and is capable of it gets stuck in a community building role.
Did something happen?
I guess the main thing to be aware of is how hiring non-value-aligned people can lead to drift that isn’t significant at first, but becomes significant over time. That said, I also agree that a certain level of professionalism within organisations becomes more important as they scale.
Just wanted to mention: if anyone liked my submissions (3rd prize: An Overview of “Obvious” Approaches to Training Wise AI Advisors and Some Preliminary Notes on the Promise of a Wisdom Explosion), I’ll be running a project related to this work as part of AI Safety Camp. Join me if you want to help innovate a new paradigm in AI safety.
This appears to have been double-posted.
My position (to be articulated in an upcoming sequence) is the exact opposite of this, but fascinating post anyway and congrats on winning a prize!
Would you consider organising an AI Safety mentorship program? Selecting participants and matching them to mentors sounds like it would match up with your skills. It wouldn’t take a huge amount of work, and there’s a decent chance it counterfactually shifts a few people’s careers into AIS and accelerates other people’s careers. PMs open if interested.
Yep, vs. Mailchimp.
What’s the main advantage of Substack?
I agree. I would love to see someone invest the time in writing up a full post arguing for this. I think it would be high EV if well done.
a) I agree that it would be better if the names were reversed; however, I also agree that it’s locked in now.
b) “AIM should be the face of EA and should be feeding in A LOT more to general outreach efforts”—They’re an excellent org, but I disagree. I tried writing up an explanation of why, but I struggled to produce something clear.
Very exciting! I would love to see folk create versions for other cause areas as well.
There is a world that needs to be saved. Saving the world is a team sport. All we can do is contribute our piece of the puzzle, whatever that may be and no matter how small, and trust in our companions to handle the rest. There is honor in that, no matter how things turn out in the end.
EA or LW. Just less dependent on a single editor adding/approving changes.
Must have been a bug showing it twice then.