That’s why there are also a Discord and regular calls.
I gave the wrong link before. I meant to post this one:
alignment.dev projects · Alignment Ecosystem Development (coda.io)
But instead posted this one:
aisafety.community · Alignment Ecosystem Development (coda.io)
I’ve fixed my previous comment now too.
It’s also a long list, so your point stands. But it’s a list of projects, not a list of groups. I would not send the list of communities to someone new; that’s for when you know a bit more about what you want to do and what community you are looking for.
I would give the list of projects to someone looking to help with community building, but more importantly, I’d point them to the Discord, which I did successfully link above.
https://discord.gg/dRPdsEhYmY
With the list of projects, it looks like most of them are launched or soft-launched, and so don’t require further assistance?
Some yes, but some still need more work. I hear more will be added soon, and others are welcome to add to the list too.
There is also a related monthly call you can join for more details.
Alignment Ecosystem Development
Alignment Ecosystem Dev (google.com)
I’m not saying there is a perfect system for onboarding community builders, just that there is something, and you should know about it. There is always more organising work to do, including meta organising.
Although in the spirit of FMB, it might be a good idea to do some regular movement building before you do meta movement building?
Oh, no. I’m still sharing the wrong link.
This one is the right one: Alignment Ecosystem Development
Thanks! There are a lot of good ideas in here.
Ok, that’s fair.
Although in the spirit of FMB, it might be a good idea to do some regular movement building before you do meta movement building?
Yes, this seems right. I have done a lot of EA movement building and a little AI Safety movement building. I suspect that there is still a lot to be learned from doing more movement building. I plan to do some in a few months, so that should help me revalidate whether my various models/ideas make sense.
@PeterSlattery I want to push back on the idea about “regular” movement building versus “meta”. It sounds like you have a fair amount of experience in movement building. I’m not sure I agree that you went meta here, but even if you had, I am not convinced that would be a bad thing, particularly given the subject matter.
I have only read one of your posts so far, but appreciated it. I think you are wise to try and facilitate the creation of a more cohesive theory of change, especially if inadvertently doing harm is a significant risk.
As someone on the periphery and not working in AI safety but who has tried to understand it a bit, I feel pretty confused as I haven’t encountered much in the way of strategy and corresponding tactics. I imagine this might be quite frustrating and demotivating for those working in the field.
I agree with the anonymous submission that broader perspectives would likely be quite valuable.
Thanks for the thoughts, I really appreciate that you took the time to share them.
I don’t want to discourage you in any way. The best person to solve a problem is often the one who spotted it, so if you see problems and have ideas, you should go for it.
However, a consistent problem is that lots of people don’t know what resources exist. I think a better recommendation than what I wrote before is to find out what already exists, and then decide what to do. Maybe add missing resources, or help with signal boosting, whichever makes sense.
Also, I’m not claiming to be an expert. I think I know about half of what is going on in AI Safety community building.
If you want to get in touch with more community builders, maybe join one of these calls?
Alignment Ecosystem Dev (google.com)
There are also various Slacks and Discords for AIS community building, but no central one. Having a central one would be good. If you want to coordinate this, I’d support that, conditional on you having a plan for avoiding this problem: xkcd: Standards