I don’t want to discourage you in any way. The best person to solve a problem is often the one to spot that problem, so if you see problems and have ideas you should go for it.
However, a consistent problem is that lots of people don’t know what resources exist. I think a better recommendation than what I wrote before is to first find out what already exists, and then decide what to do. Maybe add missing resources, or help with signal boosting, whichever makes sense.
Also, I’m not claiming to be an expert. I think I know about half of what is going on in the AI Safety community building space.
If you want to get in touch with more community builders, maybe join one of these calls?
Alignment Ecosystem Dev (google.com)
There are also various Slacks and Discords for AIS community building, but no central one. Having a central one would be good. If you want to coordinate this, I’d support that, conditional on you having a plan for avoiding this problem: xkcd: Standards