Hi Kynan, thanks for writing this post.
It is great to see other people looking into more rigorous community building work! I really like the objective and methodology you set out, and do think there are currently huge inefficiencies and losses in how information is transferred between groups.
I think one thing I am worried about with doing this on a large scale is the loss of qualitative nuance behind quantitative data. It seems difficult to develop good models of why things work and what the key factors to consider are without actually visiting groups or taking the time to properly understand the culture and people there. I would guess that processing the raw numbers is useful for rolling out products/outreach methods that are better in expectation than current methods, but I would still expect lots of variance in outcomes without developing a richer model that groups can then adapt.
I am one of the full-time organisers of EA Oxford and am currently looking at doing better coordination and community building research with other full-time organisers in England. I would be keen to chat if you would like to talk more about this!
CEvans
Thanks for your comment. On your first point, I definitely agree that in an ideal world benchmarks for improvement would be useful, but I would be hesitant for a few reasons.
Firstly, you face quite a risk of putting people off a certain career when you don't really have the certainty to give that advice (especially when I am not a specialist in the field), and that could be really damaging and maybe not that useful. Secondly, how good X amount of progress is in Y amount of time is generally really context-specific. For your example, it could depend on pre-existing technical background, the amount of guidance and support you received while learning, etc., and I think this would be hard to quantify in a useful way.
Your second point is a really good one, I think, and something I would like to include. I suppose if I reach the point of creating a more comprehensive collection, it should then be easier to refer between them.
I think this is really cool and a great way of breaking things down—thanks for writing this up!
Do you plan on doing any research into the cruxes of disagreement with ML researchers?
I realise that there is some information on this within the qualitative data you collected (which I will admit to not reading all 60 pages of), but it surprises me that this isn't more of a focus. From my incredibly quick scan of the qualitative data (so apologies for any inaccurate conclusions), it seems that many of the ML researchers were familiar with basic thinking about safety but didn't buy it, for reasons that didn't look fully drawn out.
It seems to me that there is a risky presupposition that the arguments made in the papers you used are correct, and that what matters now is framing. Given the proportion of resources EA stakes on AI safety, it would be worth trying to understand why people (particularly knowledgeable ML researchers) have a different set of priorities to many in EA. It seems suspicious how little intellectual credit ML/AI researchers outside EA are given.
I am curious to hear your thoughts. I really appreciate the research done here and am very much in favour of more rigorous community/field building work like this.