Hi Kynan, thanks for writing this post.
It is great to see other people looking into more rigorous community building work! I really like the objective and methodology you set out, and do think that there are currently huge inefficiencies and loss in how information is currently transferred between groups.
I think one thing I am worried about with doing this on a large scale is the loss of qualitative nuance behind quantitative data. It seems difficult to really develop good models of why things work and what the key factors to consider are without actually visiting groups or taking the time to properly understand the culture and people there. I would guess that processing the raw numbers is useful for rolling out products/outreach methods that are better in expectation than current methods, but I would still expect lots of variance in outcomes without developing a richer model that groups can then adapt.
I am one of the full-time organisers of EA Oxford and am currently looking at doing some better coordination and community building research with other full-time organisers in England. I would be keen to chat if you would like to talk more about this!
Thanks for being in touch (and I enjoyed our conversation).
One thing to note is that some of the trials we are considering could be considered trials in ‘what general paths and approaches to recommend’, rather than narrowly-defined specific scripts.
E.g., “reach out to a broad group of students” vs. “focus on a small number of likely high-potential students.” This could be operationalised, e.g., through which ‘paths to involvement’ to emphasise (fellowship completion vs. attending meetings and events), or through ‘which courses/majors to reach out to’.
However, every university group could still have the flexibility to adopt the recommended guidelines in a manner that aligns with their unique culture and surroundings.
We could then try to focus on some generally agreed ‘aggregate outcome measures’. This could then be considered a test of ‘which recommended general approach works better’ (or, if we can do subgroup analysis, ‘which works better where’).
Agreed on all points. An important consideration is heavily involving group leaders and organisers in this process to preserve the qualitative aspects of ‘what works’ in outreach. Keeping those involved with implementing the methods engaged throughout the research process is vital for ensuring these methods transfer into the real world. Whilst some of the nuances are inevitably going to be lost through large-scale testing, we can counteract this by knowing where to allow room for flexibility and where rigidity is worthwhile.
I’ll be in touch, thanks!