I think one of my critiques of this is that I’m very sceptical that strong conclusions should be drawn from any individual’s experiences and those of their friends. My current view is that we just have limited evidence for any models of what good and bad community building look like, and that the way to move forward is to try a wide range of stuff and do what seems to be working well.
I think I mostly disagree with your third paragraph. The assumptions I see here are:
Not being very truth-seeking with new people will either select for people who aren’t very critical, or will turn critical people into uncritical ones
This will have second-order effects on the wider community’s epistemics, specifically in the direction of fewer critiques of EA ideas
i.e. it’s not obvious to me that it makes EA community epistemics worse in the sense that EAs make worse decisions as a result.
Maybe these things are true or maybe they aren’t. My experience has not been this (for context, I have been doing uni group community building for 2 years): the sorts of people who get excited about EA ideas and get involved are very smart, curious people who are very good critical thinkers.
But in the spirit of the post, what I’d want to see are some regressions. For instance, I’d want some measure of whether the average new EA at a uni group that doesn’t do community building in a way that strongly promotes epistemic frankness is less critical of ideas in general than an appropriate reference class.
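As a toy illustration of what such a regression could look like (entirely synthetic data; the variable names and the planted effect size are my own hypothetical choices, not measurements from any real group):

```python
# Sketch of the suggested regression: does group style ("epistemically
# frank" vs not) predict a criticality score, controlling for a
# baseline covariate? All data below is simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 500

frank_group = rng.integers(0, 2, n)   # 1 = group promotes epistemic frankness
baseline = rng.normal(0, 1, n)        # e.g. pre-existing scepticism measure
true_effect = 0.3                     # planted group effect for the simulation
criticality = (2.0 + true_effect * frank_group
               + 0.8 * baseline + rng.normal(0, 1, n))

# OLS via least squares: criticality ~ intercept + frank_group + baseline
X = np.column_stack([np.ones(n), frank_group, baseline])
coefs, *_ = np.linalg.lstsq(X, criticality, rcond=None)
print(coefs)  # [intercept, group effect, baseline effect]
```

In a real analysis the hard part is not the regression but measuring “criticality” and choosing the reference class; with only observational data, the group-style coefficient would also be confounded by who selects into which group.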
For example, I currently don’t talk about animal welfare when first talking to people about EA, because it’s reliably the thing that puts the most people off. The first-order effect of this is very clear (more people come to stuff), and my guess is that there are ~no second-order effects. I’d want to see some systematic evidence of bad second-order effects before I give up the clearly positive first-order one.
Compared to you, I put more trust in intuitions as a guide to which kinds of social interactions are appealing vs unappealing to people similar to us. I am less in favour of trying a wide range of stuff and then naively doing what seems to be working well based on simple metrics, and specifically less in favour of handing that strategy to new group organisers, because:
1) I trust their judgement far less than that of more experienced people in EA who have done useful direct work before.
2) You miss out on all the great people you turn off with your activities. You won’t get negative feedback from those people, because they stop showing up and won’t bother.
3) The metrics used to judge what seems to be working well are often the wrong ones (number of people who show up to your events, who do the intro fellowship, who go to EAGx from your university, etc.), and noticing whether you’re attracting people who actually seem likely to have a decent probability of very high impact in the world is hard.
I also don’t think the people you’d get into EA that way would be less critical of ideas in general than the average university student, simply because the ‘average university student’ is a very low bar. I’m not sure what reference class to compare them to (one random suggestion: perhaps the libertarian society at a university?) or what sort of prediction to make, beyond that I don’t think the people I see as having the aptitudes most helpful for having massive amounts of positive impact (being good at thinking for research, being good at starting projects, etc.) would enjoy the kind of environment created at most university groups.
Specifically, one of my main gripes is that university group organisers sometimes seem to just be optimising for getting in more new people, instead of running the kinds of activities that would actually have appealed to them. Under some assumptions this is not a massive problem, because my guess is it happens less at top universities (though this is unclear; friends at some other top universities also complain about the bad vibes created), and the people who would be most interested in effective altruism would get interested anyway, in spite of rather than because of the community building strategy. So the main visible effect is the dilution of the EA community and ideas, which could actually just be not that bad if you don’t particularly care about the “EA” label providing useful information about the people who use it.
Yeah, I’m pretty sceptical of the judgement of experienced community builders on questions like the effect of different strategies on community epistemics. If I frame this as an intervention, “changing community building in x way will improve EA community epistemics”, I have a strong prior that it has no effect, because most interventions people try have no or small effects (see the famous graph of global health interventions).
I think the following are some examples of places where you’d think people would have good intuitions about what works well, but they don’t:
Parenting. We used to just systematically abuse children and think it was good for them (e.g. denying children the ability to see their parents in hospital). There’s a really interesting passage in Invisible China where the authors describe loving grandparents deeply damaging the grandchildren in their care by not giving them enough stimulation as infants.
Education. It’s really, really hard to find education interventions that work in rich countries. It’s also interesting that in the US there’s lots of opposition from teachers to teaching phonics, despite it being one of the few rich-country education interventions with large effect sizes (although it’s hard to judge how much of this opposition is for self-interested reasons).
I think it’s unclear how well you’d expect people to do on the economics examples I gave. I probably would have expected people to do well with cash transfers, since lots of people do in fact receive cash transfers (e.g. pensions, child benefits, inheritance), and to do OK with the minimum wage, since at least some fraction of people have a sense of how the place they work for hires people.
Psychotherapy. We only found treatments that worked for specific mental health conditions (rather than treatments to generally improve people’s lives; I haven’t read anything on that), other than mild-to-moderate depression, once we started doing RCTs. I’m most familiar with OCD treatment specifically, and the current best practice was only developed in the late 60s.
Hmm, would you then also say that we should be sceptical about claims about the overall usefulness of university group organising? If you frame it as an intervention, “run x program (intro fellowship, retreat, etc.) to increase the probability that someone has a large positive impact”, would you also have a strong prior that it has no effect, given that most interventions people try, especially education interventions, which are a lot of what uni groups do, have no or small effects?