I’m such a big fan of “outreach is an offer, not persuasion”.
In general, my personal attitude to outreach in student groups is not to ‘get’ the best people via attraction and sales, but to just do something awesome that seems to produce value (e.g. build a research group around a question, organise workshops around a thinking tool, write a talk on a topic you’re confused about and want to discuss), and then the best people will join you on your quest. (Think quests, not sales.)
If your quest involves sales as a side-effect (e.g. you’re running an EAGx) then that’s okay, as long as the core of what you’re doing is trying to solve a real problem and make progress on an open question you have. Run EAGxes around a goal of moving the needle forward on certain questions, on making projects happen, solving some coordination problem in the community, or some other concrete problem-based metric. Not just “get more EAs”.
I think the reason this post (and all other writing on the topic) has had difficulty suggesting particular quests is that they tend to be deeply tied up in someone’s psyche. Nonetheless I think this is what’s necessary.
Hey! Thanks for the comment.
I think it captures a few different notions. I’ll try to spell out a few salient ones:
1) Pushes back against the idea that an outreach talk needs to cover all aspects of EA. e.g. I think some 45-minute intro EA talks end up being really unsatisfactory because they only have time to skim lightly over loads of different concepts and cause areas. Instead I think it could be OK, and even better, to do outreach talks that don’t introduce all of EA but do demonstrate a cool and interesting facet of EA epistemology. e.g. I could imagine a talk on differential vs absolute technological progress as a way to attract new people.
2) Pushes back against running introductory discussion groups. Sometimes it feels like you need to guide someone through the basics, but I’ve found that often you can just lend people books or send them articles and they’ll be able to pick up the same stuff without it taking up your time.
3) Reframes particular community niches, such as a technical AI safety paper reading group, as also a potential entry point into the broader community. e.g. people find out about the AI group because they study computer science and find it interesting, and then get introduced to EA.
I’m still confused: Intuitively, I would understand “Don’t introduce EA” as “Don’t do introductory EA talks”. The “don’t teach” bit also confuses me.
My personal best guess is that EA groups should do regular EA intro talks (maybe 1-2 per year), and should make people curious by touching on some of the core concepts to motivate the audience to read up on these EA ideas on their own. In particular, presenting arguments where relatively uncontroversial assumptions lead to surprising and interesting conclusions (“showing how deep the rabbit hole goes”) often seems to spark such curiosity. My current best guess is that we should aim to “teach” such ideas in “introductory” EA talks, so I’d be interested in whether you disagree with this.
I think that makes sense and I agree with you. We have also run the sort of things you describe in Oxford.
Maybe ‘don’t teach’ can be understood as ‘prefer using resources as a way of conveying ideas, rather than teaching them yourself’.
I agree that we should aim to ‘outreach’ in ‘(on-topic) introductory’ EA talks, and don’t disagree here.
Interesting stuff, thanks guys. I wanted to discuss one point:
From conversations with James, I believe Cambridge has a pretty different model of how they run it: in particular, a much more hands-on approach, which calls for formal commitment from more people, e.g. giving everyone specific roles, which is the “excessive formalist” approach. Are there reasons you guys have access to which favour your model of outreach over theirs? Or, to frame it another way: what’s the best argument in favour of the Cambridge model of giving everyone an explicit role, and why does that not succeed (if it doesn’t)?
For example, is it possible that Cambridge get a significantly higher number of people involved, which then cancels out the effects of immediately high-fidelity models in due course (e.g. suppose lots of people are low-fidelity while at Cambridge, but then a section become more high-fidelity later, and it ends up not making that much difference in the long run)? Or does the Cambridge model use roles as an effective commitment device? Or does one model ensure less movement drift, or less lost value from movement drift? (See here: http://effective-altruism.com/ea/1ne/empirical_data_on_value_drift/?refresh=true) There’s a comment from David Moss there suggesting there’s an “open question” about the value of focussing on more engaged individuals, given the risks of attrition in large movements (assuming the value of the piece, which is subject to lots of methodological caveats).
The questions above might be contradictory: I’m not advocating any of the above, but rather clarifying whether there’s anything missed by your suggestions.
Thanks for the comment JoshP!
I’ve spoken at length with the Cambridge lot about this. I guess the cruxes of my disagreement with their approach are:
1) I think their committee model selects more for willingness to do menial tasks for the prestige of being in the committee, rather than actual enthusiasm for effective altruism. So something like what you described happens where “a section become more high-fidelity later, and it ends up not making that much difference”, as people who aren’t actually interested drop out. But it comes at the cost of more engaged people spending time on management.
2) From my understanding, Cambridge viewed the 1 year roles as a way of being able to ‘lock in’ people to engage with EA for 1 year and create a norm of committee attending events. But my model of someone who ends up being very engaged in EA is that excitement about the content drives most of the motivation, rather than external commitment devices. So I suppose roles only play a limited part in committing people to engage, but come at the cost of people spending X hours on admin when they could have spent X hours on learning more about EA.
It’s worth noting that I think Cambridge have recently been thinking hard about this, and also I expect their models for how their committee provides value to be much more nuanced than I present. Nevertheless, I think (1) and (2) capture useful points of disagreement I’ve had with them in the past.
as people who aren’t actually interested drop out.
This depends on what you mean by ‘drop out’. Only around 10% (~5) of our committee dropped out during last year, although maybe a third chose not to rejoin the committee this year (and about another third are graduating).
From my understanding, Cambridge viewed the 1 year roles as a way of being able to ‘lock in’ people to engage with EA for 1 year and create a norm of committee attending events.
This does not ring especially true to me; see my reply to Josh.
To jump in as the ex-co-president of EA: Cambridge from last year:
I think the differences mostly come in things which were omitted from this post, as opposed to the explicit points made, which I mostly agree with.
There is a fairly wide distinction between the EA community in Cambridge and the EA: Cam committee, and we don’t try to force people from the former into the latter (although we hope for the reverse!).
I largely view a big formal committee (ours was over 40 people last year) as an addition to the attempts to build a community as outlined in this post. A formal committee, in my mind, significantly improves the ability to get stuff done versus the ‘conspirators’ approach.
The getting stuff done can then translate into things such as an increased campus presence, and generally a lot more chances to get people into the first stage of the ‘funnel’. Last year we ran around 8 events a week, with several of them aimed at engaging and onboarding new interested people (those being hosting 1 or 2 speakers a week, running outreach-focused socials, introductory discussion groups and careers workshops). This large organisational capacity also let us run ~4 community-focused events a week.
I think it is mostly these mechanisms that make the large committee helpful, as opposed to most of the committee members becoming ‘core EAs’ (I think the conversion ratio is perhaps 1/5 or 1/10). There is also some sense in which the above allows us to form a campus presence that helps people hear about us, and perhaps makes us more attractive to high-achieving people, although I am pretty uncertain about this.
I think EA: Cam is a significant outlier among EA student groups, and if a group is starting out it probably makes more sense to stick to the kind of advice given in this article. However, I think that in the long term a community plus a big formal committee is probably better than just a community with an informal committee.
Been thinking about EA ideas for the past year now but am new to the forum. This is one of the first posts I’ve read closely, and I just wanted to say I really appreciate these ideas. I will definitely change the way I communicate EA ideas in the future because of this (and will act more as a signpost than a teacher).
Nice and useful post. I’m trying to find its sequel, on ‘projects compatible with these heuristics’. Is it ready? Where do I find it?
Comment mostly copied from Facebook:
I think most will agree that it’s not advisable to simply try to persuade as many people as possible. That said, given the widespread recognition that poor or inept messaging can put people off EA ideas, the question of persuasion doesn’t seem to be one that we can entirely put aside.
A couple of questions (among others) will be relevant to how far we should merely offer and not try to persuade: how many people we think will be initially (genuinely) interested in EA and how many people we think would be potentially (genuinely) interested in EA were it suitably presented.
A very pessimistic view across these questions is that very few people are inclined to be interested in EA initially and very few would be interested after persuasion (e.g. because EA is a weird idea compelling only to a minority who are weird on a number of dimensions, and most people are highly averse to its core demands). On this view, offering and not trying to persuade seems appealing, because few will be interested, persuasion won’t help, and all you can do is hope some of the well-inclined minority will hear your message.
If you think very few will be initially inclined but (relatively) many more would be inclined with suitable persuasion (e.g. because EA ideas are very counter-intuitive, inclined to sound very off-putting, but can be appealing if framed adroitly), then the opposite conclusion follows: it seems like persuasion is high value (indeed a necessity).
Conversely, if you are more optimistic (many people intuitively like EA: it’s just “doing the most good you can do + good evidence!”), then persuasion looks less important (unless you also think that persuasion can bring many additional gains even above the already high baseline of EA-acceptance).
-
Another big distinction, which I assume is perhaps motivating the “offer, don’t persuade” prescription, is whether people think that persuasion tends to influence the quality of those counterfactual recruits negatively, neutrally or positively. The negative view might be motivated by thinking that persuading people (especially via dubious representations of EA) who wouldn’t otherwise have liked EA’s offer will disproportionately bring in people who don’t really accept EA. The neutral view might be motivated by positing that many people are turned off (or attracted to) EA by considerations orthogonal to actual EA content (e.g. nuances of framing, or whether they instinctively, non-rationally like/dislike things EA happens to be associated with (e.g. sci-fi)). The positive view might be motivated by thinking that certain groups are turned off, disproportionately, by unpersuasive messages (e.g. women and minorities do not find EA attractive, but would do with more carefully crafted, symbolically not off-putting outreach), and thinking that getting more of these groups would be epistemically salutary for some reason.
-
Another major consideration would simply be how many EAs we presently have relative to desired numbers. If we think we have plenty (or even more than we can train/onboard), then working to persuade/attract more people seems unappealing; if we highly value having more people, the converse holds. I think it’s very reasonable that we switch our priorities between trying to attract more people and not, depending on present needs. I’m somewhat concerned that perceived present needs get reflected in a kind of ‘folk EA wisdom’, i.e. when we lack(ed) people, the general idea that movement building is many times more effective than most direct work was popularised, whereas now that we have more people (for certain needs), the general idea that ‘quality trumps quantity’ gets popularised. But I’m worried that these very general memes aren’t especially sensitive to actual supply/demand/needs and would be hard/slow to update if needs were different. This also becomes very tricky when different groups have different needs/shortages.
Very helpful post. As someone running a German EA group, I didn’t really find anything that doesn’t apply to us in the same way it did for you.
One interesting thing is your focus on 1-on-1 conversations: we have never attempted something like this, mostly because we thought it would be at least a bit weird for both parties involved. Did you have the same fear and were proven wrong, or is this a problem you run into with some people?
If that helps: EA Berlin has been using 1:1s for a while now, so there doesn’t seem to be a cultural context that would make a difference.
That said, I usually distinguish between 1:1s with people interested in joining the group and 1:1s with existing group members. We’ve done the former and are only starting to do the latter (partly because it seemed like a really good idea after talking to James). Introducing that wasn’t weird at all: when we messaged people saying “we’re trying this new thing that might be good for a bunch of different reasons”, they seemed quite happy about it, perhaps only a bit confused about what was supposed to happen during the 1:1.
I’d also emphasise the active element of reaching out to people who seem particularly interested, instead of just having 1:1s with anyone who approaches you. I like Tobias’s suggestion to approach people based on answers they write in a feedback form, but I’m not sure how much effort it’d take to implement that.
Hi!
I think there are easy ways to make it not weird. Some tips:
1) Email from an official email account, rather than a personal one, if you’ve never met the person before.
2) Mention explicitly that this is ‘something you do’ and that, for newcomers, you’d like to welcome them into the community. This makes it less strange that you’re reaching out to them personally.
3) Mention explicitly that you’ll be talking about EA, and not other stuff.
4) It’s useful to meet people in real life at an event first and say hello and introduce yourself there.
5) Don’t feel like you have an agenda or anything; keep it informal. Treat it as if you were getting to know a friend better and have an enjoyable time.
6) Absolutely don’t pressure people; just reach out and offer to meet up if they’d find it useful.