Poke holes in my systematizing outreach apologism
Re: Ick at systematizing outreach and human interactions
There’s a paradox I’m confused about: if someone from a group I’m not in (let’s say Christians) came to me on a college campus, smiled at me, asked about my interests and connected all of them to Jesus, and then I found out I’d been logged in a spreadsheet as “potential convert” or something, and then found the questions they’d asked me in a blog post on “Christian evangelist top questions,” I might very well feel extremely weird about that (though I think less so than others would; I kind of respect the hustle).
BUT, when I think about how one gets there, I think, ok:
You’re a Christian; you care about saving other people from hell
You want to talk to people about this and get a community together + persuade people via arguments you think are in fact persuasive
Other people want to do the same, you discuss approaches
Other people have framings and types of questions that seem better to you than yours, so you switch
You’re talking to a lot of people and it’s hard to keep track of what each of them said and what they wanted out of a community or worldview, so you start writing it down
You don’t want people to get approached for the same conversations over and over again, so you share what you’ve written with your fellow Christian evangelists
It doesn’t seem useful to anyone to keep talking to people who don’t seem interested in Christianity, so you let your fellow evangelists know which folks are in that category
People who seem excited about Christianity would probably get a lot out of going to conferences or reading more about it, so you recommend conferences and books and try to make it as easy as possible for them to access those, without having annoying atheists who just want to cause trouble showing up.
This is probably too charitable, there is definitely a thing where you actively want to persuade people because you think your thing is important, and you might lose interest in people who aren’t excited about what you’re excited about, but those things also seem reasonable to me.
A process that seems bad:
1. Want to maximize number of EAs
2. Use framings, arguments and examples that you don’t think hold water but work at getting people to join your group [I don’t think EAs do this, I’m gesturing at the extreme other end]
3. Make people feel weird and bad for disagreeing with you, whether on purpose or not
4. Encourage people to repress their disagreements
5. Get energy and labor from people that they won’t endorse having given in a few years, or if they knew things you knew
3-5 seem like the worst parts here. 1 seems like a reasonable implication of their beliefs, though I do think we all have to cooperate to not destroy the commons.
2 is complicated—when people have different cruxes than you is it dishonest to talk about what should convince them based on their cruxes?
3 and 4 are bad, also hard to avoid.
5 seems really bad, and something I’d like to strongly avoid via things like transparency and some other percolating advice I might end up endorsing for people new to EA, like not letting your feet go faster than your brain, figuring out how much deference you endorse, seeing avoiding resentment as a crucial consideration in your life choices, staying grounded, etc.
I also think the processes can feel pretty similar from the inside (therefore danger alert!) but can also look similar from the outside even when they aren’t. I certainly have systematically underestimated the moral seriousness and earnestness of many an EA.
What’s the difference?
I think people are going to want to say something like “treating people as ends” but I don’t know where that obligation stops. I think I want to say something like “are you acting in the interests of the people you’re talking to”, but that doesn’t work either—I’m not! Being an EA has a decent chance of being less pleasant than the other thing they were doing, and either way it’s not a crux. Ex: I endorse protecting the time and energy of other people by not telling everyone who I would talk to if I had a certain question or needed help in a certain way.
I do think it’s more about whether you’re doing things in such a way that if they knew why you were doing them, they’d mostly not be bothered (ie passing the red face test). But that doesn’t really solve the problem that digital sentience is a weird reason to do a lot of things, and there are lots of things I endorse it being inappropriate to be too explicit about.
[This is separate from the instrumental reasons to act differently because it weirds people out etc.]
.......................................................................................................................
Later musings:
Presumably the strongest argument is that these feelings are tracking a bunch of the bad stuff that’s hard to point at:
people not actually understanding the arguments they’re making
people not having your best interests in mind
people being overconfident their thing is correct
people not being able to address your ideas / cruxes
people having bad epistemics
Of course this is a spectrum, and we shouldn’t put up a public website listing all our beliefs including the most controversial ones or something like that (no one in EA is very close to this extreme). But the implicit jump from “some things shouldn’t be explicit” to “digital sentience might weird some people out so there’s a decent chance we shouldn’t be that explicit about it” seems very non-obvious to me, given how central it is to a lot of longtermists’ worldviews, and honestly I think it wouldn’t turn off many of the most promising people (in the long run; in the short run, it might get an initial “huh??” reaction).
Oh, sorry, those were two different thoughts. “digital sentience is a weird reason to do a lot of things” is one thing: it’s not most people’s crux, so maybe not the first thing you say, but agreed, it should definitely come up. Separately, “there are lots of things I endorse it being inappropriate to be too explicit about”, like the granularity of assessment you might be making of a person at any given time (though possibly more transparency about the fact that you’re being assessed in a bunch of contexts would be very good!).
I think steps 1 and 2 in your chain are also questionable, not just 3-5.
1. Want to maximize number of EAs
Why do we want to maximize the number of EAs? This seems very non-obvious to me. Some people would add much more to the community than others via epistemics, culture, direct talent, etc. If we added enough of certain types of people to the community, especially too quickly, it could easily be net negative.
2. Use framings, arguments and examples that you don’t think hold water but work at getting people to join your group [I don’t think EAs do this, I’m gesturing at the extreme other end]
[...]
2 is complicated—when people have different cruxes than you is it dishonest to talk about what should convince them based on their cruxes?
I think sometimes/often talking about people’s cruxes rather than your own is good and fine. The issue is Goodharting via an optimal message to convert as many people to EA as quickly as possible, rather than messages that will lead to a healthy community over the long run.
I think there are two separate processes going on when you think about systematizing and outreach and one of them is acceptable to systematize and the other is not.
The first process is deciding where to put your energy. This could be deciding whether to set up a booth at a college’s involvement fair, buying ads, door-to-door canvassing, etc. It could also be deciding who to follow up with after these interactions, from the email list collected, to whose door to go to a second time, to which places to spend money on in your second round of ad buys. These things all lend themselves to systematization. They can be data driven, and you can make forecasts on how likely each person is to respond positively and join an event, then revisit those forecasts and update them over time.
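To make that concrete, here’s a minimal sketch of the kind of tracking I have in mind (purely illustrative; the fields, the follow-up rule, and the Brier scoring are my own assumptions, not something anyone here has actually built):

```python
# Illustrative sketch only: track outreach contacts, forecast how likely each is
# to respond positively, revisit those forecasts, and score them once you know.
from dataclasses import dataclass, field


@dataclass
class Contact:
    name: str
    channel: str                      # e.g. "involvement fair", "door-to-door"
    forecasts: list = field(default_factory=list)  # predicted P(positive response)
    outcome: bool | None = None       # filled in once you actually know


def brier_score(contact: Contact) -> float | None:
    """Mean squared error of this contact's forecasts, once the outcome is known."""
    if contact.outcome is None or not contact.forecasts:
        return None
    y = 1.0 if contact.outcome else 0.0
    return sum((p - y) ** 2 for p in contact.forecasts) / len(contact.forecasts)


def prioritize(contacts: list[Contact], budget: int) -> list[Contact]:
    """Follow up with the `budget` open contacts currently rated most likely to respond."""
    open_contacts = [c for c in contacts if c.outcome is None and c.forecasts]
    return sorted(open_contacts, key=lambda c: c.forecasts[-1], reverse=True)[:budget]


if __name__ == "__main__":
    alice = Contact("Alice", "involvement fair", forecasts=[0.7])
    bob = Contact("Bob", "door-to-door", forecasts=[0.2])
    bob.forecasts.append(0.4)  # revisit and update a forecast after a second interaction
    print([c.name for c in prioritize([alice, bob], budget=1)])  # ['Alice']
    alice.outcome = True
    print(brier_score(alice))  # ≈ 0.09
```

The point is just that this layer (who to follow up with, when, with what prior) can be made legible and checkable without systematizing the conversation itself.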
The second process is the actual interaction/conversation with people. I think this should not be systematized and should be as authentic as possible. Some of this is a focus on treating people as individuals. Even if there are certain techniques/arguments/framings that you find work better than others, I’d expect significant variation in which ones work better for which people. A skilled recruiter would be able to figure out what the person they are talking to cares about and focus on that more, but I think this is just good social skills. They shouldn’t be focusing on optimizing for recruitment. They should try to be a likeable person that others will want to be around, and that goes a long way toward recruitment in and of itself.
I see what you’re pointing at, I think, but I don’t know that this resolves all my edge cases. For instance, where does “I know this person is especially interested in animal welfare, so talk about that” fall?
I separately don’t want to optimize for recruitment on the metric of number of people, because of my model of what good additions to the community look like (e.g. I want especially thoughtful people who have a good sense of the relevant ideas and arguments, what they buy, and what their uncertainties are) - maybe your approach comes from that? Or are you saying that even if one were trying to maximize numbers, they shouldn’t systematize?
Thanks so much for writing this! I think it could be a top-level post, I’m sure many others would find it very helpful.
My 2 cents:
2 is complicated—when people have different cruxes than you is it dishonest to talk about what should convince them based on their cruxes?
I think it’s definitely bad to “Use framings, arguments and examples that you don’t think hold water but work at getting people to join your group”. If I understand correctly it can cause point 5. Also “getting people to join your group” is rarely an instrumental goal, and “getting people to join your group for the wrong reasons” is probably not that useful in the long term.
Something that I think is very important that seems missing from this is that there’s a significant probability that we’re wrong about important things (i.e. EA as a question). We could be wrong about the impact of bednets, wrong about AI being the most important thing, wrong about population ethics, etc. I think it’s a huge difference from the “cult” mindset.
I think I want to say something like “are you acting in the interests of the people you’re talking to”, but that doesn’t work either—I’m not! Being an EA has a decent chance of being less pleasant than the other thing they were doing, and either way it’s not a crux.
The way I think about this, on first approximation, is that I want people to work on maximising their values (and not their wellbeing). If they think altruism is not important and are solipsistic egoists and only value their own wellbeing, I don’t think EA can help them. If they value the wellbeing of others, then EA can help them achieve their values better.
From my personal perspective this is strongly related to the point on uncertainty: I don’t want to push other people to work on my values because from an outside view I don’t think my values are more important than their values, or more likely to be “correct”.
I don’t know if this makes any sense; I’m really curious to hear your thoughts, as you have certainly thought about this more than I have.
Thanks, Lorenzo!
I think it’s definitely bad to “Use framings, arguments and examples that you don’t think hold water but work at getting people to join your group”. If I understand correctly it can cause point 5. Also “getting people to join your group” is rarely an instrumental goal, and “getting people to join your group for the wrong reasons” is probably not that useful in the long term.
Agree about the “not holding water”, I was trying to say that “addresses cruxes you don’t have” might look similar to this bad thing, but I’m not totally sure that’s true.
I disagree about getting people to join your group—that definitely seems like an instrumental goal, though definitely “get the relevant people to join your group” is more the thing—but different people might have different views on how relevant they need to be, or what their goal with the group is.
Something that I think is very important that seems missing from this is that there’s a significant probability that we’re wrong about important things (i.e. EA as a question).
I kind of agree here; I think there are things in EA I’m not particularly uncertain of, and while I’m open to being shown I’m wrong, I don’t want to pretend more uncertainty than I have.
The way I think about this, on first approximation, is that I want people to work on maximising their values (and not their wellbeing). If they think altruism is not important and are solipsistic egoists and only value their own wellbeing, I don’t think EA can help them. If they value the wellbeing of others, then EA can help them achieve their values better.
I’ve definitely heard that frame, but it honestly doesn’t resonate for me. I think some people are wrong about what values are right and arguing with me sometimes convinces them of that. I’ve definitely had my values changed by argumentation! Or at least values on some level of abstraction—not on the level of solipsism vs altruism, but there are many layers between that and “just an empirical question”.
I don’t want to push other people to work on my values because from an outside view I don’t think my values are more important than their values, or more likely to be “correct”
I incorporate an inside view on my values—if I didn’t think they were right, I’d do something else with my time!