Sure. But Lila complained about small things that are far from universal to effective altruism. The vast majority of people who differ in their opinions on the points described in the OP do not leave EA. As I mentioned in my top level comment, Lila is simply confused about many of the foundational philosophical issues which she thinks pose an obstacle to her being in effective altruism. Some people will always fall through the cracks, and in this case one of them decided to write about it. Don’t over-update based on an example like this.
Note also that someone who engages with EA to the extent of reading one of these books will mostly ignore the short taglines accompanying marketing messages, which seem to be what you’re after. And people who engage with the community will mostly ignore both books and marketing messages when it comes to making an affective judgement.
Texts that appeal to moral obligation (an obligation I share), in particular, signal that the reader needs to find an objective flaw in them before they can reject them. That, I’m afraid, leads people to attack EA for all sorts of made-up or not-actually-evil reasons, which can result in toxoplasma and opposition. If they could just feel free to ignore us without attacking us first, we could avoid that.
And texts that don’t appeal to moral obligation make a weak argument that is simply ignored. That results in apathy and a frivolous approach.
A lot of your objections take the form of likely-sounding counternarratives to my narratives.
Yes, and it’s sufficient. You are proposing a policy which will necessarily hurt short-term movement growth. The argument depends on establishing a narrative to support its value.
We shouldn’t only count those who join the movement for a while and then part ways with it again, but also those who hear about it and ignore it, publish a nonconstructive critique of it, tell friends why EA is bad, etc.
But on my side, we shouldn’t only count those who join the movement and stay; we should also count those who hear about it and are lightly positive about it, share some articles and books with their friends, publish a positive critique about it, start a conversation with their friends about EA, like it on social media, etc.
With small rhetorical tweaks of the type that I’m proposing, we can probably increase the number of those that ignore it solely at the expense of the numbers who would’ve smeared it and not at the expense of the numbers who would’ve joined.
I don’t see how. The more restrictive your message, the less appealing and widespread it is.
The positive appeals may stay the same but be joined by something to the effect that if they think they can’t come to terms with values X and Y, EA may not be for them.
What a great way to signal-boost messages which harm our movement. Time for the outside view: do you see any organization in the whole world which does this? Why?
Are you really advocating messages like “EA is great, but if you don’t agree with universally following expected value calculations then it may not be for you”? If we did this with any of the things described here, we’d be intellectually dishonest—since EA does not assume absurd expected value calculations, or invertebrate sentience, or moral realism.
It’s one thing to try to help people out by being honest with them… it’s quite another to be dishonest in a paternalistic bid to keep them from “wasting time” by contributing to our movement.
but saying it will communicate that they can ignore EA without first finding fault with it or attacking it.
That is what the vast majority of people who read about EA already do.
It’s quite possible that I’m overly sensitive to being attacked (by outside critics),
Not only that, but you’re sensitive to the extent that you’re advocating caving in to their ideas and giving up the ideological space they want.
This is why we like rule consequentialism and heuristics instead of doing act-consequentialist calculations all the time. A movement that gets emotionally affected by its critics and shaken by people leaving will fall apart. A movement that makes itself subservient to the people it markets to will stagnate. And a movement whose response to criticism is to retreat to narrower and narrower ideological space will become irrelevant. But a movement that practices strength and assures its value on multiple fronts will succeed.
You get way too riled up over this. I started out being like “Uh, cloudy outside. Should we all pack umbrellas?” I’m not interested in an adversarial debate over the merits of packing umbrellas, one where there is winning and losing and all that nonsense. I’m not backing down; I was never interested in that format to begin with. It would incentivize me to exaggerate my confidence in the merits of packing umbrellas, which has been low all along; incentivize me not to be transparent about my epistemic status, as it were, my suspected biases and such; and so would incentivize an uncooperative setup for the discussion. The same probably applies to you.
I’m updating down from 70% for packing umbrellas to 50% for packing umbrellas. So I guess I won’t pack one unless it happens to be in the bag already. But I’m worried I’m over-updating because of everything I don’t know about why you never assumed what ended up as “my position” in this thread.
As you pointed out yourself, people around here systematically spend too much time on the negative-sum activity (http://lesswrong.com/lw/3h/why_our_kind_cant_cooperate/) of speculating on their personal theories for what’s wrong with EA, usually from a position of lacking formal knowledge or seasoned experience with social movements. So when some speculation of the sort is presented, I say exactly what is flawed about the ideas and methodology, and will continue to do so until epistemic standards improve. People should not take every opportunity to question whether we should all pack umbrellas; they should go about their ordinary business until they find a sufficiently compelling reason for everyone to pack umbrellas, and then state their case.
And, if my language seems too “adversarial”… honestly, I expect people to deal with it. I don’t communicate in any way which is out of bounds for ordinary Internet or academic discourse. So, I’m not “riled up”, I feel entirely normal. And insisting upon a pathological level of faux civility is itself a kind of bias which inhibits subtle ingredients of communication.
We’ve been communicating so badly that I would’ve thought you’d be one to reject an article like the one you linked. Establishing the sort of movement that Eliezer is talking about was the central motivation for making my suggestion in the first place.
If you think you can use a cooperative type of discourse in a private conversation, where there is no audience that you need to address at the same time, then I’d like to remember that for the next time I think we can learn something from each other on some topic.