I wanted to pose a question (that I found plausible), and now you’ve understood what I was asking, so my work here is pretty much done.
But I can also, for a moment longer, stay in my role and argue for the other side, because I think there are a few more good arguments to be made.
The forum which takes option A looks respectable and strong. They cut to the object level instead of dancing around on the meta level. They look like they know what they are talking about, and someone who holds the same opinions as the OP would—if reading the thread—tend to be attracted to the forum. Option B? I’m not sure whether it looks snobbish or just pathetic.
It’s true that I hadn’t considered the “online charisma” of the situation, but I don’t feel like Option B is what I’d like to argue for. Neither is Option A.
Option A looks really great until we consider the cost side of things. Several people with a comprehensive knowledge of economics, history, and politics investing hours of their time (per person leaving) on explaining things that must seem like complete basics to these experts? They could be using that time to push their own boundaries of knowledge or write a textbook or plan political activism or conduct prioritization research. And they will. Few people will have the patience to explain the same basics more than, say, five or ten times.
They’ll write FAQs, but then find that people are not satisfied when they pour out their most heartfelt irritation with the group only to be linked an FAQ entry that only fits their case so-so.
It’s really just the basic Eternal September Effect that I’m describing, part of what Durkheim described as anomie.
Option B doesn’t have much to do with anything. I’m hoping to lower the churn rate by helping people predict from the outset whether they’ll want to stick with EA long term. Whatever tone we’ll favor for forum discussions is orthogonal to that.
But the kind of strategy I am referring to also increases the rate at which new people enter the movement, so there will be no such lethargy.
That’s also why a movement with a high churn rate like that would be doomed to having discussions only on a very shallow and, for many, tedious level.
When you speculate too much on complicated movement dynamics, it’s easy to overlook things like this via motivated reasoning.
Also what Fluttershy said. If you imagine me as some sort of ideologue with fixed or even just strong opinions, then I can assure you that neither is the case. My automatic reaction to your objections is, “Oh, I must’ve been wrong!” then “Well, good thing I didn’t state my opinion strongly. That’d be embarrassing,” and only after some deliberation do I remember that I had already considered many of these objections and gradually update back in the direction of my previous hypothesis. My opinions are quite unusually fluid.
Like I pointed out elsewhere, other social movements don’t worry about this sort of thing.
Other social movements end up like feminism, with oppositions and toxoplasma. Successful social movements don’t happen by default, with nobody worrying about these sorts of dynamics, or at least I don’t think they do. That doesn’t mean that my stab at a partial solution goes in the correct direction, but it currently seems to me like an improvement.
Yes. And that’s exactly why this constant second-guessing and language policing—“oh, we have to be more nice,” “we have a lying problem,” “we have to respect everybody’s intellectual autonomy and give huge disclaimers about our movement,” etc—must be prevented from being pursued to a pathological extent.
Let’s exclude the last example or it’ll get recursive. How would you realize that? I was a lurker on a very authoritarian forum for a while. They had some rules, and the core users trusted the authorities to interpret them justly. Someone got banned every other week or so, but they were also somewhat secretive, never advertised the forum to more than one specific person at a time and only when they knew the person well enough to tell that they’d be a good fit for the forum. The core users all loved the forum as a place where they could safely express themselves.
I would’ve probably done great there, but the authoritarian thing scared me on a System 1 level. The latter (about careful advertisement) is roughly what I’m proposing here. (And if it turns out that we need more authoritarianism, then I’ll accept that too.)
The lying problem thing is a case in point. She didn’t identify with the movement, just picked out some quotes, invented a story around them, and later took most of it back. Why does she even write something about a community she doesn’t feel part of? If most of her friends had been into badminton and she didn’t like it, she wouldn’t have caused a stir in the badminton community accusing it of having a lying or cheating problem or something. She would’ve tried it for a few hours and then largely ignored it, not needing to make up any excuse for disliking it.
It’s in the nature of moral intuitions that we think everyone should share ours, and maybe there’ll come a time when we have the power to change values in all of society and have the knowledge to know in what direction to change them and by how much, but we’re only starting in that direction now. We can still easily wink out again if we don’t play nice with other moral systems or don’t try to be ignored by them.
Moral trades are Pareto improvements, not compromises.
What’s the formal definition of “compromise”? My intuitive one included Pareto improvements.
Nobody who has left EA has done so with a loud public bang.
I counted this post as a loud, public bang.
People losing interest in EA is bad, but that’s kind of irrelevant—the issue here is whether it’s better for someone to join then leave, or never come at all. And people joining-then-leaving is generally better for the movement than people never coming at all.
I don’t think so, or at least when put into less extreme terms. I’d love to get input on this from an expert in social movements or organizational culture at companies.
Consultancy firms are known for their high churn rates, but that seems like an exception to me. Otherwise high onboarding costs (which we definitely have in EA), a gradual lowering of standards, minimization of communication overhead, and surely many other factors drive a lot of companies toward hiring with high precision and low recall rather than the other way around, and then investing heavily in retaining the good employees they have. (Someone at Google, for example, said “The number one thing was to have an incredibly high bar for talent and never compromise.” They don’t want to get lots of people in, get them up to speed, hope they’ll contribute something, and lose most of them again after a year. They would rather grow more slowly than get diluted like that.)
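To make the precision/recall framing concrete, here is a minimal sketch in Python. Every number in it is made up purely for illustration; none of this is real data about Google, EA, or anyone else.

```python
# Toy illustration of the "high precision, low recall" hiring strategy.
# All numbers are hypothetical and only meant to show the tradeoff.

def precision(fits_admitted, non_fits_admitted):
    # Of everyone we let in, how many turn out to be good long-term fits?
    return fits_admitted / (fits_admitted + non_fits_admitted)

def recall(fits_admitted, fits_rejected):
    # Of all the good long-term fits out there, how many did we let in?
    return fits_admitted / (fits_admitted + fits_rejected)

# A selective filter: few admissions, most of them good fits.
print(precision(8, 2), recall(8, 32))    # 0.8 precision, 0.2 recall

# A permissive filter: many admissions, many of whom churn out again.
print(precision(30, 70), recall(30, 10)) # 0.3 precision, 0.75 recall
```

The selective filter keeps churn (and onboarding waste) low at the cost of turning away many people who would have been fine; the permissive filter does the opposite.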
We probably can’t interview and reject people who are interested in EA, so the closest thing we can do is to help them decide as well as possible whether it’s really what they want to become part of long-term.
I don’t think this sort of thing, from Google or from EAs, would come off as pathetic.
But again, this is the sort of thing where I would love to ask an expert like Laszlo Bock for advice rather than trying to piece together some consistent narrative from a couple of books and interviews. I’m really a big fan of just asking experts.
Option A looks really great until we consider the cost side of things. Several people with a comprehensive knowledge of economics, history, and politics investing hours of their time (per person leaving) on explaining things that must seem like complete basics to these experts? They could be using that time to push their own boundaries of knowledge or write a textbook or plan political activism or conduct prioritization research. And they will. Few people will have the patience to explain the same basics more than, say, five or ten times.
What I wrote in response to the OP took me maybe half an hour. If you want to save time then you can easily make quicker, smaller points, especially if you’re a subject matter expert. The issue at stake is more about the type of attitude and response than the length. What you’re worried about here applies equally well against all methods of online discourse, unless you want people to generally ignore posts.
They’ll write FAQs, but then find that people are not satisfied when they pour out their most heartfelt irritation with the group only to be linked an FAQ entry that only fits their case so-so.
The purpose is not to satisfy the person writing the OP. That person has already made up their mind, as we’ve observed in this thread. The purpose is to make observers and forum members realize that we know what we are talking about.
Option B doesn’t have much to do with anything. I’m hoping to lower the churn rate by helping people predict from the outset whether they’ll want to stick with EA long term. Whatever tone we’ll favor for forum discussions is orthogonal to that.
Okay, so what kinds of things are you thinking of? I’m kind of lost here. The Wikipedia article on EA, the books by MacAskill and Singer, the EA Handbook, all seem to be a pretty good overview of what we do and stand for. You said that the one-sentence descriptions of EA aren’t good enough, but they can’t possibly be, and no one joins a social movement based on its one-sentence description.
That’s also why a movement with a high churn rate like that would be doomed to having discussions only on a very shallow and, for many, tedious level.
The addition of new members does not prevent old members from having high-quality discussions. It only increases the number of new-person discussions, which seems perfectly good to me.
If you imagine me as some sort of ideologue with fixed or even just strong opinions, then I can assure you that neither is the case.
I’m not. But the methodology you’re using here is suspect and prone to bias.
Other social movements end up like feminism, with oppositions and toxoplasma.
Or they end up successful and achieve major progress.
If you want to prevent oppositions and toxoplasma, narrowing who is invited in accomplishes very little. The smaller your ideological circle, the finer the factions become.
Successful social movements don’t happen by default, with nobody worrying about these sorts of dynamics, or at least I don’t think they do.
No social movement has done things like this, i.e. trying to save time and effort for outsiders who are interested in joining by pushing off their interest, at the expense of its own short term goals. And no other social movement has had this level of obsessive theorizing about movement dynamics.
How would you realize that?
By calling out such behavior when I see it.
The latter (about careful advertisement) is roughly what I’m proposing here. (And if it turns out that we need more authoritarianism, then I’ll accept that too.)
That sounds like a great way to ensure intellectual homogeneity as well as slow growth. The whole side of this which I ignored in my above post is that it’s completely wrong to think that restricting your outward messages will not result in false negatives among potential additions to the movement. So how many good donors and leaders would you want to ignore for the ability to keep one insufficiently likeminded person from joining? Since most EAs don’t leave, at least not in any bad way, it’s going to be >1.
Why does she even write something about a community she doesn’t feel part of?
She’s been with the rationalist community since early days as a member of MetaMed, so maybe that has something to do with it.
Movements really get criticized by people who are at the opposite end of the spectrum and completely uninvolved. Every political faction gets its worst criticism from ideological opponents. Rationalists and EAs get most of their criticism from ideological opponents. I just don’t see much of this hypothesized twilight-zone criticism that comes from nearly-aligned people, and when it does come it tends to be interesting and worth listening to. You only think of it as unduly significant because you are more exposed to it; you have no idea of the extent and audience of much more negative pieces written by people outside the EA social circle.
It’s in the nature of moral intuitions that we think everyone should share ours, and maybe there’ll come a time when we have the power to change values in all of society and have the knowledge to know in what direction to change them and by how much, but we’re only starting in that direction now. We can still easily wink out again if we don’t play nice with other moral systems or don’t try to be ignored by them.
I am not talking about not playing nice with other value systems. This is about whether to make conscious attempts to homogenize our community with a single value system and to prevent people with other value systems from making the supposed mistake of exploring our community. It’s not cooperation, it’s sacrificial, and it’s not about moral systems, it’s about people and their apparently precious time.
What’s the formal definition of “compromise”? My intuitive one included Pareto improvements.
Stipulate any definition; the point will be the same: you should not be worried about EAs making too many moral trades, because they’re going to be Pareto improvements.
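For concreteness, here is the standard notion of a Pareto improvement I have in mind, written as a rough sketch in generic textbook notation (nothing EA-specific is assumed):

```latex
% A move from outcome x to outcome y is a Pareto improvement iff
% no party is made worse off and at least one party is strictly better off:
\[
  y \succeq_i x \quad \text{for every party } i,
  \qquad \text{and} \qquad
  y \succ_j x \quad \text{for at least one party } j.
\]
```

On that reading, a compromise can leave one side worse off relative to the status quo, whereas a Pareto improvement by construction cannot.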
I counted this post as a loud, public bang.
Then you should be much less worried about loud public bangs and much more worried about getting people interested in effective altruism.
I’d love to get input on this from an expert in social movements or organizational culture at companies.
Companies experience enormous costs in training new talent and opportunity costs if their talent needs to be replaced. Our onboarding costs are very low in comparison. Companies also have a limited amount of talent they can hire, while a social movement can grow very quickly, so it makes sense for companies to be selective in ways that social movements shouldn’t be. If a company could hire people for free then it would be much less selective. Finally, the example you selected (Google) is one of the more unusually selective companies, compared to other ones.
The Wikipedia article on EA, the books by MacAskill and Singer, the EA Handbook, all seem to be a pretty good overview of what we do and stand for.
Lila has probably read those. I think Singer’s book contained something to the effect that the book is probably not meant for anyone who wouldn’t pull the drowning child out of the pond. MacAskill’s book is more of a how-to; such a meta question would feel out of place there, but I’m not sure; it’s been a while since I read it.
Especially texts that appeal to moral obligation (which I share) signal that the reader needs to find an objective flaw in them to be able to reject them. That, I’m afraid, leads to people attacking EA for all sorts of made-up or not-actually-evil reasons. That can result in toxoplasma and opposition. If they could just feel like they can ignore us without attacking us first, we could avoid that.
If you want to prevent oppositions and toxoplasma, narrowing who is invited in accomplishes very little. The smaller your ideological circle, the finer the factions become.
A lot of your objections take the form of likely-sounding counternarratives to my narratives. They don’t make me feel like my narratives are less likely than yours, but I increasingly feel like this discussion is not going to go anywhere unless someone jumps in with solid knowledge of history or organizational culture, historical precedents and empirical studies to cite, etc.
So how many good donors and leaders would you want to ignore for the ability to keep one insufficiently likeminded person from joining? Since most EAs don’t leave, at least not in any bad way, it’s going to be >1.
That’s a good way to approach the question! We shouldn’t only count those that join the movement for a while and then part ways with it again but also those that hear about it and ignore it, publish a nonconstructive critique of it, tell friends why EA is bad, etc. With small rhetorical tweaks of the type that I’m proposing, we can probably increase the number of those that ignore it solely at the expense of the numbers who would’ve smeared it and not at the expense of the numbers who would’ve joined. Once we exhaust our options for such tweaks, the problem becomes as hairy as you put it.
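To make that bookkeeping explicit, here is a toy tally in Python. Every rate and value below is invented purely for illustration, and it bakes in exactly the assumption under dispute, namely that the tweak only moves would-be critics into the “ignore” column without costing any joiners:

```python
# Toy tally of outcomes per 1000 people reached by an outreach message.
# All numbers are hypothetical; they only show how I'd weigh the categories.

baseline = {"join_and_stay": 20, "join_then_leave": 10, "ignore": 900, "smear": 70}
tweaked  = {"join_and_stay": 20, "join_then_leave": 6,  "ignore": 950, "smear": 24}

# Hypothetical net value per person in each category (signs matter more than sizes).
value = {"join_and_stay": 10, "join_then_leave": -1, "ignore": 0, "smear": -2}

def net(outcomes):
    return sum(count * value[category] for category, count in outcomes.items())

print(net(baseline), net(tweaked))  # 50 vs. 146 under these made-up numbers
```

If the tweak instead scared off would-be joiners, the “join_and_stay” row would shrink and the comparison could easily flip, which is exactly the crux we disagree on.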
I haven’t really dared to take a stab at how such an improvement should be worded. I’d rather base this on a bit of survey data among people who feel that EA values are immoral from their perspective. The positive appeals may stay the same but be joined by something to the effect that if they think they can’t come to terms with values X and Y, EA may not be for them. They’ll probably already have known that (and the differences may be too subtle to have helped Lila), but saying it will communicate that they can ignore EA without first finding fault with it or attacking it.
And no other social movement has had this level of obsessive theorizing about movement dynamics.
Oh dear, yeah! We should both be writing our little five-hour research summaries on possible cause areas rather than starting yet another marketing discussion. I know someone at CEA who’d get cross with me if he saw me doing this again. xD
It’s quite possible that I’m overly sensitive to being attacked (by outside critics), and I should just ignore it and carry on doing my EA things, but I don’t think I overestimate this threat to the point where I’d consider further investment of our time in this discussion proportionate.
Sure. But Lila complained about small things that are far from universal to effective altruism. The vast majority of people who differ in their opinions on the points described in the OP do not leave EA. As I mentioned in my top level comment, Lila is simply confused about many of the foundational philosophical issues which she thinks pose an obstacle to her being in effective altruism. Some people will always fall through the cracks, and in this case one of them decided to write about it. Don’t over-update based on an example like this.
Note also that someone who engages with EA to the extent of reading one of these books will mostly ignore the short taglines accompanying marketing messages, which seem to be what you’re after. And people who engage with the community will mostly ignore both books and marketing messages when it comes to making an affective judgement.
Especially texts that appeal to moral obligation (which I share) signal that the reader needs to find an objective flaw in them to be able to reject them. That, I’m afraid, leads to people attacking EA for all sorts of made-up or not-actually-evil reasons. That can result in toxoplasma and opposition. If they could just feel like they can ignore us without attacking us first, we could avoid that.
And texts that don’t appeal to moral obligation make a weak argument that is simply ignored. That results in apathy and a frivolous approach.
A lot of your objections take the form of likely-sounding counternarratives to my narratives.
Yes, and it’s sufficient. You are proposing a policy which will necessarily hurt short-term movement growth. The argument depends on establishing a narrative to support its value.
We shouldn’t only count those that join the movement for a while and then part ways with it again but also those that hear about it and ignore it, publish a nonconstructive critique of it, tell friends why EA is bad, etc.
But on my side, we shouldn’t only count those who join the movement and stay; we should also count those who hear about it and are lightly positive about it, share some articles and books with their friends, publish a positive critique about it, start a conversation with their friends about EA, like it on social media, etc.
With small rhetorical tweaks of the type that I’m proposing, we can probably increase the number of those that ignore it solely at the expense of the numbers who would’ve smeared it and not at the expense of the numbers who would’ve joined.
I don’t see how. The more restrictive your message, the less appealing and widespread it is.
The positive appeals may stay the same but be joined by something to the effect that if they think they can’t come to terms with values X and Y, EA may not be for them.
What a great way to signal-boost messages which harm our movement. Time for the outside view: do you see any organization in the whole world which does this? Why?
Are you really advocating messages like “EA is great, but if you don’t agree with universally following expected value calculations then it may not be for you”? If we had done this with any of the things described here, we’d be intellectually dishonest—since EA does not assume absurd expected value calculations, or invertebrate sentience, or moral realism.
It’s one thing to try to help people out by being honest with them… it’s quite another to be dishonest in a paternalistic bid to keep them from “wasting time” by contributing to our movement.
but saying it will communicate that they can ignore EA without first finding fault with it or attacking it.
That is what the vast majority of people who read about EA already do.
It’s quite possible that I’m overly sensitive to being attacked (by outside critics),
Not only that, but you’re sensitive to the extent that you’re advocating caving in to their ideas and giving up the ideological space they want.
This is why we like rule consequentialism and heuristics instead of doing act-consequentialist calculations all the time. A movement that gets emotionally affected by its critics and shaken by people leaving will fall apart. A movement that makes itself subservient to the people it markets to will stagnate. And a movement whose response to criticism is to retreat to narrower and narrower ideological space will become irrelevant. But a movement that practices strength and assures its value on multiple fronts will succeed.
You get way too riled up over this. I started out being like “Uh, cloudy outside. Should we all pack umbrellas?” I’m not interested in an adversarial debate over the merits of packing umbrellas, one where there is winning and losing and all that nonsense. I’m not backing down; I was never interested in that format to begin with. It would incentivize me to exaggerate my confidence in the merits of packing umbrellas, which has been low all along; incentivize me not to be transparent about my epistemic status, as it were, my suspected biases and such; and so would incentivize an uncooperative setup for the discussion. The same probably applies to you.
I’m updating down from 70% for packing umbrellas to 50% for packing umbrellas. So I guess I won’t pack one unless it happens to be in the bag already. But I’m worried I’m over-updating because of everything I don’t know about why you never assumed what ended up as “my position” in this thread.
As you pointed out yourself, people around here systematically spend too much time on the negative-sum activity (http://lesswrong.com/lw/3h/why_our_kind_cant_cooperate/) of speculating on their personal theories for what’s wrong with EA, usually from a position of lacking formal knowledge or seasoned experience with social movements. So when some speculation of the sort is presented, I say exactly what is flawed about the ideas and methodology, and will continue to do so until epistemic standards improve. People should not take every opportunity to question whether we should all pack umbrellas; they should go about their ordinary business until they find a sufficiently compelling reason for everyone to pack umbrellas, and then state their case.
And, if my language seems too “adversarial”… honestly, I expect people to deal with it. I don’t communicate in any way which is out of bounds for ordinary Internet or academic discourse. So, I’m not “riled up”, I feel entirely normal. And insisting upon a pathological level of faux civility is itself a kind of bias which inhibits subtle ingredients of communication.
We’ve been communicating so badly that I would’ve thought you’d be one to reject an article like the one you linked. Establishing the sort of movement that Eliezer is talking about was the central motivation for making my suggestion in the first place.
If you think you can use a cooperative type of discourse in a private conversation where there is no audience that you need to address at the same time, then I’d like to keep that in mind for the next time I think we can learn something from each other on some topic.