I’m sad to hear you think users of the Forum should censor our conversations in a fashion similar to mainstream political parties—groups not especially known for their free thinking or original ideas. Personally I think most of the value from EA comes from (and will continue to come from) its novel intellectual frameworks for working on important problems, and not its political presence. (For example, OpenPhil says its main advantage over other actors in the space is its strategic cause selection, not its superior political sway.)
In that vein I think it’s incredibly valuable for a small intellectual community to understand what views it is silencing or punishing, and I found the above very valuable. I think there are issues of adversarial bias with it being fully public (e.g. people writing inaccurate or false-flag entries out of spite), and it could be better in future to do a version with Forum users with >100 karma.
Hi Ben,
I see little upside in knowing almost all of what is said here, but see lots of downside.
(1) For some (most?) of these opinions, there isn’t any social pressure not to air them. Indeed, as several people have already noted, some of these topics are already the subject of extensive public debate by people who like EA (negative utilitarianism is plausible, utilitarianism is false, human enhancement is good, abortion is bad, remote working might lead to burnout, scepticism about polyamory, mental health is important, etc.). No value is added in airing these things anonymously.
(2) Some seem to be discussed less often, but it is not clear why. E.g. if people want to have a go at CFAR publicly, I don’t really see what is stopping them, as long as their arguments are sensible. It’s not as though criticising EA orgs is forbidden. I’ve criticised ACE publicly and, as far as I know, this hasn’t negatively affected me. People have pretty brutally criticised the Long-Term Future Fund’s formation and grants, etc.
(3) A small minority of these might reveal truths about flaws in the movement that there is social pressure not to air. (This is where the positive value comes from.)
(4) For the most important subset of beyond-the-pale views, there is a clear risk of people not wholly bought into EA seeing this and finding it extremely off-putting. This is a publicly published document which could be found by the media or by major philanthropists when they are googling what effective altruism is. It could be shared on Facebook by someone saying “look at all the unpleasant things that effective altruists think”. In general, this post allows people to transfer reputational damage they might personally bear onto the movement as a whole.
Unfortunately, I can speak from first-hand experience on the harm that this post has done. It has been shared within the organisation I work for, and I think it could do very serious damage to the reputation of EA within my org. I suspect that this alone makes the impact of this poll clearly net negative. I hope the person who set up this post sees that and reconsiders setting up a similar poll in the future.
Hey John,
First: Many positions in the public discourse are still strongly silenced. To borrow an idea from Scott Alexander, how silenced a view is should be measured not by how many people talk publicly about it, but by the ratio of people who talk publicly about it to people who believe it. If a lot of people say in a form like this that they believe something but are afraid to talk about it, I think that’s a straightforward sign that they do feel silenced. I think you should indeed update towards believing that, when someone makes an argument for negative utilitarianism, or human enhancement, or abortion, or mental health (to borrow some of your examples), several people are feeling grateful that the person is speaking up and are watching with worry to see whether the person gets attacked, dismissed, or laughed at. I’m pretty sure I have personally seen people lose points socially for almost every single example you listed, to varying degrees.
Second: Even for social and political movements, it’s crucial to know what people actually believe but don’t want to say publicly. The conservative right in the US over the last few decades would probably have liked to know that many people felt silenced about how much they liked gay marriage, given the very sudden swing in public opinion on that topic; they could then have chosen not to build major political infrastructure around the belief that their constituents would stand by that policy position. More recently, I think the progressive left in many countries in Europe, Australia and the US would appreciate knowing when people are secretly more supportive of right-wing policies, as there has been (IIRC) a series of elections and votes where the polls predicted a strong left-wing victory and in reality there was a slight right-wing victory.
Third: I think the public evidence about the character of the people working on important EA projects is very strong and not easily overcome. You explain that folks at your org saw this and felt worried that EA contains lots of bad people, or people who believe unsavoury things, or something like that. My sense is that there is a lot of strong, public evidence about the quality of the people working on EA problems, about the insights that many public figures in the community have, and about the integrity of many of its individuals and organisations.
You can see how Holden Karnofsky was brutally honest yet rigorous in his analyses of charities in the global health space.
You can see how Toby Ord and many others have committed to giving a substantial portion of their lifetime resources to altruistic causes instead of personal ones.
You can see how Eliezer Yudkowsky and Nick Bostrom spent several decades of their lives attempting to lay out a coherent philosophy and argument that allowed people to identify a key under-explored problem for humanity.
You can read the writings of Scott Alexander and see how carefully he thinks about ethics, morality and community.
You can listen to Julia Galef’s podcast and read her public writings, and see how carefully she thinks about complex and controversial topics and how much charity she extends to people on both sides of a debate.
You can read Kelsey Piper’s extensive writing at The Unit of Caring and see how much she cares about both people and principles, and how she will spend a great deal of her time helping people figure out their personal and ethical problems.
I could keep listing examples, but I hope the above gets my point across.
I am interested in being part of a network of people who build trust through costly (yet worthwhile) acts of ethics, integrity, and work on important problems, and I do not think the public form above is a risk to the connections of that network.
Fourth: It’s true that many social movements have been able to muster a lot of people and political power behind solving important problems, and that this required them to care a lot about PR and hold very tight constraints on what they can be publicly associated with (and thus what they’re allowed to say publicly). I think, however, that these social movements are not capable of making scientific and conceptual progress on difficult high-level questions like cause prioritisation and the discovery of crucial considerations.
They’re very inflexible. By this I don’t merely mean that they’re hard to control and can take on negative affect (e.g. new atheism is often considered aggressive or unkind), but that they often cannot course-correct or change their minds (e.g. environmentalism on nuclear energy, I think), in a way that entirely prohibits intellectual progress. Like, I don’t think you can get ‘environmentalism, but for cause prioritisation’ or ‘feminism, but for crucial considerations’. I think the thing we actually want here is something much closer to ‘science’, or ‘an intellectual movement’. And I think your points are much less applicable to a healthy scientific community.
I hope this helps to communicate where I’m coming from.
Hi Ben,
Thanks for this, this is useful (upvoted)
1. I think we disagree on the empirical facts here. EA seems to me unusually open to considering rational arguments for unfashionable positions. People in my experience lose points for bad arguments, not for weird conclusions. I’d be very perplexed if someone were not willing to discuss whether or not utilitarianism is false (or whether remote working is bad, etc.) in front of EAs, and would think someone was overcome by irrational fear if they declined to do so. Michael Plant believes one of the allegedly taboo opinions here (mental health should be a priority) and is currently on a speaking tour of EA events across the Far East.
2. This is a good point and updates me towards the usefulness of the survey, but I wonder whether there is a better way to achieve this that doesn’t carry such clear reputational risks for EA.
3. The issue is not whether my colleagues have sufficient publicly accessible reason to believe that EA is full of good people acting in good faith (which they do), but whether this survey weighs heavily in the evidence that they will actually consider. I.e., this might lead them not to consider the rest of the evidence that EA is mostly full of good people working in good faith. I think there is a serious risk of that.
4. As mentioned elsewhere in the thread, I’m not saying that EA should embrace political-level self-restraint. What I am saying is that there are sometimes reasons to refrain from holding forth on all of your opinions in public when you represent a community of people trying to achieve something important. The respondents to this poll implicitly agree with that, given that they want to remain anonymous. For some of these statements, the reputational risk of airing them anonymously does not transfer from the individual to the EA movement as a whole. For other statements, it does.
Do you think anyone in the community should ever self-censor for the sake of the reputation of the movement? Do you think scientists should ever self-censor their views?
“People in my experience lose points for bad arguments, not for weird conclusions.”
I just want to note that in my experience this only happens if you’re challenging something that’s mainstream in EA. If I tell an EA “I’m a utilitarian,” that’s fine. If I say, “I’m not a utilitarian,” I need to provide arguments for why. That’s scary, because I’ve never studied philosophy, and I’m often being stared down by a room full of people with philosophy degrees.
So basically, some of us are not smart enough to make good arguments for everything we believe—and we’ll only lose social points for that if we mention that we have weird beliefs.
I might have more to say later. On (1), I want to note that, to me, my position seems like the conservative one. If certain views are being politically silenced, my sense is that it’s good for people to have the opportunity to say so. In the alternative, people are only allowed to do this if you already believe that they’re subject to unfair political pressure. Looking over the list and thinking “Hm, about 100 people say they feel silenced or that their opinions feel taboo, but I think they’re wrong about being silenced (or else I think that their opinions should be taboo!), so they shouldn’t have this outlet to say that” seems like a recipe for correlated failure. Like, I don’t fully trust my own personal sense of which of the listed positions actually is and isn’t taboo in this way, and I would feel quite bad dictating who is allowed to anonymously say they feel politically pressured based on who I believe is being politically pressured.
There are two issues here. The less important one is (1): are people’s beliefs that many of these opinions are taboo rational? I think not, and I have discussed the reasons why above.
The more important one is (2): this poll is a blunt instrument that encourages people to enter offensive opinions that threaten the reputation of the movement. If there were a way to do this with those opinions laundered out, then I wouldn’t have a problem.
This has been done in a very careless way, without due thought to the very obvious risks.
I interpret [1] you here as saying “if you press the button of ‘make people search for all their offensive and socially disapproved beliefs, and collect the responses in a single place’ you will inevitably have a bad time. There are complex reasons lots of beliefs have evolved to be socially punished, and tearing down those fences might be really terrible. Even worse, there are externalities such that one person saying something crazy is going to negatively affect *everyone* in the community, and one must be very careful when setting up systems that create such externalities. Importantly though, these costs aren’t intrinsically tied up with the benefits of this poll—you *can* have good ways of dispelling bubbles and encouraging important whistle-blowing, without opening a Pandora’s box of reputational hazards.”
1) Curious if this seems right to you?
2) More importantly, I’m curious about what concrete versions of this you would be fine with, or support?
Someone suggested:
a version with Forum users with >100 karma
Would that address your concerns? Is there anything else that would?
[1] This is to a large extent “the most plausible version of something similar to what you’re saying, that I understand from my own position”, rather than “something I’m very confident you actually believe”.
Thanks John, really useful to hear specifically how this has been used and why that was problematic. I certainly wouldn’t have predicted this would be the kind of thing that would be of interest to your org such that it got shared around and commented on, and it makes me aware of a risk I wouldn’t have considered.
Just as a sign of social support: I am grateful to whoever organized this poll, and would be deeply saddened to be part of a community where we punish people who organize polls like this. Obviously it’s fine for Halstead to have his perspective, but it seemed valuable to provide a counterpoint to communicate that I would be happy to defend anyone who organizes polls like this, and put a significant fraction of my social capital behind our collective ability to do things like this.
I respect your view, Oli, but I don’t think the person organising it put sufficient thought into the downsides of running a poll such as this. They didn’t discuss any of the obvious risks in the ‘why this is a valuable exercise’ section.
I do think that I am quite hesitant to promote a norm under which you are no longer allowed to ask people questions about their honest opinions in public without having written a whole essay about the possible reasons why that might be bad. I don’t think this is the type of question that one should have to justify; it’s the type of question that our community should make as easy as possible.
There exist risks, of course, but I think those risks should be analyzed by core members of the community and then communicated via norms and social expectations. I don’t think it’s reasonable to expect every member of the community to fully justify actions like this.
Hi, you start with a straw man here—I’m not requesting that they write a whole essay, I’m just requesting that they put some thought into the potential downsides, rather than zero thought (as occurred here). As I understand your view, you think the person has no obligation to put any thought into whether publishing this post is a good idea or not. I have to say I find this an implausible and strange position.
It is unclear whether the author put thought into the downsides; all we know is that the author did not emphasize potential downsides in the write-up.
I’m not saying the person doesn’t have to put any thought into whether publishing a post like this is a good idea, only that they don’t have to put significant effort into publicly making the case that the benefits outweigh the costs. The burden of making that case is much larger than the burden of just thinking about it, and would be large enough to deter most people from simply asking honest questions of others in public.
They have a section on ‘why do this?’ and don’t discuss any of the obvious risks, which suggests they haven’t thought properly about the issue. I think a good norm to propagate would be: people put a lot of thought into whether they should publish posts that could potentially damage the movement. Do you agree?
Suppose I am going to run a poll on ‘what’s the most offensive thing you believe—anonymous public poll for effective altruists’. (1) do you think I should have to publicly explain why I am doing this? (2) do you think I should run this poll and publish the results?
I do indeed generally think that whether their writings will “damage the movement” should not be particularly high in their list of considerations to think about when asking other people questions, or writing up their thoughts. I think being overly concerned with reputation has a long history of squashing intellectual generativity, and I very explicitly would not want people to feel like they have to think about how every sentence of theirs might reflect on the movement from the perspective of an uncharitable observer.
I would prefer people first think about all of the following types of considerations and, if the stakes seem high enough, maybe also add reputational concerns, though the vast majority of the time the author in question shouldn’t get that far down the list (I also note that you are advocating for a policy that is in direct conflict with at least one item on this list, which I consider to be much more important than short-term reputational concerns):
Are you personally actually interested in the point you are making or the question you are asking?
Does the answer to the question you are asking, or answering, likely matter a lot in the big picture?
Is the thing that you are saying true?
Are you being personally honest about your behavior and actions?
Are you making it easier for other people to model you and to accurately predict your behavior in the future?
Does your question or answer address a felt need that you yourself, or someone you closely interacted with, actually has?
Are you propagating any actually dangerous technological insights, or other information hazards?
I would strongly object to the norm “before you post to the forum, think very hard about whether this will damage the reputation of the movement”, which I am quite confident would ensure that very little of interest would be said on this forum, since almost all interesting ideas that have come out of EA are quite controversial to many people, and also tended to have started out in their least polished and most repugnant form.
I also remember the closing talk of EAG 2017, with its theme of “stay weird”, which explicitly advocated being open and welcoming to people who say things that might sound strange or unpopular. I think that reflected an understanding that it is essential for EA to be very welcoming of ideas that sound off-putting and heretical at first, in particular if they are otherwise likely to be punished or disincentivized by most of society.
From a blogpost by Scott Alexander:
But I got a chance to talk to [Will MacAskill] – just for a few minutes, before he had to run off and achieve something – and I was shocked at how much he knew about all the weirdest aspects of the community, and how protective he felt of them. And in his closing speech, he urged the attendees to “keep EA weird”, giving examples of times when seemingly bizarre ideas won out and became accepted by the mainstream.
I think a key example in this space would be a lot of the work by Brian Tomasik, whose writing I think is highly repugnant to large fractions of society, but has strongly influenced me in my thinking, and is what I consider to be one of the most valuable bodies of work to come out of the community (and to embody its core spirit, of taking ethical ideas seriously and seeing where they lead you), even though I strongly disagree with him on almost every one of his conclusions.
So no, I don’t think this is a good norm, and I would strongly advise against elevating that consideration to the short list of things that people actually have the mental energy to consider when posting here. Maybe when you are writing an article about EA in a major newspaper, but definitely not for this forum, the most private space for public discourse that we have, and the primary space in which we can evaluate and engage with ideas in their early stages.
What do you make of my ‘offensive beliefs’ poll idea and questions?
I think an anonymous poll of that type is probably fine, though just asking for offensive ideas is probably less likely to get valuable responses than the OP, so I feel less strongly about people being able to make that type of poll happen.
I do however still think that knowing the answers to that poll would be reasonably useful, and I still expect this to help me and others build better models of what others believe, and also think there is a good chance that a poll like this can break an equilibrium in which a silent majority is unwilling to speak up, which I think happens quite a bit and is usually bad.
So yeah, I think it would be fine to organize that poll. It’s a bit of a weird filter, so I would have some preference for the person adding an explicit disclaimer that this is an anonymous internet poll and is primarily a tool for hypothesis generation, not a representative survey, but with that it seems reasonably positive to me. I don’t feel like that survey is as important as the type of survey that the OP organized, but I wouldn’t want to punish a person for organizing it, or filling it out.
Ok, cheers. I disagree with that, but I feel we have reached the end of productive argument.
*nods* seems good.
Would you mind sharing, at least in general terms, which organisation you work for? I confess that if I knew I have forgotten.
(This is publicly available information, so I hope it’s fine if I share this. I noticed some people had downvoted this comment earlier on, so I am a bit hesitant, but after thinking more about it, I can’t think of any particular reason why this question should go unanswered.)
Halstead works at Founders Pledge.
I think there are issues of adversarial bias with it being fully public (e.g. people writing inaccurate or false-flag entries out of spite) and it could be better in future to do a version with Forum users with >100 karma.
Indeed. Anonymous open forms are maximally vulnerable to this: not only can detractors write stuff (for example, this poll did show up on subreddits that are archly critical of EA), but you can signal-boost your own renegade opinion if you’re willing to make the trivial effort to repeatedly submit it (e.g. “I think Alice sucks and people should stop paying attention to her”, “I completely agree with the comment above—Alice is just really toxic to this community”, “Absolutely agreed re. Alice, but I feel I can’t say anything publicly because she might retaliate against me”, etc.)
On detractors writing: Given some of the comments on the survey, I would be surprised if quite a few answers hadn’t come from people who have no connection to the EA community save as critics. For example:
EA is a waste of money and time. Another example of tech minded people trying to reinvent the wheel.
This doesn’t seem like someone who actually spends time on the EA Forum (or, if they do, I wish they’d do something they found more enjoyable).
This set-up does seem like it could be exploited in an adversarial manner… but my impression from reading the poll results is that they provide weak evidence against this actually being a failure mode, since it doesn’t seem to have happened.
I didn’t notice any attempts to frame a particular person through multiple submissions. The cases where there was repeated criticism of some orgs seemed plausibly to come from different people, since they often offered different reasons for the criticism or seemed stylistically different.
Moreover, if asked beforehand about the outcomes of something that can be read as “an open invitation to anonymous trolling that will get read by a huge number of people in the movement”, I would have expected to see things way, way worse than what I actually saw. In fact, I’ve seen many public and identifiable comment sections on Facebook, YouTube or Twitter that were much worse than this anonymous poll.
(I claim these things weakly based on having read through all the responses in the sheet. I didn’t analyse them in-depth with an eye to finding traces of adversarial action, and don’t expect my approach here would have caught more sophisticated attempts.)
I don’t object to this activity. I found it really interesting to read what others think and can’t say. Still, I think there are times when it’s in a community’s best interest to self-censor, or at least not to post their least acceptable views online.
This post actively encourages people to post their least acceptable views online, so seems bad by this argument.
I agree with you; I just want to point clearly toward the end of the spectrum that is “a healthy intellectual community” rather than “a unified voting bloc that doesn’t allow its members to step out of line”.
The political analogy was an example; it was not meant to say that standard political constraints should apply to EA. The thought applies to any social movement, e.g. for people involved in environmentalism, radical exchange or libertarianism. If I were a libertarian and someone came to me saying “why don’t we run a poll of libertarians on opinions they are scared to air publicly and then publish those opinions online for the world to see”, I think it would be pretty obvious that this would be an extremely bad idea.