I’ve given this a weak karma downvote and a disagree-vote, then hit the “insightful” button. It definitely made me think and learn.
I agree that this is a really tricky question, and some of these philosophical conversations (including this one) are important and should happen, but I don’t think this particular EA forum is the best place for them, for a few reasons.
1) I think there are better places to have these often awkward, fraught conversations. They are often better had in person, where you can connect, preface, soften and easily retract. I recently got into a mini online tiff, during which a wise onlooker noted:
“Online discussions can turn that way with a few misinterpretations creating a doom loop that wouldn’t happen with a handshake and a drink.”
Or alternatively, perhaps in a more academic/narrow forum where people have similar discussion norms and understandings. This forum has a particularly wide range of users, from nerds to philosophers to practitioners to managers to donors, so there’s a very wide range of norms and understandings.
2) There’s potential reputational damage for all the people doing great EA work across the spectrum here. These kinds of discussions could lead to more hit-pieces and reduced funding. It would be a pity if the AI apocalypse hit us because of funding cuts due to these discussions. (OK now I’m strawmanning a bit :D)
3) The forum might be an entry point into EA for some people. I don’t think it’s a good idea for these discussions to be the first thing someone looking into EA sees on the internet.
4) It might be a bit of a strawman to say our “discourse will forever be at the mercy of whoever is most hostile to you, or whoever is craziest.” I think people hostile to EA don’t like many things said here on the forum, but we aren’t forever at their mercy and we keep talking. I think there are just a few particular topics which give people more ammunition for public take-downs, and there is wisdom in sometimes avoiding loading balls into your opponents’ cannons.
5) I think if you (like Singer) write your own opinions in your own book, it’s a different situation—you are the one writing and you take full responsibility for your work—whereas on a public forum it feels like there is at least a smidgeon of shared accountability for what is said. Forms of this debate have been going on for some time about what is posted on Twitter / Facebook etc.
6) I agree with you that the quote from the Hamas charter is more dangerous—and I think we shouldn’t be publishing or discussing that on the forum either.
I have great respect for these free speech arguments, and think this is a super hard question where the “best” thing to do might well change a lot over time, but right now I don’t think allowing these discussions and arguments on this particular EA forum will lead to more good in the long run.
Ty for the reply; a jumble of responses below.
I think there are better places to have these often awkward, fraught conversations.
You are literally talking about the sort of conversations that created EA. If people don’t have these conversations on the forum (the single best way to create common knowledge in the EA community), then it will be much harder to course-correct places where fundamental ideas are mistaken. I think your comment proceeds from the implicit assumption that we’re broadly right about stuff, and mostly just need to keep our heads down and do the work. I personally think that a version of EA that doesn’t have the ability to course-correct in big ways would be net negative for the world. In general it is not possible to e.g. identify ongoing moral catastrophes when you’re optimizing your main venue of conversations for avoiding seeming weird.
I agree with you the quote from the Hamas charter is more dangerous—and think we shouldn’t be publishing or discussing that on the forum either.
If you’re not able to talk about evil people and their ideologies, then you will not be able to account for them in reasoning about how to steer the world. I think EA is already far too naive about how power dynamics work at large scales, given how much influence we’re wielding; this makes it worse.
There’s potential reputational damage for all the people doing great EA work across the spectrum here.
I think there are just a few particular topics which give people more ammunition for public take-downs, and there is wisdom in sometimes avoiding loading balls into your opponents’ cannons.
Insofar as you’re thinking about this as a question of coalitional politics, I can phrase it in those terms too: the more censorious EA becomes, the more truth-seeking people will disaffiliate from it. Habryka, who was one of the most truth-seeking people involved in EA, has already done so; I wouldn’t say it was directly because of EA not being truth-seeking enough, but I think that was one big issue for him amongst a cluster of related issues. I don’t currently plan to, but I’ve considered the possibility, and the quality of EA’s epistemic norms is one of my major considerations (of course, the forum’s norms are only a small part of that).
However, having said this, I don’t think you should support more open forum norms mostly as a concession to people like me, but rather in order to pursue your own goals more effectively. Movements that aren’t able to challenge foundational assumptions end up like environmentalists: actively harming the causes they’re trying to support.
Just to narrow in on a single point—I have found the ‘EA fundamentally depends on uncomfortable conversations’ point to be a bit unnuanced in the past. It seems like we could be more productive by delineating which kinds of discomfort we want to defend—for example, most people here don’t want to have uncomfortable conversations about age of consent laws (thankfully), but do want to have them about factory farming.
When I think about the founding myths of EA, most of them seem to revolve around the discomfort of applying utilitarianism in practice, or around how far we should expand our moral circles. I think EA would’ve broadly survived intact by lightly moderating other kinds of discomfort (or it may have even expanded).
I’m not keen to take a stance on whether this post should or shouldn’t be allowed on the forum, but I am curious to hear if and where you would draw this line :)
Narrowing in even further on the example you gave, as an illustration: I just had an uncomfortable conversation about age of consent laws literally yesterday with an old friend of mine. Specifically, my friend was advocating that the most important driver of crime is poverty, and I was arguing that it’s cultural acceptance of crime. I pointed to age of consent laws varying widely across different countries as evidence that there are some cultures which accept behavior that most westerners think of as deeply immoral (and indeed criminal).
Picturing some responses you might give to this:
“That’s not the sort of uncomfortable claim you’re worried about.”
But many possible continuations of this conversation would in fact have gotten into more controversial territory. E.g. maybe a cultural relativist would defend those other countries having lower age of consent laws. I find cultural relativism kinda crazy (for this and related reasons) but it’s a pretty mainstream position.
“I could have made the point in more sensitive ways.”
Maybe? But the whole point of the conversation was about ways in which some cultures are better than others. This is inherently going to be a sensitive claim, and it’s hard to think of examples that are compelling without being controversial.
“This is not the sort of thing people should be discussing on the forum.”
But EA as a movement is interested in things like:
- Criminal justice reform (which OpenPhil has spent many tens of millions of dollars on)
- Promoting women’s rights (especially in the context of global health and extreme poverty reduction)
- What factors make what types of foreign aid more or less effective
- More generally, the relationship between the developed and the developing world
So this sort of debate does seem pretty relevant.
I think EA would’ve broadly survived intact by lightly moderating other kinds of discomfort (or it may have even expanded).
The important point is that we didn’t know in advance which kinds of discomfort were of crucial importance. The relevant baseline here is not early EAs moderating ourselves, it’s something like “the rest of academic philosophy/society at large moderating EA”, which seems much more likely to have stifled early EA’s ability to identify important issues and interventions.
(I also think we’ve ended up at some of the wrong points on some of these issues, but that’s a longer debate.)
Do you have an example of the kind of early EA conversation which you think was really important in arriving at core EA tenets, but which might be frowned upon or censored on the forum now? I’m still super dubious about whether leaving out a small number of specific topics really leaves much value on the table.
And I really think conversations can be had in more sensitive ways. In the case of the original banned post, just as good a philosophical conversation could have been had without explicitly talking about killing people. The conversation was already being had on another thread, “the meat eater problem”.
And as a sidebar, yeah, I wouldn’t have any issue with that above conversation myself, because we have to discuss exactly that, practically, with donors and internally when providing health care and getting confronted with tricky situations. Also (again as a sidebar) it’s interesting that age-of-marriage/consent conversations can be where classic left-wing cultural relativism and gender safeguarding collide, and people don’t know which way to swing. We’ve had to ask that question practically in our health centers, to decide who to give family planning to and when to consider referring to the police etc. Super tricky.
My point is not that the current EA forum would censor the topics that were actually important early EA conversations, since EAs have now been selected for being willing to discuss those topics. My point is that the current forum might censor topics that would be important course-corrections, in the same way that, if the rest of society had been moderating early EA conversations, those conversations might have lost important contributions like impartiality between species (controversial: you’re saying human lives don’t matter very much!), the ineffectiveness of development aid (controversial: you’re attacking powerful organizations!), and transhumanism (controversial, according to the people who say it’s basically eugenics).
Re “conversations can be had in more sensitive ways”, I mostly disagree, because of the considerations laid out here: the people who are good at discussing topics sensitively are mostly not the ones who are good at coming up with important novel ideas.
For example, it seems plausible that genetic engineering for human intelligence enhancement is an important and highly neglected intervention. But you had to be pretty disagreeable to bring it into the public conversation a few years ago (I think it’s now a bit more mainstream).