Could you say more about the relevance you perceive to the theory and/or practice of effective altruism?
Part 1 frames this largely in terms of major corporations (which includes big universities), and especially tech companies, exercising their power. On one hand, I'm less worried about individual people and relatively small social movements effectively suppressing speech than I am about big tech companies, or even "ordinary" major corporations, doing so. Even to the extent that an EA actor is powerful enough to achieve some suppression, the actual effect on the suppressed speech's ability to obtain a hearing should be pretty minimal.
On the other hand, I find the interests of individuals and social movements in distancing themselves from speech they find odious to be much greater than those of large corporations. Some of that is about association: if you're trying to create a certain sort of community, tolerating problematic speech will impede that.
So the balance implied by the linked material (which, to be fair, wasn't written with the EA community in mind) doesn't strike me as particularly helpful for the types of "suppression" decisions that individual EAs, EA organizations, and the EA community as a whole are likely to face.
Thanks for the comment, Jason.
I was thinking the post could help people reflect on whether they are in some way elevating or suppressing content too much based on agreement/disagreement, and too little based on whether it could update their views.