Haha, this is a great hypothetical comment!

The concreteness is helpful, because I think my take is that, in general, writing something like this is emotionally exhausting (not to mention time-consuming!) - especially so if you’ve got skin in the game, often come across things like this that call for a response, and keep feeling the pressure to force your feelings into a more acceptable format.
I reckon that crafting a message like that, if I were upset about something, could well take half a work day. And I’d have in my head all the being upset / being angry / being scared people on the forum would find me unreasonable / resentful that people might find me unreasonable / doubting myself the whole time. (Though plausibly I’m in part just describing the human condition there. Trying to do things is hard...!)
Overall, I think I’m just more worried than you that requiring comments to be too far in this direction has too much of a chilling effect on discourse and is too costly for the individuals involved. And it really just is a matter of degree here and what tradeoffs we’re willing to make.
(It makes me think it’d be an interesting exercise to write a number of hypothetical comments, arrange them on a scale of how much they major on carefully explaining priors, caveating, communicating meta-level intention, etc., and then see where we’d draw the line of acceptable / not!)
There’s an angry top-level post about evaporative cooling of group beliefs in EA that I haven’t written yet, and won’t until it would no longer be an angry one. That might mean that the best moment has passed, which will make me sad for not being strong enough to have competently written it earlier. You could describe this as my having been chilled out of the discourse, but I would instead describe it as my politely waiting until I am able and ready to explain my concerns in a collected and rational manner.
I am doing this because I care about carefully articulating what I’m worried about, because I think it’s important that I communicate it clearly. I don’t want to cause people to feel ambushed and embattled; I don’t want to draw battle lines between me and the people who agree with me on 99% of everything. I don’t want to engender offense that could fester into real and lasting animosity, in the very same people who, if approached collaboratively, would pull with me to solve our mutual problem out of mutual respect and love for the people who do good.
I don’t want to contribute to the internal divisions growing in EA. To the extent that it is happening, we should all prefer to nip the involution in the bud—if one has ever been on team Everyone Who Logically Tries To Do The Most Good, there’s nowhere to go but down.
I think that if I wrote an angry top-level post, it would deserve to be downvoted into oblivion, though I’m not sure it would be.
I think on the margin I’m fine with posts that will start fights being chilled. Angry infighting and polarization are poisonous to what we’re trying to do.
I think you are upset because FLI or Tegmark was wronged. Would you consider hearing another perspective about this?

I barely give a gosh-guldarn about FLI or Tegmark outside of their (now reduced) capacity to reduce existential risk.
Obviously I’d rather bad things not happen to people and not happen to good people in particular, but I don’t specifically know anyone from FLI and they are a feather on the scales next to the full set of strangers who I care about.
If Tegmark or FLI was wronged in the way your comments and others imply, then you are correct and justified in your beliefs. But if the apology or the currently known facts don’t make that clear, there’s an object-level problem, and it’s bad to be angry that they were wronged, or to build further arguments on that belief.
I think it’s pretty obvious at this point that Tegmark and FLI were seriously wronged, but I barely care about any wrong done to them, and am largely uninterested in the question of whether it was wildly disproportionate or merely sickeningly disproportionate.
I care about the consequences of what we’ve done to them.
I care about how, in order to protect themselves from this community, the FLI is
working hard to continue improving the structure and process of our grantmaking processes, including more internal and (in appropriate cases) external review. For starters, for organizations not already well-known to FLI or clearly unexceptionable (e.g. major universities), we will request and evaluate more information about the organization, its personnel, and its history before moving on to additional stages.
I care about how everyone who watched this happen will also realize the need to protect themselves from us by shuffling along and taking their own pulses. I care about the new but promising EAs who no one will take a chance on, the moonshots that won’t be funded even though they’d save lives in expectation, the good ideas with “bad optics” that won’t be acted on because of fear of backdraft on this forum. I care about the lives we can save if we don’t rush to conclusions, rush to anger, if we can give each other the benefit of the doubt for five freaking minutes and consider whether it’d make any sense whatsoever for the accusation du jour to be what it looks like.
Getting to one object-level issue:

If what happened was that Max Tegmark or FLI received many dubious grant applications, and this particular application made it a few steps through FLI’s processes before it was caught, then expo.se’s story and the negative response you object to on the EA Forum would be bad, destructive, and false. If this was what happened, it would absolutely deserve your disapproval and alarm.
I don’t think this is true, though. What we know is:
An established (though hostile) newspaper gave an account with actual quotes from Tegmark that contradict his apparent actions.
The bespoke funding letter, signed by Tegmark, explicitly promised funding (“approved a grant”) conditional on registration of the charity.
Tegmark hired a lawyer.
When Tegmark edited more content into his comment, I was surprised by how positive a reception the edit got, given that it simply disavowed funding extremist groups.
I’m further surprised by the reaction and changing sentiment on the forum in response to this post, which simply presents an exonerating story. That story is directly contradicted by the signed statement in the letter itself.
Contrary to the top-level post, it is not standard practice to hand out signed declarations of financial support, with wording like “approved a grant”, while substantial vetting remains. It’s also extremely unusual for any non-profit to hire a lawyer to explain that a prospective grantee failed vetting in the application process. We also haven’t seen any evidence that FLI actually communicated a rejection. Expo.se seems to have a positive record, and even accepting the view that newspapers or journalists are untrustworthy, it’s costly for an outlet to outright lie or misrepresent facts.
There are other issues with Tegmark’s/FLI’s statements (e.g. deflections about the lack of direct financial benefit to his brother, and no acknowledgment of the material support the letter provided for registration, or of the reasonable suspicion that the application was a ploy to produce the letter).
There’s much more that is problematic underpinning this. If I had more time, I would start a long thread explaining how funding and family relationships could interact really badly in EA/longtermism for several reasons, and another about Tegmark’s forays into geopolitical issues, which are clumsy at best.
Another comment said the EA Forum reaction contributed to actual harm to Tegmark/FLI by amplifying the false narrative. A look at Twitter, or at how the story has continued and been picked up by Vice, suggests to me that this isn’t true. Unfortunately, I think the opposite is true.
The concreteness is helpful, because I think my take is that, in general, writing something like this is emotionally exhausting (not to mention time-consuming!) - especially so if you’ve got skin in the game, often come across things like this that call for a response, and keep feeling the pressure to force your feelings into a more acceptable format.
Yep, I think it absolutely is.
It’s also not an accident that my version of the comment is a lot longer and covers more topics (and therefore would presumably have taken way longer for someone to write and edit in a way they personally endorsed).
I don’t think the minimally acceptable comment needed to be quite that long or cover quite that much ground (though I think it would be praiseworthy to do so), but directionally I’m indeed asking people to do a significantly harder thing. And I expect this to be especially hard in exactly the situations where it matters most.
I reckon that crafting a message like that, if I were upset about something, could well take half a work day. And I’d have in my head all the being upset / being angry / being scared people on the forum would find me unreasonable / resentful that people might find me unreasonable / doubting myself the whole time. (Though plausibly I’m in part just describing the human condition there. Trying to do things is hard...!)
❤
Yeah, that sounds all too realistic!
I’m also imagining that while the author is trying to put together their comment, they might be tracking the fact that others have already rushed out their own replies (many of which probably suck from your perspective), and discussion is continuing, and the clock is ticking before the EA Forum buries this discussion entirely.
(I wonder if there’s a way to tweak how the EA Forum works so that there’s less incentive to go super fast?)
One reason I think it’s worth trying to put in this extra effort is that it produces a virtuous cycle. If I take a bit longer to draft a comment I can more fully stand by, then other people will feel less pressure to rush out their own thoughts prematurely. Slowing down the discussion a little, and adding a bit more light relative to heat, can have a positive effect on all the other discussion that happens.
I’ve mentioned NVC (Nonviolent Communication) a few times, but I do think NVC is a good example of a thing that can help a lot at relatively little time+effort cost. Quick, easy hacks are very good here, exactly because this can otherwise be such a time suck.
A related hack is to put your immediate emotional reaction inside a ‘this is my immediate emotional reaction’ frame, and then say a few words outside that frame. Like:
“Here’s my immediate emotional reaction to the OP:
[indented italicized text]
And here are my first-pass thoughts about physical reality, which are more neutral but might also need to be revised after I learn more or have more time to chew on things:
[indented italicized text]”
This is kinda similar to some stuff I put in my imaginary Shakeel comment above, but being heavy-handed about it might be a lot easier and faster than trying to make it feel like an organic whole.
And I think it has very similar effects to the stuff I was going for, where you get to express the feeling at all, but it’s in a container that makes it (a) a bit less likely that you’ll trigger others and thereby get into a heated Internet fight, and (b) a bit less likely that your initial emotional reaction will get mistaken (by you or others) for an endorsed carefully-wordsmithed description of your factual beliefs.
Overall, I think I’m just more worried than you that requiring comments to be too far in this direction has too much of a chilling effect on discourse and is too costly for the individuals involved. And it really just is a matter of degree here and what tradeoffs we’re willing to make.
Yeah, this very much sounds to me like a topic where reasonable people can disagree a lot!
(It makes me think it’d be an interesting exercise to write a number of hypothetical comments, arrange them on a scale of how much they major on carefully explaining priors, caveating, communicating meta-level intention, etc., and then see where we’d draw the line of acceptable / not!)
Ooooo, this sounds very fun. :) Especially if we can tangent off into science and philosophy debates when it turns out that there’s a specific underlying disagreement that explains why we feel differently about a particular case. 😛