If Scott had used language like this, my guess is that the people he was trying to convince would have completely bounced off of his post.
I mostly agree with this. I wasn't suggesting he include that specific type of language, just that the arguments in the post don't go through from the perspective of most leaders/highly-engaged EAs. Scott has discussed similar topics on ACT here, but I agree the target audience was likely different.
I do think part of his target audience was probably EAs who he thinks are too critical of themselves, as he has written before, but that's likely a smallish fraction of his readers.
I do think it would have been clearer if he had included a caveat like “if you think that small changes in the chance of existential risk outweigh ~everything else then this post isn’t for you, read something else instead” but oh well.
Agree with that. I also think that if this is the intention, the title should maybe be different: instead of "In continued defense of effective altruism," it could be something like "In defense of effective altruism from X perspective." The current title seems to me to imply that effective altruism has been positive on its own terms.
Furthermore, people who identify as ~longtermists seemed to be sharing it widely on Twitter without any caveat of the type you mentioned.
And it seems fine to me to argue from the basis of someone else’s premises, even if you don’t think those premises are accurate yourself.
I feel like there's a spectrum of cases here. Say I, as a member of a movement X in which most people aren't libertarians, write a post titled "The libertarian case for X," arguing that X is good from a libertarian perspective.

(1) Even if those in X usually don't agree with the libertarian premises, the arguments in the post still check out from X's perspective. Perhaps the arguments are reframed to show libertarians that X will lead to positive effects by their lights as well as by X's. None of the claims in the post contradict what the most influential people advocating for X think.

(2) The case for X is distorted, and statements in the piece are highly optimized for convincing libertarians. Arguments aren't just reframed; new arguments are created that the most influential people advocating for X would disendorse.
I think pieces or informal arguments close to both (1) and (2) are common in the discourse, but I generally feel uncomfortable with ones closer to (2). Scott's piece is somewhere in the middle, perhaps even closer to (1) than (2), but it's too far toward (2) for my taste, given that one of the most important claims in the piece, the one that makes his whole argument go through, may be disendorsed by the majority of the most influential people in EA.