seems arguably higher than .0025% extinction risk and likely higher than 200,000 lives if you weight the expected value of all future people at >~100x that of current people.
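For concreteness, here is a rough back-of-envelope sketch of the arithmetic these figures seem to imply (my own illustration, not from the original comment; the ~8 billion population figure and the exact 100x weighting are assumptions):

```python
# Rough sketch: 200,000 lives is about 0.0025% of the current world population,
# so a 0.0025% reduction in extinction risk is roughly break-even with saving
# 200,000 present lives, and dominates once future people are weighted ~100x+.

current_population = 8e9        # assumed ~8 billion people alive today
risk_reduction = 0.0025 / 100   # 0.0025% expressed as a fraction
future_weight = 100             # illustrative >~100x weighting of future value

expected_current_lives = risk_reduction * current_population
print(expected_current_lives)   # 200,000 -- matches the lives-saved figure

expected_weighted_value = expected_current_lives * future_weight
print(expected_weighted_value)  # 20,000,000 "current-life equivalents"
```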
If Scott had used language like this, my guess is that the people he was trying to convince would have completely bounced off of his post.
I interpreted him to be saying something like "look Ezra Klein et al., even if we start with your assumptions and reasoning style, we still end up with the conclusion that EA is good."
And it seems fine to me to argue from the basis of someone else's premises, even if you don't think those premises are accurate yourself.
I do think it would have been clearer if he had included a caveat like "if you think that small changes in the chance of existential risk outweigh ~everything else then this post isn't for you, read something else instead," but oh well.
If Scott had used language like this, my guess is that the people he was trying to convince would have completely bounced off of his post.
I mostly agree with this; I wasn't suggesting he include that specific type of language, just that the arguments in the post don't go through from the perspective of most leaders/highly-engaged EAs. Scott has discussed similar topics on ACT here, but I agree the target audience was likely different.
I do think part of his target audience was probably EAs who he thinks are too critical of themselves, as I think he's written before, but it's likely a small-ish fraction of his readers.
I do think it would have been clearer if he had included a caveat like "if you think that small changes in the chance of existential risk outweigh ~everything else then this post isn't for you, read something else instead," but oh well.
Agree with that. I also think that if this is the intention, the title should maybe be different: instead of "In continued defense of effective altruism" it could be called something like "In defense of effective altruism from X perspective". The current title seems to me to imply that effective altruism has been positive on its own terms.
Furthermore, people who identify as ~longtermists seemed to be sharing it widely on Twitter without any caveat of the sort you mentioned.
And it seems fine to me to argue from the basis of someone else's premises, even if you don't think those premises are accurate yourself.
I feel like there's a spectrum of cases here. Let's say I, as a member of movement X in which most people aren't libertarians, write a post "Libertarian case for X", where I argue that X is good from a libertarian perspective.
(1) Even if those in X usually don't agree with the libertarian premises, the arguments in the post still check out from X's perspective. Perhaps the arguments are reframed to show libertarians that X will lead to positive effects on their belief system as well as on X's belief system. None of the claims in the post contradict what the most influential people advocating for X think.
(2) The case for X is distorted, and statements in the piece are highly optimized for convincing libertarians. Arguments aren't just reframed; new arguments are created that the most influential people advocating for X would disendorse.
I think pieces or informal arguments close to both (1) and (2) are common in the discourse, but I generally feel uncomfortable with ones closer to (2). Scott's piece is somewhere in the middle, perhaps even closer to (1) than (2), but I think it's too far toward (2) for my taste, given that one of the most important claims in the piece, a claim his whole argument depends on, may be disendorsed by the majority of the most influential people in EA.