Some other comment hinted at this: another frame I'm not sure the paper considers is that non-strong-longtermist views are, in one sense, very undemocratic: they drastically prioritize the interests of very privileged current generations while leaving future generations disenfranchised, or at least greatly under-represented (assuming there will be many future people). So characterizing the field as undemocratic for over-representing longtermism sounds a little like calling the military Reconstruction that followed the US Civil War (when the Union installed military governments in defeated Southern states to protect the rights of African Americans) undemocratic: yes, it is undemocratic in a sense, but there is also an important sense in which the alternative is painfully undemocratic.
How much my argument here should move us depends fairly heavily on how much we buy (strong) longtermism. It seems intuitive to me that, here and elsewhere, we won't be able to fully answer "to what extent should certain views be represented in this field?" without engaging with the object-level question "to what extent are these views right?" The paper seems to try to sidestep this, which is reasonably pragmatic but also limited in some ways.
I think there's a similarly plausible case that non-total-utilitarian views are, in a sense, undemocratic: they tend not to give everyone equal decision-making weight. So there's also a sense in which seemingly fair representation of these other views is undemocratic.
As a tangent, this seems closely related to how a classic criticism of utilitarianism (that it might trample on the few for the well-being of the majority) is also an old criticism of democracy. That's a little funny, since the paper both raises these worries about utilitarianism and gladly takes democracy on board, though that might be defensible.
Other thoughts:
One thing I appreciate about the paper is how it points out that the ethically loaded definitions of "existential risk" make the scope of the field dependent on ethical assumptions; that helped clarify my thinking on this.