Thanks for sharing this! Responding to just some parts of the object-level issues raised by the paper (I only read parts closely, so I might not have the full picture). I find several parts of this pretty confusing or unintuitive:
Your first recommendation in your concluding paragraph is: “EA needs to diversify funding sources by breaking up big funding bodies.” But of course “EA” per se can’t do this; the only actors with the legal authority to break up these bodies (other than governments, which I’d guess would be uninterested) are these funding bodies themselves, i.e. mainly OpenPhil. Given the emphasis on democratization and moral uncertainty, it sounds like your first recommendation is a firm assertion that two people with lots of money should give away most of their money to other philanthropists who don’t share their values, i.e. it’s a recommendation that obviously won’t be implemented (after all, who’d want to give influence to others who want to use it for different ends?). So unless I’ve misunderstood, this looks like there might be more interest in emphasizing bold recommendations than in emphasizing recommendations that stand a chance of getting implemented. And that seems at odds with your earlier recognition, which I really appreciate—that this is not a game. Have I missed something?
Much of the paper seems to assume that, for moral uncertainty reasons, it’s bad for the existential risk research community to be unrepresentative of the wider world, especially in its ethical views. I’m not sure this is a great response to moral uncertainty. My intuition would be that, under moral uncertainty, each worldview will do best (by its own lights) if it can disproportionately guide the aspects of the world it considers most important. This suggests that all worldviews will do best (by their own lights) if [total utilitarianism + strong longtermism + transhumanism]* retains over-representation in existential risk research (since this view cares about this niche field to an extremely unusual extent), while other ethical views retain their over-representation in the many, many other areas of the world that entirely lack these longtermists. These disproportionate influences just seem like different ethical communities specializing differently, to mutual benefit. (There’s room to debate just how much these ethical views should concentrate their investments, but if the answer is not zero, then it’s not the case that e.g. the field having “non-representative moral visions of the future” is a “daunting problem” for anyone.)
*I don’t use your term “techno-utopian approach” because “utopian” has derogatory connotations, not to mention misleading/inaccurate connotations re: these researchers’ typical levels of optimism regarding technology and the future.
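To make the specialization intuition above concrete, here is a toy calculation with entirely made-up numbers (mine, not the paper’s). Suppose there are 10 “units of influence” available in existential risk research (X) and 90 in everything else (Y), and that longtermists (A) value a unit of influence in X at 10 and in Y at 1, while everyone else (B) values them at 1 and 10. Compare proportional representation (each worldview holds half of each domain) with a specialization deal in which A trades 5 of its Y-units for B’s 5 X-units:

$$u_A^{\text{prop}} = 10(5) + 1(45) = 95, \qquad u_B^{\text{prop}} = 1(5) + 10(45) = 455$$
$$u_A^{\text{spec}} = 10(10) + 1(40) = 140, \qquad u_B^{\text{spec}} = 1(0) + 10(50) = 500$$

Both worldviews come out ahead by their own lights (140 > 95 and 500 > 455), even though A ends up heavily over-represented in X relative to its overall share of influence, which is the mutual-benefit point.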
Other thoughts:
Some other comment hinted at this: another frame that I’m not sure this paper considers is that non-strong-longtermist views are in one sense very undemocratic—they drastically prioritize the interests of very privileged current generations while leaving future generations disenfranchised, or at least greatly under-represented (if we assume there’ll be many future people). So characterizing a field as undemocratic because longtermism is over-represented sounds a little like calling the military reconstruction that followed the US Civil War (when the Union installed military governments in defeated Southern states to protect the rights of African Americans) undemocratic—yes, it’s undemocratic in a sense, but there’s also an important sense in which the alternative is painfully undemocratic.
How much we buy my argument here seems fairly dependent on how much we buy (strong) longtermism. It’s intuitive to me that (here and elsewhere) we won’t be able to fully answer “to what extent should certain views be represented in this field?” without dealing with the object-level question “to what extent are these views right?” The paper seems to try to side-step this, which seems reasonably pragmatic but also limited in some ways.
I think there’s a similarly plausible case for non-total-utilitarian views being in a sense undemocratic; they tend not to give everyone equal decision-making weight. So there’s also a sense in which seemingly fair representation of these other views is undemocratic.
As a tangent, this seems closely related to how a classic criticism of utilitarianism—that it might trample on the few for the well-being of a majority—is also an old criticism of democracy (which is a little funny, since the paper both raises these worries about utilitarianism and gladly takes democracy on board, although that might be defensible).
One thing I appreciate about the paper is how it points out that the ethically loaded definitions of “existential risk” make the scope of the field dependent on ethical assumptions—that helped clarify my thinking on this.
Re your second point, a counter would be that the implementation of recommendations arising from ERS will often have impacts on the population alive at the time of implementation, and the larger those impacts are the less feasible specialization seems. E.g. if total utilitarians/longtermists were seriously considering pursuing the implementation of global governance/ubiquitous surveillance, this might risk such a significant loss of value to non-utilitarian non-longtermists that it’s not clear total utilitarians/longtermists should be left to dominate the debate.
I mostly agree. I’m not sure I see how that’s a counter to my second point though. My second point was just that (contrary to what the paper seems to assume) some amount of ethical non-representativeness is not in itself bad:
There’s room to debate just how much these ethical views should concentrate their investments, but if the answer is not zero, then it’s not the case that e.g. the field having “non-representative moral visions of the future” is a “daunting problem” for anyone.
Also, if we’re worried about implementation of large policy shifts (at least, if we’re worried about this under “business as usual” politics), I think utilitarians/longtermists can’t and won’t actually dominate the debate, because policymaking processes in modern democracies by default engage a large and diverse set of stakeholders. (In other words, dominance in the internal debates of a niche research field won’t translate into dominance of policymaking debates—especially when the policy in question would significantly affect many people.)