Points where I agree with the paper:
Utilitarianism is not any sort of objective truth; in many cases it is not even a good idea in practice (though in other cases it is).
The long-term future, while important, should not completely dominate decision making.
Slowing down progress is a valid approach to mitigating X-risk, at least in theory.
Points where I disagree with the paper:
The paper argues that “for others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent”. I think it is completely clear, given that in pre-industrial times most people lived in societies that were rather unfree and unequal (it is harder to say about “virtue”, since different people would argue for very different conceptions of what virtue is). Moreover, although intellectuals have argued for all sorts of positions (words are cheap, after all), few people try to return to pre-industrial life in practice. Finally, techno-utopian visions of the future are usually very pro-freedom and are entirely consistent with groups of people voluntarily choosing to live in primitivist communes or whatever.
If ideas are promoted by an “elitist” minority, that doesn’t automatically imply anything bad. Other commenters have justly pointed out that many ideas that are widely accepted today (e.g. gender equality, religious freedom, expanding suffrage) were initially promoted by elitist minorities. In practice, the X-risk conversation is dominated by a minority because they are the people who care most about X-risk. Nobody is silencing other people’s voices (maybe the authors would disagree, given their diatribe in this post, but I am skeptical).
“Democratization” is not always a good approach. Democratic decision processes are often dominated by tribal virtue-signaling (simulacrum levels 3/4), because from the perspective of each individual participant, using their voice for signaling is much more impactful than using it to affect the outcome (a sort of tragedy of the commons; see the sketch below). I find that democracy is good for situations that are zero-sum-ish (dividing a pie), where abuse of power is a major concern, whereas for situations that are cooperative-ish (i.e. everyone’s interests are aligned), it is much better to use meritocracy: that is, set up institutions that give more of a stage to good thinkers rather than an equal voice to everyone. X-risk seems much closer to the latter than to the former.
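To make the incentive gap concrete, here is a minimal sketch in Python of the standard pivotal-voter toy model. All of the numbers here (the stake each voter has in the outcome, the private payoff of signaling, the support probability p) are made up purely for illustration, not estimates of anything:

```python
from math import exp, lgamma, log

def log_pivotal_prob(n: int, p: float) -> float:
    # Log-probability that n other voters split exactly evenly, so that one
    # extra vote decides a simple-majority outcome (binomial tie model).
    if n % 2:
        n -= 1  # an exact tie requires an even number of other voters
    k = n // 2
    log_binom = lgamma(n + 1) - 2 * lgamma(k + 1)  # log C(n, k)
    return log_binom + k * log(p) + k * log(1 - p)

# Illustrative (made-up) payoffs: the stake each voter has in the collective
# outcome, and the private payoff of using one's voice for signaling instead.
outcome_stake = 1e6
signaling_payoff = 1.0

for n, p in [(100, 0.5), (10_000, 0.51), (1_000_000, 0.51)]:
    ev_outcome = exp(log_pivotal_prob(n, p) + log(outcome_stake))
    print(f"n={n:>9,}, p={p}: E[outcome payoff] ≈ {ev_outcome:.3g} "
          f"vs signaling payoff = {signaling_payoff}")
```

In a small, evenly split electorate the expected outcome payoff can still dominate, but once the electorate is large and not perfectly balanced, the probability of being pivotal collapses exponentially and the fixed signaling payoff wins, which is exactly the tragedy-of-the-commons dynamic above.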
If some risk is more speculative, that doesn’t mean we should necessarily allocate fewer resources to it. “Speculativeness” is a property of the map, not the territory: a speculative risk can kill you just as well as a non-speculative risk. The allocation of resources should be driven by object-level discussion, not by a meta-level appeal to “speculativeness” or “consensus”.
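As a toy illustration (with entirely made-up numbers), compare the expected harm of a low-probability “speculative” risk against a well-established one:

```python
# Toy expected-harm comparison; all figures are invented for illustration.
# A "speculative" risk with a low, uncertain probability can still dominate
# a well-understood risk once magnitude is taken into account.
speculative = {"p": 0.01, "deaths": 8e9}   # uncertain map, enormous stakes
consensus   = {"p": 0.90, "deaths": 1e7}   # well-established, smaller stakes

for name, r in [("speculative", speculative), ("consensus", consensus)]:
    print(f"{name}: expected deaths = {r['p'] * r['deaths']:.3g}")
# speculative: 8e+07, consensus: 9e+06 — the map-level label "speculative"
# does not by itself settle the allocation question.
```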
Because, unfortunately, we do not have consensus among experts about AI risk, talking about moratoria on AI seems impractical. With time, we might be able to build such a consensus and then go for a moratorium, although it is also possible we don’t have enough time for this.
This is a relatively minor point, but there is some tension between the authors’ call to stop the development of dangerous technology and their strong rejection of government surveillance. Clearly, imposing a moratorium on research requires some infringement on personal freedoms. I understand the authors’ argument as something like: early moratoria are better since they require less drastic measures. This is probably true, but the tension should be acknowledged more explicitly.