Thanks Jim! I think this points in a useful direction, but I'm not sure I would describe this argument as "debunked". Instead, I think I would say that the following claim from you is the crux:
Potential future forms of suffering (e.g., digital suffering)[2] do not seem to similarly correlate with inefficiency
As an example of why this claim is not obviously true: Quicksort is provably about as efficient as a comparison-based sorting algorithm can be, and I'm fairly confident it doesn't involve suffering. If you told me that you had an algorithm which suffered while sorting a list, I would feel fairly confident that this algorithm would be less efficient than quicksort (i.e., suffering is anti-correlated with efficiency).
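For concreteness, here is a minimal quicksort sketch (purely illustrative, not from the original post): the entire computation is comparisons, list building, and recursion, with nothing that looks like a plausible locus of suffering.

```python
# Illustrative quicksort: comparisons, list construction, and
# recursive calls; no component that could plausibly suffer.
def quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x < pivot]   # elements below the pivot
    larger = [x for x in rest if x >= pivot]   # elements at or above it
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```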
Will this anti-correlation generalize to more complex algorithms? I don't really know. But I would be surprised if you were >90% confident that it would not.
Interesting, thanks Ben! I definitely agree that this is the crux.
I'm sympathetic to the claim that "this algorithm would be less efficient than quicksort" and that this claim is generalizable.[1] However, if true, I think it only implies that suffering is, by default, inefficient as a motivation for an algorithm.
Right after making my crux claim, I reference some of Tobias Baumann's (2022a, 2022b) work which gives some examples of how significant amounts of suffering may be instrumentally useful/required in cases such as scientific experiments where sentience plays a key role (where the suffering is not due to it being a strong motivator for an efficient algorithm, but for other reasons). Interestingly, his "incidental suffering" examples are more similar to the factory farming and human slavery examples than to the quicksort example.
To be fair, it's been a while since I've read about stuff like suffering subroutines (see, e.g., Tomasik 2019) and their plausibility, and people might have raised considerations going against that claim.
Right after making my crux claim, I reference some of Tobias Baumann's (2022a, 2022b) work which gives some examples of how significant amounts of suffering may be instrumentally useful/required in cases such as scientific experiments where sentience plays a key role (where the suffering is not due to it being a strong motivator for an efficient algorithm, but for other reasons).
I think it would be helpful if you provided some of those examples in the post.
Yeah, I find some of Baumann's examples plausible, but in order for the future to be net negative we don't just need some examples, we need the majority of computation to be suffering.[1]
I don't think Baumann is trying to argue for that in the linked pieces (or if they are, I don't find it terribly compelling); I would be interested in more research looking into this.
[1] Or maybe the vast majority to be suffering. See e.g. this comment from Paul Christiano about how altruists may have outsized impact in the future.
I do not mean to argue that the future will be net negative. (I even make this disclaimer twice in the post, aha.) :)
I simply argue that the "efficiency converges with methods that involve less suffering" argument in favor of assuming it'll be positive is unsupported.
There are many other arguments/considerations to take into account to assess the sign of the future.
Ah yeah sorry, what I said wasn't precise; I mean that it is not enough to show that there exists one instance of suffering being instrumentally useful, you have to show that this is true in general.
(Unless I misunderstood your post?)
If I want to prove that technological progress generally correlates with methods that involve more suffering, yes! Agreed.
But while the post suggests that this is a possibility, its main point is that suffering itself is not inefficient, such that there is no reason to expect progress and methods that involve less suffering to correlate by default (a much weaker claim).
This makes me realize that the crux is perhaps the part quoted below, more than the claim we discussed above.
While I tentatively think the "the most efficient solutions to problems don't seem like they involve suffering" claim is true if we limit ourselves to the present and the past, I think it is false once we consider the long-term future, which makes the argument break apart.
Future solutions are more efficient insofar as they overcome past limitations. In the relevant examples, enslaved humans and exploited animals, suffering itself is not the limiting factor. It is rather the physical limitations of those biological beings, relative to machines that could do a better job at their tasks.
I don't see any inevitable dependence between their suffering and these physical limitations. If human slaves and exploited animals were not sentient, this wouldn't change the fact that machines would do a better job.
Sorry for the confusion and thanks for pushing back! Helps me clarify what the claims made in this post imply and don't imply. :)