If Superpositions can Suffer
Tl;dr: Because quantum computers can process exponentially many inputs in superposition, it seems possible that a quantum computer’s capacity to generate utility grows exponentially with its size. If this exponential conjecture is true, then very near-term quantum computers could suffer more than all life on Earth has in the past 4 billion years.
Introduction
Quantum computing (QC) is a categorically different form of computation, capable of tackling certain problems exponentially faster than its classical counterparts. The bounty QC may bring is large: speed-ups across a broad range of topics, from drug discovery to reinforcement learning. What’s more, these speed-ups may be imminent. Quantum supremacy[1] has already been achieved, and the large quantum-chip manufacturers expect to build chips capable of delivering on some of these promises before the decade is out[2].
It seems, however, that these breakthroughs may be rotten. While the power of QC rests on the ability to tackle problems of exponentially larger size, it seems possible that a quantum computer would also be capable of suffering in exponentially larger amounts than a classical computer. Indeed, the same rudimentary quantum computers from the previous paragraph could conceivably suffer more in a second than all life on Earth has suffered since its inception.
What is a QC
The task of describing the basics of how a quantum computer works is the focus of many dense books, and while we encourage you to read them, we obviously can’t compress the entirety into this blog post’s introduction. Instead here is a simplified model:
In a classical computer (such as the one you’re looking at now), information is supplied as an input string, x; the computer processes this string and spits back out an output string, f(x). At a high level this could be you typing “what is Will MacAskill’s birthday?” into Google and it returning “1987”; at a low level you might ask “what value does f(x) take at x = 3?”.
Quantum computers operate similarly: you input a string and you get one out. But they have one crucial advantage: inputs can be prepared in superposition, and outputs will be returned in superposition. So now, instead of just asking one question, x_1, you can ask many questions together, x_1 + x_2 + … + x_n, and the resulting superposition is another valid state to put into the computer. While the nuances here are many, the power of quantum computing is clear once you know that the number of states that can be put in grows exponentially with the size of your quantum computer. A quantum computer of size 4 can manage a superposition of 2^4 = 16 inputs. A classical computer, by contrast, must double in size to run two inputs at once: a classical computer of size 4 can hold only a single 4-bit input, and matching all 16 branches would require a machine 16 times larger.
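To make that exponential growth concrete, here is a minimal sketch (Python with numpy, our own illustration rather than anything from the QC literature) that builds an n-qubit register as a vector of 2^n amplitudes and prepares an equal superposition of every possible input; the function name is ours.

```python
import numpy as np

def uniform_superposition(n_qubits: int) -> np.ndarray:
    """Return the state vector of n qubits prepared in an equal
    superposition of all 2**n classical input strings."""
    dim = 2 ** n_qubits                    # state space grows exponentially
    return np.full(dim, 1 / np.sqrt(dim))  # equal amplitude on every basis state

for n in [4, 8, 16]:
    state = uniform_superposition(n)
    print(f"{n} qubits -> {len(state)} inputs held in superposition")
# 4 qubits -> 16, 8 -> 256, 16 -> 65536: the classical memory needed just to
# write the state down doubles with every extra qubit.
```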
Obviously, having the sum of the states isn’t as useful as having each input separately; it is the task of many bright careers to figure out how to translate these extra superpositions into increased speed. For some problems (such as those related to period finding) the speed-up is exponential, but for others (such as optimising an arbitrary function) any quantum algorithm is provably limited to only a polynomial speed-up.
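As a rough feel for the polynomial-versus-exponential distinction, here is a toy comparison of query/operation counts (the exact scalings below are simplified standard asymptotics, chosen by us only for illustration):

```python
import math

# Toy operation-count comparison for problems over N = 2**n inputs.
# Unstructured search gets only a quadratic (Grover-type) speed-up;
# period finding gets a super-polynomial one.
for n in [20, 40, 60]:
    N = 2 ** n
    classical_search = N                # brute-force unstructured search
    grover_search = math.isqrt(N)       # ~sqrt(N): only polynomial speed-up
    period_finding = n ** 3             # roughly poly(n) quantum operations,
                                        # versus super-polynomial classically
    print(f"n={n}: classical {classical_search}, Grover {grover_search}, "
          f"period finding ~{period_finding}")
```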
Quantum suffering subroutines
A good introduction to suffering subroutines is What Are Suffering Subroutines?. The short of it is that they are small parts of a larger algorithm which might be morally relevant (e.g. because they are capable of experiencing suffering). Objections to this idea are numerous and complicated; we assume all readers have at least a rough understanding of the concept. In this section, we discuss the potential for quantum subroutine suffering, which comes with similar considerations to classical subroutine suffering, but with potentially astronomically larger stakes.
How large are these stakes? It all depends on how suffering scales. Does it scale with the number of superpositions (exponential suffering), or is it a smaller, perhaps polynomial, increase? If it’s the latter, we probably don’t have to worry about QC suffering in particular: the polynomial increase in suffering is washed out by the increased size and speed of classical computers (at least in the foreseeable future), and quantum computers are of no particular concern anytime soon.
If it’s the former (exponential suffering), we are about to enter a very troubling period. To describe how troubling this exponential suffering would be, it is useful to compare the amounts of compute involved. Estimates of the total computational cost of evolution (Interpreting AI compute trends – AI Impacts, using estimates from How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects) come to a large but bounded number of FLOP. Even a relatively small quantum computer of 200 qubits will be able to perform, in under a minute, a Quantum Fourier Transform that a classical machine would need on the order of 2^200 ≈ 10^60 FLOP to match: more computation than it would take to simulate the entire evolutionary history of life on Earth!
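For scale, a back-of-the-envelope sketch (our own arithmetic, not a figure from the cited sources): just writing down the state vector of a 200-qubit register already involves 2^200 complex amplitudes.

```python
# Number of complex amplitudes needed to represent a 200-qubit state classically.
n_qubits = 200
amplitudes = 2 ** n_qubits
print(f"2**200 ~ {amplitudes:.2e}")   # about 1.6e60
# Even at one floating-point operation per amplitude per gate, brute-force
# classical simulation of a 200-qubit QFT sits far beyond the evolution
# estimates discussed above.
```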
Scaling of utility
Given the previous section, it seems important to know which is the case: does utility scale polynomially or exponentially? Unfortunately we don’t have a clear answer; in this section we very briefly outline why you might believe either side. We primarily hope this post leads to someone more capable attempting to answer this question with less uncertainty.
For the exponential case: when you input a superposition of states, |x_1> + |x_2> + … + |x_n>, no matter what operations you run, it is always possible to distribute the operations over each element of the superposition, such that the final state is also a superposition |f(x_1)> + |f(x_2)> + … + |f(x_n)>, where each |f(x_i)> is equal to what you would get if you had run |x_i> by itself. While you can’t measure each |f(x_i)> at the end, the evolution of the individual states is equivalent to running them independently and then placing them into superposition at the end. Given that the experiences can be considered to be had independently, it seems possible that each term of the superposition deserves moral weight equivalent to the case where it had been run independently. This then implies you have exponentially many morally relevant beings, which as a group[3] deserve exponential moral weight.
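A small numerical sketch of the linearity claim above (Python/numpy, our own illustration; the random unitary stands in for “whatever program the QC runs”): applying the same operation to a superposition gives exactly the sum of what each branch would have become had it been run on its own.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 3-qubit unitary U, standing in for an arbitrary quantum program.
dim = 2 ** 3
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(A)                       # QR factorisation yields a unitary

# Two basis-state inputs |x1> and |x2>, and their normalised superposition.
x1, x2 = np.eye(dim)[1], np.eye(dim)[5]
superposition = (x1 + x2) / np.sqrt(2)

# Running the superposition equals the superposition of the two separate runs.
lhs = U @ superposition
rhs = (U @ x1 + U @ x2) / np.sqrt(2)
print(np.allclose(lhs, rhs))                 # True: linearity, the core of the argument
```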
Against the exponential case: the previous paragraph essentially applies the Everett/many-worlds interpretation of quantum mechanics to the inside of a quantum computer, so the many objections philosophers have raised against the Everett interpretation apply to that argument. Of particular concern is the “preferred basis problem”. There is a mathematical trick whereby you can rewrite any state as a sum of terms in another basis (a change of basis), and, because quantum computing is linear, the computation proceeds identically however you write the state. So there is both a way to write any single state, |x>, as an exponential superposition of different states (for instance, the single state |+> is equally well written as 1/sqrt(2)(|0> + |1>)), and a way to write an exponential sum of states as just one single state. A model of suffering that counts the number of states and multiplies is therefore very poorly defined: in one basis there is 1 being, in another basis there are exponentially many. To fix this there must be some “preferred basis” in which to do the counting.
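A sketch of that basis-counting ambiguity (our illustration, single qubit only): the same physical state has two non-zero terms in one basis and one in another, so “number of branches” is not basis-independent.

```python
import numpy as np

# Computational basis: |0>, |1>.  Hadamard basis: |+>, |->.
plus = np.array([1, 1]) / np.sqrt(2)          # the state |+>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # change-of-basis (Hadamard) matrix

amps_computational = plus      # amplitudes of |+> in the {|0>, |1>} basis
amps_hadamard = H @ plus       # amplitudes of the same state in the {|+>, |->} basis

print(np.count_nonzero(np.round(amps_computational, 12)))  # 2 "branches"
print(np.count_nonzero(np.round(amps_hadamard, 12)))       # 1 "branch"
# Same state, different branch counts: any rule that multiplies moral weight
# by the number of terms needs a preferred basis to count in.
```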
On top of this scaling question, there is the question of what the smallest morally relevant being is. If any particular state, |x_i>, (or process on that state) is not morally relevant, then any multiplication, exponential or otherwise, will just be multiplying a big number by zero. We will not discuss what this smallest state might be, as the question is the same as in the classical case, where it remains a topic of contention.
Quantum utility monsters/super-beneficiaries
Given the size of the potential experience of these machines, it seems they are prime candidates for Bostrom-esque AI utility monsters (also denoted as ‘super-beneficiaries’ in Sharing the World with Digital Minds). In this scenario, single labs with machines of a few million qubits[4] could produce many orders of magnitude more utility than the rest of the world put together.
It is important to note that quantum computers might become utility monsters without being capable of anything we would consider “intelligent” thought. For some tasks, quantum computers can solve a problem no faster than their classical counterparts, and a classical computer with only 100 bits is not even good at long division! So a quantum computer could have 100 qubits, be completely unable to do anything interesting, and still suffer on the order of exp(100).
Nevertheless, as quantum computers become larger, they will be capable of running more than simple subroutines in superposition; they might even be capable of running human beings in superposition. While even in the most optimistic timeline this is very far in the future (we might expect to simulate humans on classical machines first), the points of this post still apply: the experience could be amplified exponentially[5].
Summary
In this post we posed the question of how quantum superpositions relate to suffering and found that the scaling behaviour (polynomial versus exponential) produces wildly different answers. We hope this post will inspire a grant-making organisation, or an individual skilled in the philosophy of quantum mechanics, to address these questions in greater detail and with a greater understanding than the authors are able to provide.
The premature posting of this work was prompted when a post by Paul Christiano[6] almost swooped the core idea. So, following the time-honoured near-swoop academic tradition of desperately posting ASAP, we have published before the obvious next question is answered. In another post we hope to take these ideas further and ask: “Supposing many worlds applies to our universe, how does this change the moral weighting of different actions and the long-run future?”.
Acknowledgements
This post has taken over a year to finish writing/procrastinating. In that time a great many people have helped. EB owes particular credit to Robert Harling, George Noether and Wuschel for their corrections, comments and occasionally pushing against my will to make this post better. BIC acknowledges the EA Oxford September 2020 writing weekend and conversations therein which led to the first draft of this post.
A further point on scaling that doesn’t fit in the main post:
What if larger brains experience more than smaller brains by a non-linear amount?
Many people believe (correctly or not) that higher forms of intelligence carry larger moral weight: they care more about humans than about great apes, often by more than the linear ratio of brain sizes. Assuming this belief is correct, you might choose to ground it in some function that takes the size of the computer and outputs the moral weight. The scaling of this function is hugely important: if it is exponential, then you can reapply all the thinking in this post to classical computers. Furthermore, the proposed exponential scaling with computer size would then combine, in the exponent, with the exponential scaling of a quantum computer. For any QC in the near future (and therefore of limited size), this means classical supercomputers are of much larger concern.
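A toy comparison under these (entirely speculative) scaling assumptions; the functional forms and sizes below are ours, chosen only to show how the conclusion flips. The comparison is done in log-space, since exponentiating a supercomputer-sized number would overflow a float.

```python
import math

def log_moral_weight(size: float, scaling: str) -> float:
    """Log of a toy moral-weight function; 'scaling' is an assumption, not a claim."""
    return math.log(size) if scaling == "linear" else size  # log(exp(size)) = size

qc_size = 100          # qubits in a near-term quantum computer (illustrative)
classical_size = 1e8   # processing elements in a large supercomputer (illustrative)

print(log_moral_weight(qc_size, "exponential") > log_moral_weight(classical_size, "linear"))
# True: if only the QC scales exponentially, the small QC dominates.
print(log_moral_weight(classical_size, "exponential") > log_moral_weight(qc_size, "exponential"))
# True: but if classical size also enters the exponent, near-term supercomputers
# are the far larger concern, as argued above.
```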
Viewing moral weight as a computational task in some complexity class seems like it could be a very interesting project, but not one that either of the authors has time to pursue.
Notes
1. The point at which a QC performs any single task that classical computers could not do in millennia. This threshold was crossed in late 2019/2020 for a specific sampling problem. It is important to note that not all tasks become faster at this point, since the efficiencies of classical and quantum algorithms vary.
2.
3. Assuming you hold no particularly exotic population-ethics opinions.
4.
5. This could potentially be exploited to create a “grander future”, where computroniums simulate multiple experiences in superposition, although this would require the whole computer to communicate coherently, a non-trivial task for a stellar-sized computer.
6.
1. I think it is much more likely that different states should be counted according to their measure, not their existence. Denying this runs into the preferred-basis issues you mentioned: since |+> = 1/sqrt(2) (|0> + |1>), it’s unclear whether we should count one observer in state |+> or two observers, one in state |0> and one in state |1>. Accepting it will at least be consistent about how much moral weight the system has: weight <+|+> = 1 in the first case, weight 1/sqrt(2)(<0| + <1|) 1/sqrt(2)(|0> + |1>) = 1 in the second case (see the numerical check below). (Also, this issue is even worse in infinite-dimensional Hilbert spaces: a position eigenstate is a superposition of infinitely many momentum eigenstates. If we counted each state in a superposition separately, the most important question in the world for suffering would be whether space is discrete.)
2. This isn’t an issue that’s unique to quantum computers in all interpretations of quantum mechanics. In a theory where wavefunction collapse is a physical process, quantum computers would indeed be special; but in many worlds, everything is in a superposition anyway, and quantum computers are only special because they avoid decoherence at larger scales. I personally think something like many worlds is far more likely to be true than a theory that involves true nondeterminism, although it’s not a closed question.
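A quick numpy check of the measure arithmetic in point 1, as a minimal sketch:

```python
import numpy as np

zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (zero + one) / np.sqrt(2)              # |+> = 1/sqrt(2)(|0> + |1>)

# Treating |+> as one observer: total weight is <+|+>.
print(np.vdot(plus, plus))                    # ~1.0

# Treating it as two observers weighted by measure: |<0|+>|^2 + |<1|+>|^2.
print(abs(np.vdot(zero, plus))**2 + abs(np.vdot(one, plus))**2)   # also ~1.0
# Counting by measure gives the same total weight in either description;
# naive branch-counting would give 1 versus 2.
```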
Hi Evn, thanks for your points,
1. Yeah, this was initially my overwhelming opinion too. Coming from a QC background, normalising according to amplitude is just instinct, but just because the operators we have nice models for behave like this doesn’t mean we should expect this very different type of effect to act the same (for one, gravity doesn’t seem to!). There are some justifications** you could generate against this approach, but ultimately the point of the post is “we are uncertain”: it could be normalised, it could be exponential, or it could be some polynomial function. Given the scale, it seems worth someone capable attempting a paper.
2. Fully agree with this point; this is exactly the question we wanted to address with the next blog post, which in the interest of time we haven’t written yet. You would essentially have a universe whose “value” grows exponentially with the number of superposition-generating interactions happening in any instant (which is a lot). If you believed each superposition had some non-normalised value, this would mean you care about the long-run future far more (since it has been multiplied by such a large value), which might mean your only goal is to make sure there is some vaguely happy being as far into the future as possible. It gets even worse when you include your point about infinite-dimensional Hilbert spaces: suddenly the future becomes an infinite set of futures growing by infinity every second, and I know better than to pretend I understand infinite ethics at this level! As you say, this is not a settled debate; I also land on the side of many worlds, but I am far from certain in this belief.
**Suppose (for the sake of argument) you believe that a brain experiencing one second of happiness is worth one utiliton, and that increasing the size of the brain increases its moral worth (most people think a human is worth more than an ant). This brain can be simulated by some classical computation requiring a certain time, and there exists some quantum computation which is equivalent. Given the large number of processes needed to simulate a brain, at least some probably have an associated quantum speed-up. Now you can run the same brain (say) 10x faster; this seems like it would be worth 10x more, because there are 10x more experiences. That implies the increased power of the QC is worth more than just normalising it to one. As you scale up this brain the quantum speed-up scales too, which implies some scaling with size. Ultimately the exp-vs-poly debate comes down to what the most efficient utility-generating quantum computation is.
If the Everett interpretation is true, then all experiences are already amplified exponentially. Unless I’m missing something, a QC doesn’t deserve any special consideration. It all adds up to normality.
You don’t seem to be missing anything. If Everett is true then this is a whole different issue (future post), and QCs become worth as much as measuring n superpositions (creating n many-worlds branches) and then running a classical simulation.
As for decision theory, there are good papers* explaining why you do want your decision theory to be normalised IF you don’t want many worlds to break your life.
*David Deutsch: https://arxiv.org/ftp/quant-ph/papers/9906/9906015.pdf
*Hilary Greaves: https://arxiv.org/abs/quant-ph/0312136 (much more approachable)
This is somewhat covered by existing comments, but to add my wording:
It’s highly unlikely that utility is exponential in quantum state, for roughly the same reason that quantum information is not exponential in quantum state. That is, if you have n qubits, you can hold n bits of classical information, not 2^n. You can do more computation with n qubits, but only in special cases.
That’s a good point. Why do you think that no part of utility generation admits a more efficient quantum algorithm?
The issue is not the complexity but the information content. As mentioned, n qubits can’t store more than n bits of classical information, so the best way to think of them is as “n bits of information with some quantum properties”. It is therefore implausible that they correspond to exponential utility.
What do you mean by store information? The state of an n-qubit system is (or can be thought of as) a vector of 2^n complex numbers, and it’s this that prohibits efficient classical simulation.
Perhaps you’re talking about storing and retrieving information, which does indeed have constraints (e.g. the Holevo bound), constraints that rule out using quantum computers as a kind of exponentially large memory stick where you store and retrieve information. But algorithms (like Shor’s) use this large space and then carefully encode their output (exploiting the structure in the problem) in a way that can be read off the computer without violating the Holevo bound.
I guess I believe that the state space you can’t necessarily access is the important element, not the information being brought in and out of the system.
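As a minimal numpy sketch of that distinction (the random state and seed below are just for illustration): a 3-qubit state is described by 2^3 = 8 complex amplitudes, but a measurement hands back only a single 3-bit outcome.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
dim = 2 ** n

# A random 3-qubit state: 8 complex amplitudes exist in the description...
amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
amps /= np.linalg.norm(amps)
print(f"amplitudes in the state description: {dim}")

# ...but measuring the register yields just one n-bit string, sampled with
# probability |amplitude|^2; you can never read all 8 numbers back out.
outcome = rng.choice(dim, p=np.abs(amps) ** 2)
print(f"measurement outcome: {int(outcome):0{n}b}  ({n} classical bits)")
```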
Yes, Holevo as you say. By information I mean the standard definitions.