An anonymous Quantum ML researcher.
Unfortunately this field often looks down on researchers who make statements without a strong mathematical proof, so a pseudonym was required!
That’s a good point. Why do you think that at least some part of utility generation doesn’t allow a more efficient quantum algorithm?
You don’t seem to be missing anything. If Everett is true then this is a whole different issue (future post): QCs become worth as much as measuring n superpositions (creating n many worlds) and then running a classical simulation.
As to decision theory, there are good papers* explaining why you do want your decision theory to be normalised IF you don’t want many worlds to break your life.
*David Deutsch: https://arxiv.org/ftp/quant-ph/papers/9906/9906015.pdf
*Hilary Greaves: https://arxiv.org/abs/quant-ph/0312136 (much more approachable)
Hi Evn, thanks for your points,
1. Yeah, this was initially my overwhelming opinion too. Coming from a QC background, normalising according to amplitude is just instinct, but just because the operators we have nice models for behave like this doesn’t mean we should expect this very different type of effect to act the same (for one, gravity doesn’t seem to!). There are some justifications** you could generate against this approach, but ultimately the point of the post is “we are uncertain”: it could be normalised, it could be exponential, or it could be some polynomial function. Given the scale, it seems it’s worth someone capable attempting a paper.
2. Fully agree with this point; this is exactly the question we wanted to address with the next blog post but, in the interest of time, haven’t written yet. You would essentially have a universe with “value” growing exponentially with the number of superposition-generating interactions happening in any instant (which is a lot). If you believed each superposition had some non-normalised value, this would mean you care about the long-run future way more (since it’s been multiplied by such a large value). Which might mean your only goal is to make sure there is some vaguely happy being as far into the future as possible. It gets even worse when you include your point about infinite-dimensional Hilbert spaces: suddenly the future becomes an infinite set of futures growing by infinity every second, and I know better than to pretend I understand infinity ethics at this level! As you say, this is not a settled debate; I also land on the side of many worlds, but I am far from certain in this belief.
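(A toy back-of-the-envelope sketch, purely illustrative and with made-up numbers for the branching rate and time horizon, of why non-normalised branch counting makes the far future dominate:)

```python
# Toy illustration only: compare the weight given to one unit of value now
# vs. far in the future, under normalised vs. non-normalised branch counting.
# BRANCHES_PER_STEP and T_FUTURE are made-up numbers, not physical estimates.

BRANCHES_PER_STEP = 2   # hypothetical branching factor per time step
T_FUTURE = 100          # hypothetical number of steps into the future

def branch_weight(t, normalised):
    """Weight attached to one unit of value experienced at time t."""
    if normalised:
        # Amplitudes renormalise, so the total weight stays 1 however much branching happens.
        return 1
    # Non-normalised: every branch counts in full, so the weight grows exponentially.
    return BRANCHES_PER_STEP ** t

for normalised in (True, False):
    print(f"normalised={normalised}: "
          f"weight now = {branch_weight(0, normalised)}, "
          f"weight at t={T_FUTURE} = {branch_weight(T_FUTURE, normalised):.3g}")
```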
**Suppose (for the sake of argument) you believe that a brain experiencing one second of happiness is worth one utilon, and that increasing the size of the brain increases the size of its moral worth (most people think a human is worth more than an ant). This brain can be simulated by some classical computation requiring a certain time, and there exists some quantum computation which is equivalent. Given the large number of processes needed to simulate a brain, at least some probably have a quantum speedup associated with them. Now you can run the same brain (say) 10x faster, which seems like it would be worth 10x more, because there are 10x more experiences. That implies the increased power of the QC is worth more than just normalising it to one. As you scale up this brain the quantum speedup scales too, which implies some scaling associated with it. Ultimately the exp vs poly debate comes down to what the most efficient utility-generating quantum computation is.
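(To make the footnote’s scaling argument concrete, here is a purely illustrative Python sketch with made-up runtimes, not estimates of brain-simulation cost: the utility multiplier you would assign to the QC is just the ratio of classical to quantum runtime for the utility-generating computation, so whether the answer is polynomial or exponential depends entirely on the best speedup available for that computation.)

```python
# Purely illustrative: the utility multiplier from the footnote's argument is
# the ratio classical_time(n) / quantum_time(n) for whatever computation
# generates the utility.  The runtimes below are hypothetical.

def utility_multiplier(classical_time, quantum_time, n):
    """How many times more experience-seconds per wall-clock second the QC gives."""
    return classical_time(n) / quantum_time(n)

# Hypothetical case 1: only a quadratic (Grover-like) speedup on the bottleneck.
quadratic = lambda n: utility_multiplier(lambda m: m**2, lambda m: m, n)

# Hypothetical case 2: an exponential speedup (Shor-like structure in the problem).
exponential = lambda n: utility_multiplier(lambda m: 2**m, lambda m: m**3, n)

for n in (10, 20, 40):
    print(f"n={n}: quadratic-speedup multiplier ~ {quadratic(n):.3g}, "
          f"exponential-speedup multiplier ~ {exponential(n):.3g}")
```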
What do you mean by store information? The state of an n-qubit system is (or can be thought of as) a vector of 2^n complex numbers, and it’s this that prohibits efficient classical simulation.
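(As a quick sanity check on the 2^n claim, a short NumPy sketch of the memory you’d need just to write that vector down classically, assuming one double-precision complex number per amplitude:)

```python
import numpy as np

# Memory required to store the full state vector of n qubits classically,
# at one double-precision complex number (16 bytes) per amplitude.
bytes_per_amplitude = np.dtype(np.complex128).itemsize  # 16

for n in (10, 30, 50):
    amplitudes = 2 ** n
    gigabytes = amplitudes * bytes_per_amplitude / 1e9
    print(f"n={n:2d} qubits -> {amplitudes:.3g} amplitudes, ~{gigabytes:.3g} GB")
```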
Perhaps you’re talking about storing and retrieving information, which does indeed have constraints (e.g. the Holevo bound): constraints that stop you from using a quantum computer as a kind of exponentially large memory stick where you store and retrieve information. But algorithms (like Shor’s) use this large space and then carefully encode their output (using the structure in the problem) in a way that can be transferred off the computer without breaking the Holevo bound.
I guess I believe the state space that you can’t necessarily access is the important element, not the information being brought in and out of the system.
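(To illustrate the distinction I’m gesturing at, a minimal NumPy sketch, simulation only: the state of n qubits lives in a 2^n-dimensional space, but a single computational-basis measurement only ever hands you n classical bits, consistent with the Holevo bound.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                       # number of qubits (illustrative)
dim = 2 ** n                # the state lives in a 2^n-dimensional space

# A random normalised n-qubit state: 2^n complex amplitudes "inside" the system.
state = rng.normal(size=dim) + 1j * rng.normal(size=dim)
state /= np.linalg.norm(state)

# A computational-basis measurement collapses all of that to a single outcome,
# i.e. just n classical bits per shot (in line with the Holevo bound).
probabilities = np.abs(state) ** 2
outcome = int(rng.choice(dim, p=probabilities))
print(f"dimension of the state space: {dim}")
print(f"bits you actually get out of one measurement: {n} -> {outcome:0{n}b}")
```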