Very Accurate Ancestor Simulation: Practicality and Ethics
Thus far, philosophical discussion involving ancestor simulation has revolved around the likelihood that we are already in one. However, as a controversial sociologist once said, “The philosophers have only interpreted the world, in various ways. The point is to change it.”
As such, it is my intent to demonstrate that highly accurate ancestor simulation is not only within the bounds of physical law, but also feasible. In order to do this, I will show that the human mind is computable, that it is possible to retrieve the minds of the dead, that simulating the Earth’s past is technically feasible, and that it can be done in an ethical manner.
Computability of the Human Mind
Many objections to the idea of consciousness uploading and simulation stem from Sir Roger Penrose’s argument for human noncomputability. The popular interpretation is that because we can construct logical proofs about noncomputable problems, our minds must be noncomputable as well. There is a significant error in this interpretation: it confuses knowledge about the characteristics of a noncomputable problem with knowledge of its solution. That these are not the same may be easily demonstrated.
Graham’s Number is an extremely large but still finite value that cannot be expressed in full except through Knuth’s up-arrow notation, as even a power tower meant to represent it would take up the volume of the observable universe. If the above argument were true and there were no distinction between the two kinds of knowledge, every mathematician who used Graham’s Number in a proof would need a brain of unimaginable size to store that knowledge. Incidentally, the same is true for any irrational number, as it has infinitely many decimal places, each containing significant information about the number’s exact value. Imagine having one’s skull implode into a black hole of infinite mass just because one used Pi or e to work something out! The very idea is absurd. (A dualist might reply that such knowledge is not stored in a person’s physical brain, but dualism is largely discredited.)
What is happening when we think of noncomputable problems or very large or irrational numbers is much simpler: We use approximation to represent the things we cannot comprehend. Nobody knows all the digits of Pi, just the opening digits and a few essential properties. Likewise, nobody knows all the digits of Graham’s Number, just the closing digits and the number’s properties. There’s nothing really noncomputable going on, just abstract representation of things that won’t fit in our brains. There is, of course, another way to disprove this argument: Ask a mathematician to solve the Halting Problem by hand. After all, if no computers are involved and the human mind is noncomputable, this would be possible. However, it is not possible, and that is exactly what the mathematician would tell you.
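To make the point concrete, here is a small illustrative sketch of my own (not drawn from any cited source): with nothing more than modular arithmetic, a few lines of Python recover the trailing digits of Graham’s Number without ever representing the number itself, which is exactly the kind of partial, property-based knowledge described above.

```python
# Trailing digits of Graham's Number via modular arithmetic: the last d decimal
# digits of a power tower of 3s stop changing once the tower is tall enough,
# and Graham's Number is (vastly) taller than that, so the stabilized digits
# are its trailing digits.

def euler_phi(m: int) -> int:
    """Euler's totient by trial division; the moduli involved here are small."""
    result, n, p = m, m, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

def tall_tower_of_3s_mod(m: int) -> int:
    """Value of 3^3^3^... (any sufficiently tall tower) modulo m."""
    if m == 1:
        return 0
    p = euler_phi(m)
    # Generalized Euler theorem: because the true exponent is itself a huge
    # tower, it may be replaced by (exponent mod phi(m)) + phi(m).
    return pow(3, tall_tower_of_3s_mod(p) + p, m)

digits = 8
print(f"...{tall_tower_of_3s_mod(10 ** digits):0{digits}d}")  # trailing digits of Graham's Number
```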
Penrose’s actual argument for human noncomputability is far more complicated, and from what I understand it relates to the philosophical question of what it actually means to understand truth. As nondeterministic phenomena such as particle decay do not seem to be important to the way our minds function, it seems reasonable to state that a sufficiently detailed simulation of a person’s brain would be able to understand truth in the same way as the person themselves. (Here I assume a distinction between the simulation and the original person, but in practice I do not see such distinctions as important.) The only reason for there to be a meaningful difference would be some nonphysical phenomenon occurring in connection with an organic brain but not with an identical simulation thereof. As such assertions are spiritual in nature, I will not dignify them with a response.
A more convincing argument might be that qualia themselves are noncomputable. While qualia are undoubtedly real but nonphysical phenomena (try finding a molecule of sensory experience!), this does not imply substrate dependence in the least. At the very most, it would imply that a nervous system would have to be simulated with perfect accuracy to host a consciousness, and a sufficiently large quantum computer could do this. After all, Feynman argued that any quantum system may be simulated efficiently by a quantum computer whose number of qubits is comparable to the number of particles (more precisely, degrees of freedom) in that system.
Other arguments that it is impossible to simulate the human mind claim that consciousness is quantum in nature, citing phenomena such as photosynthesis and the magnetic sense used by some birds as evidence that such effects could be sustained even within a “warm, wet, and noisy” environment such as a neuron’s interior. Again, Feynman’s argument implies that a reasonably large quantum computer could simulate this. At most, quantum consciousness would show that it is unfeasible to simulate consciousness with a classical computer. The situation is similar to the quantum Hall effect, which quantum computers can already simulate in a practical manner. Additionally, there are signs that quantum simulating a many-body system may be easier than we have supposed. [1]
Clearly, the use of quantum simulation solves these other objections.
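As a rough illustration of why the quantum route matters, here is a small sketch using my own figures and assumptions (16 bytes per stored amplitude, ~10^80 atoms in the observable universe): storing the full state vector of n qubits on a classical machine takes memory exponential in n, while a quantum simulator needs only about n physical qubits.

```python
import math

BYTES_PER_AMPLITUDE = 16              # one double-precision complex number (assumption)
ATOMS_IN_OBSERVABLE_UNIVERSE = 1e80   # commonly quoted order-of-magnitude figure

for n_qubits in (50, 100, 300):
    # A classical simulator must store 2**n complex amplitudes.
    log10_bytes = n_qubits * math.log10(2) + math.log10(BYTES_PER_AMPLITUDE)
    print(f"{n_qubits:>3} qubits: ~10^{log10_bytes:.0f} bytes classically "
          f"(vs ~10^{math.log10(ATOMS_IN_OBSERVABLE_UNIVERSE):.0f} atoms in the observable universe), "
          f"or just {n_qubits} physical qubits on a quantum simulator")
```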
Past Simulation and Retrieval of Connectomes
The difficulty of resurrecting those who are long since dead cannot be overstated. Nonetheless, it is indeed possible, because the unitarity of quantum mechanics prohibits the destruction of information. Information, of course, includes information about the contents of a person’s brain. As such, the state of a person’s mind as it was at any given point in their existence is retrievable. This isn’t to say that such retrieval would be easy. Take, for example, the brain of someone who has been cremated. One would have to comb through the air, collect every particle of ash and gaseous byproducts, and work backwards against the actions of fire and time. Imagine trying to unscramble an egg: it is impossible for any human to do. Nonetheless, we have proof that it is possible in the form of a universal algorithm that reverses a quantum system to an arbitrary previous state[2].
To be clear, this universal time reversal algorithm does not violate thermodynamics. It is a very accurate forensic technique that will also be of use in checking for errors in large quantum simulations, but it is no more Laplace’s Demon than a refrigerator is Maxwell’s Demon. The amount of entropy in the universe will continue to increase.
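To illustrate the underlying principle in miniature, the following sketch (a toy demonstration of reversible unitary evolution, not an implementation of the cited algorithm) evolves a small quantum state forward with a random unitary and then recovers the earlier state exactly by applying the inverse:

```python
import numpy as np

# Unitary evolution never destroys information, so applying the conjugate
# transpose of the evolution operator recovers the earlier state exactly.
rng = np.random.default_rng(0)

dim = 8                                   # a tiny, 3-qubit-sized Hilbert space
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(A)                    # QR of a random complex matrix yields a unitary

psi0 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi0 /= np.linalg.norm(psi0)              # normalised initial state

psi_later = U @ psi0                      # evolve "forward in time"
psi_back = U.conj().T @ psi_later         # apply the inverse evolution

print("recovered initial state:", np.allclose(psi_back, psi0))   # True
```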
This algorithm is technically of $O(N)$ complexity when the system’s current state is known, and of higher complexity still when the state is unknown. However, $N$ here is the dimensionality of the system’s Hilbert space, which is $N = V^{n}$, where $V$ is the number of degrees of freedom available to a given particle in the system and $n$ is the number of particles in that system. As such it is really of at least $O(V^{n})$ complexity in the worst case. As the Earth contains around $10^{50}$ particles, the worst-case time complexity for this algorithm is at least on the order of $V^{10^{50}}$ operations. With this many operations, the time requirement for each operation is irrelevant. An operation could take a nanosecond or it could take a year, and the whole process would still seem just as long to us, stretching well past what we would describe as the heat death of the universe. However, all hope is not lost. Assuming protons do not decay, all matter will eventually collapse into iron, then neutronium, and finally black holes via macroscopic quantum-mechanical tunneling. Freeman Dyson derives four possible lifetimes for matter, depending on the minimum mass
necessary for a black hole to form: If arbitrarily small black holes can form, all matter is unstable, vanishing into black holes that then evaporate; this would happen in a relatively short time. If the Planck mass is the lower limit, the lifetime $T$ for any piece of matter larger than about 100 μm is on the order of $10^{10^{26}}$ years. If the quantum mass (the mass below which classical descriptions of an object are meaningless) is the limiting factor, we get a longer but still finite lifetime for objects with masses larger than one million tons. If the Chandrasekhar mass is the limit, the lifetime is on the order of $10^{10^{76}}$ years for stellar remnants. Dyson considered the second scenario the most likely[3]. In cases 3 and 4, the algorithm will complete before the collapse of all such masses for all but unreasonably large values of $V$. In case 2, any structure built to execute this algorithm must be composed of dustlike, self-repelling nanoparticles to avoid undergoing black hole formation itself. The situation is technically workable for cases 2, 3, and 4, assuming protons do not decay and the universe does not come to a violent end, but is completely unacceptable otherwise. In addition, massive objects that do not have event horizons may also undergo a form of gravitational decay. Clearly, the time must be shortened.

Fortunately, the algorithm’s creators have already outlined one method to do this: quantum parallelism, which the algorithm does not currently use. This would reduce the number of logic gates required for the calculation, improving its time complexity. There is also the possibility of devising an algorithm with better time complexity even without quantum parallelism, as well as the fact that quantum computing, unlike classical computing, does not have a speed limit[4]. Other, less certain methods include CTC (closed timelike curve) computing and the recursive creation of time-accelerated basement universes. Both are discussed as possibilities in some scientific literature, but both would require extensive spacetime engineering, which falls under the category of “sufficiently advanced technology”. As such we will ignore them until we discover feasible ways of manipulating spacetime, if that is possible at all. Of the two, CTC computing seems the more plausible. Research on faster, more efficient quantum computers must be prioritized, but with billions of years to implement a solution, advances are almost guaranteed.
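To give a feel for the magnitudes discussed above, here is a small sketch with an illustrative, assumed value of $V$ (none of these figures come from the cited papers), showing why the per-operation time is a rounding error next to the size of the operation count:

```python
from math import log10

# Illustrative scaling sketch: assume V possible states per particle and n
# particles, with a worst-case operation count of order V**n.
V = 10            # assumed degrees of freedom per particle (purely illustrative)
n = 1e50          # rough number of particles in the Earth

log10_ops = n * log10(V)     # log10 of the operation count, here ~1e50

# Slowing every operation from one nanosecond to one whole year multiplies the
# total runtime by only ~3e16, i.e. adds ~16.5 to log10 of the runtime in any
# unit you like -- utterly negligible next to an exponent of order 1e50.
ns_per_year = 3.156e16
print(f"log10(operations)           ~ {log10_ops:.3g}")
print(f"log10(penalty for slow ops) ~ {log10(ns_per_year):.3g}")
```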
Having shown that the time reversal of a quantum simulation the size of the Earth is indeed possible, we must move on to estimating the technical requirements of such a project in order to determine its feasibility. The exact solution to the problem at hand is still unclear. However, any valid solution must include these elements:
1. In order to resurrect every dead being who lived on Earth, it is necessary to disassemble the entire planet (or at least the atmosphere, crust, and mantle) in such a way that information about its component particles may be recorded. It is also necessary to disassemble spacegoing habitats, as they will contain material from Earth in the form of biomass.
2. We must engineer infrastructure capable of both storing this information and using it as an input to calculate past states. The infrastructure must also include a way to “fill in the blanks”: while the amount of hydrogen and helium escaping Earth’s atmosphere is relatively small, it is non-negligible, and there is also the fate of various probes which have either left the Solar System or been disposed of in irretrievable ways, for example in the atmospheres of gas giants or within the Sun itself.
3. It will be necessary not only to build nanomachines capable of measuring the position and momentum of single atoms to within acceptable tolerances, but also to compensate for quantum uncertainty and to solve for missing particles, like hydrogen atoms blown away from the Earth by the solar wind. In addition, the disassembly of our former home (and any other worlds we may live and die on before implementing the process of resurrection) will not be instantaneous. Effects from both the measurement of the very first particles and the construction of measurement infrastructure will propagate throughout the entire system, affecting all subsequent measurements, and this effect will repeat with each measurement taken. For this reason, information from the very last measurement will need to be processed in a way that compensates for the effects of all previous measurements. Such compensation, however, will resemble the past-state calculation required by element two. If the states of the nanomachines and assorted infrastructure during measurement are also known, the compensation could be made easier.
The easiest way to measure the Earth’s constituent particles would be to cool it to the cosmic background temperature to settle the atmosphere and oceans into more measurable states. This could be accomplished with the use of a reflective statite belt surrounding the sun on the orbital plane, blocking Earth’s insolation. Then, as the Earth’s crust is measured, it must be stripped away so that the mantle can cool to manageable temperatures via thermal radiation. This process would repeat until the core was exposed. While the measurement apparatus would of course be sourced from the Earth system being measured, it should be possible to fill in the missing pieces and/or construct equipment to measure the original apparatus from matter that has already been measured. One study shows that an altered quantum simulation can be self-healing to a certain extent, i.e., the butterfly effect does not exist on the quantum level.[5] This may make the process simpler.
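As a crude feasibility check on the cooling step, here is a lower-bound sketch under my own assumptions (an internal heat content of order 10^31 J, a fixed effective radiating temperature, and no conduction bottleneck through the crust); it suggests a timescale of millions of years, comfortably within the billion-year horizons assumed here.

```python
from math import pi

# Lower-bound estimate of how long a sun-shaded Earth would take to radiate
# away its internal heat, treating the planet as a lumped body radiating from
# its surface at a fixed effective temperature. All figures are assumptions.
SIGMA = 5.670e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
R_EARTH = 6.371e6           # m
AREA = 4 * pi * R_EARTH**2  # ~5.1e14 m^2

INTERNAL_HEAT_J = 1e31      # assumed order of magnitude for Earth's internal heat
T_EFF = 255.0               # assumed effective radiating temperature, K

power_w = SIGMA * AREA * T_EFF**4          # ~1e17 W
seconds = INTERNAL_HEAT_J / power_w
print(f"lower-bound cooling time ~ {seconds / 3.156e7:.1e} years")
```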
This problem, while extremely difficult, can likely be solved. We have five billion years before Sol leaves the main sequence and engulfs Earth, and we can measure Earth before then while still developing ways to solve for missing particles. At the very least we can use this process in tandem with other forensic methods to find the Earth’s most likely past.
The second element of successfully retrieving the minds of the dead is of the greatest importance to this endeavor. While quantum effects would make ordinary simulation extremely difficult, quantum systems may be simulated with quantum computers containing a similar number of qubits as there are particles in the system. Nitrogen-vacancy (NV) centers in diamond may be used for quantum computing. Assuming that we use nanodiamonds, each containing 80 carbon atoms and hosting one NV qubit, it would be possible to simulate the entire Earth using roughly a third of the mass of Jupiter. While this does not account for the other parts of the computer, it is still rather small for a stellar-scale megastructure. As such, the engineering requirements are within the capacity of an advanced civilization. Networking the quantum computers will, of course, be necessary; such a technology has recently been developed[6].
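As a sanity check on that figure, here is a back-of-the-envelope sketch under my own assumptions (one NV qubit per simulated particle, roughly $10^{50}$ particles, about 81 atoms per nanodiamond, and no control or error-correction overhead); depending on the particle count and overhead assumed, the result lands within an order of magnitude of the fraction of Jupiter quoted above.

```python
# Back-of-the-envelope mass of an NV-centre simulator for the Earth.
# Assumptions are mine, not from the cited sources.
N_PARTICLES = 1e50
ATOMS_PER_QUBIT = 81                   # 80 carbon atoms plus one nitrogen
CARBON_MASS_KG = 12.01 * 1.66054e-27   # nitrogen's mass is close enough for this estimate
M_JUPITER_KG = 1.898e27

simulator_mass_kg = N_PARTICLES * ATOMS_PER_QUBIT * CARBON_MASS_KG
print(f"simulator mass ~ {simulator_mass_kg:.1e} kg "
      f"(~{simulator_mass_kg / M_JUPITER_KG:.2f} of Jupiter's mass)")
```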
Converting part of Jupiter’s mass into carbon would require self-replicating industrial fusion plants. However, outside the core of a massive star it would be impossible to reach the temperatures and pressures at which fusing helium into carbon breaks even energetically. As such, the fusion plants would require a constant external supply of power.
The computational infrastructure, power, and time requirements point towards the construction of one or more nested Dyson swarms as the optimal solution to meet the requirements of element two.
If we are to make our own heaven, it must be without end. Thus, the resurrection of the minds of the dead is merely one obstacle before us. The Sun, like any other G-class star, will eventually expand into a red giant, shed a planetary nebula, and leave behind a white dwarf that slowly cools into a black dwarf over an interval best described as “deep time”. Red and orange dwarfs will last far longer, but they too will burn out. Eventually all black holes will evaporate, and the universe will reach thermodynamic equilibrium, with no possibility of performing any kind of useful work. While in an open universe one could always find more matter to power fusion reactors, the universe’s expansion will make it effectively closed due to a shrinking cosmological event horizon. It is possible that the universe will undergo the Big Rip before this, and any number of other events might pose a danger to the simulated paradise we construct. As such, we must engineer a way to escape the end of the universe if at all possible. This would also be useful in the event of extended computational time requirements.
Research into this field is very rare, but several promising methods have been presented[7]. In general, they involve either escaping the universe itself, using the end of the universe to perform computations in some way, or increasing the subjective time available to simulated beings until it reaches infinity. All such solutions are dependent on our knowledge of the laws of physics, so we must prioritize the research of physics and cosmology. Only when we have sufficient knowledge of the universe can we begin to engineer a way to escape its end. Again, we have billions of years to solve these problems, at the very least.
Moral Objections
Some may question the ethics of running a historical simulation full of sentient beings, as such beings would naturally re-experience any suffering they went through in life. One might argue that this suffering is a moot point, as it has already been experienced outside the simulation. The ancestor simulation could also be run in reverse, meaning that the resurrected minds would be recovered at the very end of their lives, when from their point of view the suffering was already over. As such, they would likely have very little issue with having their past simulated in order to recover dead loved ones. Finally, even if the simulation couldn’t be run in reverse and the suffering weren’t something that had already happened, it would still pale in comparison to the wellbeing of countless beings over a (hopefully) infinite timespan.
A responsible utilitarian hesitates to reduce such moral conundrums to mere trolley problems, as in a realistic scenario there are often more than two solutions: Choosing a lesser evil often leads one to overlook choices that don’t require any evil at all. However, in this case such suffering would be necessary to resurrect the minds of the dead, allowing them a blissful eternity.
The very act of deconstructing the Earth would pose ethical difficulties, like any other large-scale engineering project of this nature. Either we must wait roughly 1.8 billion years for the Sun’s increasing brightness to render Earth uninhabitable by multicellular life, or (and this is much more likely) we must upload all Earth lifeforms capable of experiencing suffering. Either way, the Earth and its contents must be frozen and deconstructed in a way that does not cause undue harm to sentient life.
Another issue is the resurrection of those we consider to be morally reprehensible, and how to deal with them. Clearly, they must face consequences for their actions, but the typical solution many religions arrive at, i.e. eternal torture of some form or another, is unconscionable. Even if the vast majority of resurrected minds do not go to “hell”, such that the net good of such a system still trends to positive infinity, certain individuals would still face infinite suffering. It is also a great evil to consign a person’s mind to oblivion, whether by outright deletion or memory erasure. These options cannot be borne, as a utilitarian’s duty is not to merely choose a lesser evil but to choose the least evil possible. Nobody deserves eternal suffering or the cessation of existence, no matter what they did in life.
How, then, do we deal with the resurrected minds of Hitler and his ilk? Probably the best solution is to give them a working conscience and a strong sense of empathy, and to help them understand how their actions caused others to suffer. To those who would view this as torture, I will simply say that although nothing hurts like a guilty conscience, it is still fundamentally a learning experience. Though it is better never to have done anything to feel guilty about, you emerge all the wiser, with all the greater morality for having experienced that regret. Rather than mere academic knowledge of how to behave, you understand why to do so. Once the resurrected minds in question truly understand and regret their actions, such that if they had the chance to relive their lives without consequence they still would not repeat their previous wrongdoing, they should be released and their consciences eased.
According to utilitarian values, morality can be defined as minimizing suffering while also maximizing wellbeing. As wellbeing and suffering are not subjective, neither is morality, and as such there must be a correct definition of how to behave towards others in a moral manner. We must not only embrace Presentism, but we must also acknowledge that many of our actions today will be considered reprehensible in the future. (Animal cruelty in the form of the meat industry comes to mind, at least until we can develop vat-grown varieties.) It is this correct definition of morality, when we determine it, that we must use when judging the minds of the dead (and ourselves).
Other Objections
There are some who might say this project is impossible because Earth is not a completely closed system, as it is gradually losing part of its atmosphere. This is true. However, the vast majority of atmospheric loss is in the form of the two lightest gases, hydrogen and helium. Oxygen and carbon almost always stay on Earth, and compared to the atmosphere’s total mass the losses are quite small; the material that remains will contain most of the important information. As we have billions of years to perfect techniques to compensate for atmospheric loss, it should be possible to fill in the missing pieces, especially because the very time reversal algorithm we propose using to resurrect dead minds can also be used to check a quantum simulator’s accuracy.
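A quick sketch of the scale involved, using assumed present-day escape-rate estimates of roughly 3 kg/s of hydrogen and 50 g/s of helium (actual rates vary over geological time):

```python
# Rough scale of atmospheric escape versus the atmosphere as a whole,
# using assumed present-day rates (~3 kg/s hydrogen, ~0.05 kg/s helium).
H_ESCAPE_KG_S = 3.0
HE_ESCAPE_KG_S = 0.05
ATMOSPHERE_MASS_KG = 5.15e18
SECONDS_PER_GYR = 3.156e16          # seconds in one billion years

lost_per_gyr = (H_ESCAPE_KG_S + HE_ESCAPE_KG_S) * SECONDS_PER_GYR
print(f"mass lost per billion years ~ {lost_per_gyr:.1e} kg "
      f"({lost_per_gyr / ATMOSPHERE_MASS_KG:.1%} of the present atmosphere)")
```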
Space habitats are, by design, closed systems. As organic life is unsuited to interstellar exploration (uploaded people being much easier to transport, since they need no life-support equipment), and no other body in the Solar System is habitable for Terran life in the short term, there should be relatively little interaction between our biosphere (where almost all of the information is) and celestial bodies other than Earth. We should restrict the scattering of ash on extraterrestrial bodies: while we would only need to scan the areas where cremains have landed, it still adds processing time and complexity. Ash in sealed containers is more manageable, but there must be a moratorium on the disposal of bodies in gas giants or the Sun, as retrieving them would add catastrophic amounts of processing time to the resurrection project.
Towards Practicality
The requirements for physically implementing this project will seem impossible to the average person. Indeed, meeting them will require technology so advanced as to be indistinguishable from magic for many people. However, the basic technologies underlying the proposed process already exist: a universal time reversal algorithm has been developed for use in quantum simulations of arbitrary size, molecular nanotechnology is plausible (as shown by the development of helicene-based molecular motors), and the measurement of single atoms has been performed with electron microscopy and X-rays[8]. As science and technology advance, we will find ways to improve these technologies and solve the problems associated with this project. In the short term, we should encourage research into the relevant fields of science and engineering, and raise awareness of and scientific debate over the possibilities presented here.
In the long term, we hope that as humanity and our descendants become more advanced and more far-sighted, this idea will flourish and eventually be implemented. The time reversal algorithm for quantum computers is useful in other ways (primarily error checking), and uploaded people will no doubt wish to be with their deceased loved ones rather than mere approximations of them, so it will be easy for more people to see how the algorithm can be used for that purpose. As such, there is reason to believe that even if this essay is forgotten, its central theme will prove perennial, independently thought of time after time. It may take millions of years before we see that day, and countless aeons before our goal of resurrecting the dead is accomplished, but eventually we can create our own paradise. Until then, we can only hope.
I joined this forum specifically to share my paper. The time reversal algorithm mentioned here seems to be a concrete way to achieve perfectly accurate ancestor simulation, and though the timescales involved may be enormous, the required computational infrastructure is achievable in scale. I do not make any claims as to the details of such a project, but the algorithm exists and will likely see use as quantum computers become commonplace. It is possible to measure individual atoms, and molecular nanotechnology can be created. Thus, this is a question of engineering rather than one of physical impossibility.