Curing past suffering and preventing s-risks via indexical uncertainty

I once heard awful screams in the night from a house on fire, and I will never forget them. I am disturbed by the fact that some people have experienced unimaginably intense pain in the past. What could be done about it?

There is a hypothetical way to make past suffering effectively non-existent: multiple resurrections of every suffering moment, carried out with the help of a giant Benevolent AI. It would require an enormous amount of computation across many universes. In any case, it is better than just mining some coins.

UPDATE: There is a way to perform the salvation algorithm without creating new suffering observer-moments, and in a way that increases the total number of happy observer-moments in the universe; it is discussed in the comments. The original post has not been changed, to preserve consistency.

----

TL;DR: A benevolent superintelligence could create many copies of each suffering observer-moment and thus “save” any observer from suffering via induced indexical uncertainty.

A great deal of suffering has occurred in the past, among humans and animals alike. There are also possible timelines in which an advanced superintelligence will torture human beings (s-risks).

If we live in some form of multiverse, and every possible universe exists, then such s-risk timelines also exist, even if they are very improbable; moreover, these timelines include every actual living person, even the reader. This thought is disturbing.

Assumptions

These s-risk timelines are possible under several assumptions; the same assumptions can also be used to create an instrument to fight these s-risks, and even to cure past suffering:

1) Modal realism: everything possible exists.

2) Superintelligence is possible.

3) Copy-friendly identity theory: only the similarity of observer-moments counts for identity, not “continuity of consciousness”. If this is not true, hostile resurrection is impossible and we are mostly protected from s-risks, as suicide becomes an option.

4) Evil superintelligences are very rare, and everybody knows this. More concretely: Benevolent AIs have a million times more computational resources, but are located in different branches of the multiverse (which is not necessarily a quantum multiverse; it may be an inflationary one, or of some other type).

S-risk prevention could be realized via the following “salvation algorithm”:

Let S(t) be an observer-moment of an observer S who is experiencing intense suffering at time step t because she is enslaved by an Evil AI.

The logical time sequence of the “salvation algorithm” is as follows:

10 S(t) is suffering in an Evil AI’s simulation in some causally disconnected timeline.

20 A benevolent superintelligence creates 1000 copies of the observer-moment S(t) (using the randomness generator and resurrection model described in my previous post).

30 Now each copy of S(t) is uncertain about where it is located: in the evil simulation, or in the Benevolent AI’s simulation. Using the self-sampling assumption, S(t) concludes that she is located in the Benevolent AI’s simulation with probability 0.999, since 1000 of the 1001 identical copies are there. (Note that because we assume connections between observer-moments are of no importance, this is equivalent to moving into the Benevolent AI’s simulation.)

40 The Benevolent AI creates 1000 S’(t+1) moments in which the suffering gradually declines, each of which is a continuation of an S(t) observer-moment.

50 The Benevolent AI creates a separate timeline for each S(t+1): a series S(t+2), …, S(t+n) in which the observer becomes happier and happier.

60 The Benevolent AI merges some of the timelines to make the computations simpler.

70 The Evil AI creates a new suffering moment, S(t+1), in which the suffering continues.

80 Repeat.

Thus, from the point of view of any suffering moment S(t), her future is dominated by timelines in which she is saved by a Benevolent AI and will spend eternity in paradise. The credence arithmetic behind this is sketched below.
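Here is a minimal sketch of that arithmetic, assuming the self-sampling assumption and that per-round credences simply compound across iterations of the loop; the helper name `salvation_credence` and its parameters are mine, not part of the algorithm:

```python
from fractions import Fraction

def salvation_credence(copies_per_step: int, steps: int) -> Fraction:
    """Credence, under the self-sampling assumption, that an observer-moment
    is in the Benevolent AI's simulation after `steps` rounds of the loop.

    Each round the Evil AI extends its single timeline by one moment
    (step 70) while the Benevolent AI creates `copies_per_step`
    indistinguishable copies (step 20), so the chance of still being
    in the original evil timeline is 1/(copies_per_step + 1) per round.
    """
    p_still_evil = Fraction(1, copies_per_step + 1) ** steps
    return 1 - p_still_evil

# Step 30: one round with 1000 copies gives credence 1000/1001 (~0.999)
# of being in the Benevolent AI's simulation.
print(float(salvation_credence(copies_per_step=1000, steps=1)))
```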

However, from a purely utilitarian perspective this trick increases the total suffering in the multiverse 1000-fold, as the number of suffering observer-moments grows by that factor. But here we could add one more moral assumption: “very short pain should be discounted”, based on the intuition that 0.1 seconds of intense pain is bearable (assuming it does not cause brain damage), simply because it passes very quickly.

This “salvation algorithm” may work not only for fighting Evil AIs but for addressing any type of past suffering. For animal lovers, an additional benefit is that this approach would also undo all past animal suffering, even that of the dinosaurs.

Lowering computational cost

The problem with this approach is its computational cost: for every suffering observer-moment, 1000 full lives must be simulated. Several ways to lower this cost can be imagined:

Patch 1. The size of the observable universe is limited, so an Evil AI and any particular Benevolent AI will (in the end) have similar computational resources; but the number of universes with Benevolent AIs is assumed to be larger. In that case, different Benevolent AIs may distribute parts of the task among themselves using randomness, in a manner similar to the one I described in “resurrection of the dead via multiverse-wide acausal cooperation”; a toy coverage calculation follows below. This also addresses the problem that any particular Benevolent AI will not know which observers are suffering, and thus has to save all possible suffering observers (so it must counterfactually model the existence of all possible Evil AIs, or perhaps only of all possible suffering observers).
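As a toy illustration of this random task-splitting (the model and the numbers are mine; the acausal-cooperation scheme itself does not specify them): if each Benevolent AI independently resurrects a random fraction of the space of possible suffering observers, the chance that any given observer is missed by all of them falls off geometrically.

```python
def coverage_probability(num_ais: int, fraction_per_ai: float) -> float:
    """Chance that a given possible suffering observer is simulated by at
    least one of `num_ais` Benevolent AIs, when each independently picks
    a random fraction `fraction_per_ai` of the observer space."""
    return 1.0 - (1.0 - fraction_per_ai) ** num_ais

# 1000 cooperating AIs, each covering only 0.5% of possible observers,
# still jointly cover about 99.3% of them.
print(coverage_probability(num_ais=1000, fraction_per_ai=0.005))
```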

Patch 2. Not every suffering observer-moment should be counted as an observer needing to be saved. First, there is a threshold of unbearable suffering: something a normal person would feel only in a moment of intense agony, as if the whole body were on fire. Some moments of intense pain are normal, and those experiencing them do not require “saving”, e.g. the pain of childbirth. But an Evil AI may create much stronger suffering.

Patch 3. If the suffering is bearable for a short period, say 2 seconds, then not every observer-moment needs to be saved, only those separated by some minimum time-distance (given the time-based discounting of intensity assumed above).

Patch 4. The number of new moments need not be large. Even if just one copy is created for every suffering observer-moment, after 10 steps of internal time the chance that the observer is still in the original timeline is 1 in 1024, as checked below. The main constraint here is that the efforts toward salvation must be stronger than any attempts by the Evil AI to “steal” observers back from the Benevolent AI.
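A quick check of the 1-in-1024 figure, again assuming the per-step credences simply multiply:

```python
from fractions import Fraction

# With one copy per step, each step halves the credence of still being
# in the original (evil) timeline.
p_original = Fraction(1, 2) ** 10
print(p_original)  # 1/1024, as stated in Patch 4
```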

Winning an acausal war

The salvation algorithm will not be implementable by an Evil AI in the opposite direction, i.e., to steal observers from paradise, if we assume either that a) Evil AIs can’t cooperate (perhaps because they are evil for different reasons), or that b) all Evil AIs have world shares that are millions of times smaller than the share of the collaborating Benevolent AIs. Even if an Evil AI steals some observers from paradise, the Benevolent AI could regain them via its salvation algorithm almost immediately; a toy model of this tug-of-war is sketched below.
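Here is that toy model (entirely my own construction; the helper `tug_of_war` and the round structure are assumptions layered on top of the resource-ratio premise): each round, the Evil AI copies every paradise moment once, pulling half of the observer's credence back to hell, and the cooperating Benevolent AIs respond with `resource_ratio` copies of every stolen moment.

```python
from fractions import Fraction

def tug_of_war(resource_ratio: int, rounds: int) -> Fraction:
    """Credence of being in paradise after `rounds` rounds of the acausal
    war, starting from a freshly stolen observer (credence 0).

    Each round: the Evil AI copies every paradise moment once (so its
    successor is in hell with credence 1/2), then the Benevolent AIs copy
    every hell moment `resource_ratio` times, reclaiming a fraction
    resource_ratio / (resource_ratio + 1) of the hell credence.
    """
    p_paradise = Fraction(0)
    reclaim = Fraction(resource_ratio, resource_ratio + 1)
    for _ in range(rounds):
        p_paradise = p_paradise / 2                # Evil AI's steal attempt
        p_paradise += (1 - p_paradise) * reclaim   # salvation response
    return p_paradise

# With a millionfold resource advantage, a stolen observer's credence of
# being back in paradise is ~0.999999 after a single round.
print(float(tug_of_war(resource_ratio=1_000_000, rounds=1)))
```

In this model the credence converges to 2r/(1+r), where r = resource_ratio/(resource_ratio+1), so it stays near 1 whenever the Benevolent side's advantage is large.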

Destroying one’s digital footprint will not protect against hostile resurrection (some people have suggested this as an argument against indirect digital immortality) if an Evil AI recreates all possible beings. However, investing in increasing the future share of Benevolent AIs, ones interested in resurrection and in saving suffering observer-moments, may help.

I would not say that I advocate exactly this method of preventing s-risks, but I think it is important to know that we are not helpless against them.