I want to make salient these propositions, which I consider very likely:
1. In expectation, almost all of the resources our successors will use/affect come via von Neumann probes (or maybe acausal trade or affecting the simulators).
2. If 1, the key question for evaluating a possible future from scope-sensitive perspectives is whether the von Neumann probes will be launched, and what they will tile the universe with (modulo acausal trade and simulation stuff).
3. [controversial] The best possible thing to tile the universe with (maybe call it “optimonium”) is wildly better than what you get if you’re not really optimizing for goodness,[1] so given 2, the key question is whether the von Neumann probes will tile the universe with ~the best possible thing (or ~the worst possible thing) or something else.
4. Considerations about just our solar system or value realized this century miss the point, by my lights. (Even if you reject 3.)
Related:

Call computronium optimized to produce maximum pleasure per unit of energy “hedonium,” and that optimized to produce maximum pain per unit of energy “dolorium,” as in “hedonistic” and “dolorous.” Civilizations that colonized the galaxy and expended a nontrivial portion of their resources on the production of hedonium or dolorium would have immense impact on the hedonistic utilitarian calculus. Human and other animal life on Earth (or any terraformed planets) would be negligible in the calculation of the total. Even computronium optimized for other tasks would seem to be orders of magnitude less important.

So hedonistic utilitarians could approximate the net pleasure generated in our galaxy by colonization as the expected production of hedonium, multiplied by the “hedons per joule” or “hedons per computation” of hedonium (call this H), minus the expected production of dolorium, multiplied by “dolors per joule” or “dolors per computation” (call this D).
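Written out as a formula (my restatement of the quoted approximation, not part of the quote):

$$\text{net pleasure} \;\approx\; \mathbb{E}[\text{hedonium produced}] \cdot H \;-\; \mathbb{E}[\text{dolorium produced}] \cdot D$$

where H is hedons per joule (or per computation) of hedonium and D is dolors per joule (or per computation) of dolorium.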
Given 3, a key question is: what can we do to increase P(optimonium | ¬ AI doom)?
For example:
- Averting AI-enabled human power grabs might increase P(optimonium | ¬ AI doom).
- Averting premature lock-in and ensuring the von Neumann probes are launched deliberately would increase P(optimonium | ¬ AI doom), but what can we do about that?
- Some people seem to think that having norms of being nice to LLMs is valuable for increasing P(optimonium | ¬ AI doom), but I’m skeptical and I haven’t seen this written up.
(More precisely, we should talk about the expected fraction of resources that are optimonium rather than the probability of optimonium, but probability might be a fine approximation.)
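To spell out that caveat (my own formalization, not in the original text): the quantity of interest is

$$\mathbb{E}[f] \;=\; \sum_{\omega} P(\omega)\, f(\omega),$$

where f(ω) is the fraction of our successors’ resources that end up as optimonium in outcome ω. If outcomes are roughly all-or-nothing, i.e. f(ω) ≈ 1 or f(ω) ≈ 0, then E[f] ≈ P(f ≈ 1), which is why P(optimonium) is a fine shorthand.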