And once I accept this conclusion, the most absurd-seeming conclusion of them all follows. As the computing power devoted to the training of these utility-improved agents increases, the utility produced grows exponentially (since more computing power means more digits available to store the rewards). On the other hand, the impact of every other attempt to improve the world (e.g. improving our knowledge of artificial sentience so we can more efficiently promote the welfare of artificial sentiences) grows at only a polynomial rate with the resources devoted to it. Therefore, running these trainings is the single most impactful thing that any rational altruist should do. Q.E.D.
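To make the arithmetic this argument relies on explicit (a minimal sketch, assuming rewards are stored as unsigned n-bit integers and that "other attempts" scale polynomially with some fixed exponent k):

$$
\underbrace{U_{\text{train}}(n) \approx 2^{n}}_{\text{largest reward storable in } n \text{ bits}}
\qquad \text{vs.} \qquad
\underbrace{U_{\text{other}}(n) \approx c\, n^{k}}_{\text{polynomial in resources}},
$$

so $U_{\text{train}}(n)/U_{\text{other}}(n) \to \infty$ as $n$ grows, for any fixed $c$ and $k$.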
If you believed in wildly superexponential impacts from more compute, you’d be correspondingly uninterested in what could be done with the limited computational resources of our day, since a Jupiter Brain playing with big numbers, instead of being 10^40 times as big a deal as an ordinary life today, could be 2^(10^40) times as big a deal. And likewise for influencing more computation-rich worlds that are simulating us.
The biggest upshot (beyond ordinary ‘big future’ arguments) of superexponential-with-resources utility functions is a greater willingness to take risks/care about tail scenarios with extreme resources, although that’s bounded by ‘leaks’ in the framework (e.g. the aforementioned influence on simulators with hypercomputation), and a greater valuation of futures per unit of computation (e.g. it makes welfare in sims like ours, conditional on the simulation hypothesis, less important).
I’d say that ideas of this sort, like infinite ethics, are a reason to develop a much more sophisticated, stable, and well-intentioned society that can sensibly address complex issues like these affecting an important future, but they don’t make the naive action you describe desirable even given certainty in a superexponential model of value.