Great essay.
I liked the figures, but think a figure that would be quite valuable for explication was missing, something like this:
Relatedly, does anything important happen in the last 0.01% of best futures, I wonder? If one had a very extreme view, most of the value of the future would be in that best 10^-4 of worlds, and so normalising things by that point might not work as well. Implicitly I was assuming that 99.99th ~= ‘best’, i.e. that the scale as you have defined it doesn’t go much past 1, but this seems not obvious.
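To make the worry concrete, here's a toy sketch (my own illustration, not from the essay): suppose value across possible futures followed a Pareto distribution with shape alpha (a purely hypothetical modelling choice). Then the fraction of expected value contributed by the best fraction p of worlds has a closed form, and for a mildly heavy tail most of the value really does sit in the top 10^-4:

```python
import math

def tail_value_share(alpha, p=1e-4):
    """Fraction of expected value contributed by the top fraction p of
    worlds, assuming value follows a Pareto(x_m=1, alpha) distribution
    (alpha > 1, so the mean is finite).

    The upper-p quantile is t = p**(-1/alpha), and for a Pareto tail
    E[X; X > t] / E[X] = t**(1 - alpha) = p**((alpha - 1) / alpha).
    """
    assert alpha > 1, "mean is infinite for alpha <= 1"
    return p ** ((alpha - 1) / alpha)

# Mildly heavy tail: the best 0.01% of worlds carry most of the value.
print(tail_value_share(1.05))  # ~0.64

# Thinner tail: the best 0.01% of worlds contribute almost nothing.
print(tail_value_share(3.0))   # ~0.002
```

So whether normalising by the 99.99th percentile "works" depends heavily on how fat-tailed one thinks the value distribution is.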
(I will post my other comments separately)
I like the figure!
Though the probability distribution would have to be conditional on people in the future not trying to optimise the future. (You could have a “no easy eutopia” view, but expect that people in the future will optimise toward the good and hit the narrow target, and therefore have a curve that’s more like the green line.)