Thanks for sharing, Rob! Here’s a summary of my comments from our conversation:
Replace “optimal” with “great”
I think the terminology “good” vs “optimal” is a little confusing, in that the probability of obtaining a future which is mathematically optimal seems to me to be zero. I’d suggest “great” instead.
Good, great, really great, really really great …
From the post: “Some futures, which I’ll call ‘optimal’, are several orders of magnitude better than other seemingly good futures.”
(Using “great” rather than “optimal”) I’d imagine that some great futures are also several orders of magnitude better than other seemingly great futures. I think here we’d really like to say something about the rates of decay.
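One way to make the decay-rate question precise (my framing, not something from the post): it matters a lot whether the tail of pX is light or heavy. If P(X > x) decays exponentially, like e^(−λx), then great futures are exponentially rare and the expected value of the future is dominated by typical outcomes; if it decays like a power law x^(−α) with α close to 1, the expected value is dominated by the very best futures, which would support focusing on them.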
Distribution over future value
Let X be the value of the future, which we suppose has some distribution pX(x). My belief is that X is essentially a continuous variable, but in this post you choose to distinguish between two types of future (“good” and “optimal”) rather than consider a continuous distribution.
This choice could be justified if you believe a priori that pX is bimodal, or perhaps multi-modal. If this reflects your view, I think it would be good to make your reasoning on this more explicit.
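For concreteness, here is a minimal sketch of what a bimodal pX might look like (my own illustration; every number in it is made up for the sketch):

```python
import numpy as np
from scipy import stats

# Illustrative only: an arbitrary bimodal density for future value X,
# with one mode for "good" futures and one, orders of magnitude higher,
# for "optimal" futures.
x = np.logspace(0, 5, 1000)                     # grid of future values (log-spaced)
good = stats.lognorm(s=0.5, scale=1e1)          # "good" futures, centred near 10
optimal = stats.lognorm(s=0.3, scale=1e4)       # "optimal" futures, centred near 10,000
p_x = 0.9 * good.pdf(x) + 0.1 * optimal.pdf(x)  # mixture weights are arbitrary

# A bimodal p_X like this is what would justify the good/optimal binary;
# a unimodal p_X would instead suggest a continuum of futures.
```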
One way to model the value of the future is
X = Resources × Values × Efficiency

where Resources refers to the physical, spatial and temporal resources that humanity has access to, Values are our ethical beliefs about how best to use those resources, and the final Efficiency term reflects our ability to use resources in promoting our values. In A2 your basic model of the future is suggestive of a multi-modal distribution over future Resources. It does seem reasonable to me that this would be the case. I’m quite uncertain about the distributions on the other two terms, which appear less physically constrained.
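To see how a multiplicative model like this behaves, here is a small Monte Carlo sketch (all distributional choices are mine and purely illustrative): a multi-modal distribution over Resources survives multiplication by smoother Values and Efficiency terms, leaving X multi-modal too.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative assumptions, not from the post: Resources is bimodal
# (e.g. Earth-bound vs interstellar futures), while Values and
# Efficiency are smooth fractions of their maximum in [0, 1].
scale = rng.choice([1.0, 1e6], size=n, p=[0.8, 0.2])   # two resource regimes
resources = scale * rng.lognormal(mean=0.0, sigma=0.5, size=n)
values = rng.beta(2, 2, size=n)        # how good our values are, 0 to 1
efficiency = rng.beta(5, 2, size=n)    # how well we use resources, 0 to 1

x = resources * values * efficiency

# The modes in Resources survive multiplication by the smoother terms:
# a histogram of log10(X) remains clearly bimodal.
counts, edges = np.histogram(np.log10(x), bins=12)
print(counts)
```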
Thanks for your comment, athowes. I appreciate your point that I could have done more in the post to justify this “binary” of good and optimal.
Though the simulated minds scenario I described seems at first to be pretty much optimal, its value could be much larger if you thought it would last for many more years. Given large enough uncertainty about future technology, maybe seeking to identify the optimal future is impossible.
I think your resources, values and efficiency model is really interesting. My intuition is that values are the limiting factor. I can believe there are pretty strong forces that mean humanity will eventually end up optimising resources and efficiency, but I’m less confident that values will converge to the best ones over time. This probably depends on whether you think a singleton will form at some point; if so, it feels like the limit is how good the values of the singleton are.
I think he’s saying “optimal future = best possible future”, which necessarily has a non-zero probability.

Events which are possible may still have zero probability; see “Almost never” on Wikipedia. That being said, I think I might still object even if the future were ϵ-optimal (within some small number ϵ > 0 of achieving the mathematically optimal future), unless this could be meaningfully justified somehow.
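A minimal worked version of the “almost never” point (a standard textbook example, not from the thread): let X be uniform on [0, 1]. Every point x* in [0, 1] is a possible outcome, yet P(X = x*) = 0, since a continuous density assigns zero mass to any individual point. So “possible” does not imply “positive probability”, which is why ϵ-optimality is the natural thing to ask for in a continuous setting.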
In terms of cardinal utility? I think drawing any line in the sand has problems when things are continuous because it falls right into a slippery slope (if ϵ doesn’t make a real difference, what about drawing the line at 2ϵ, and then what about 3ϵ?).
But I think of our actions as discrete. Even if we design a system with some continuous parameter, the actual implementation of that system is going to be in discrete human actions. So I don’t think we can get arbitrarily small differences in utility. Then maximalism (i.e. going for only ideal outcomes) makes sense when it comes to designing long-lasting institutions, since the small (but non-infinitesimal) differences add up across many people and over a long time.
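As a toy illustration of how small but non-infinitesimal differences accumulate (the numbers are mine, chosen only for scale): a difference of 10⁻⁶ units of value per person per year, across 10¹⁰ people for 10⁶ years, totals 10⁻⁶ × 10¹⁰ × 10⁶ = 10¹⁰ units of value.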