Thank you for this piece. I enjoyed reading it and I’m glad that we’re seeing more people being explicit about their cause-prioritization decisions and opening up discussion on this crucially important issue.
I know that it’s a weak consideration, but before reading this I hadn’t considered the argument that the scale of values spreading is larger than the scale of AI alignment (perhaps because, as you pointed out, the numbers involved in both are huge), so thanks for bringing that up.
I’m in agreement with Michael_S that hedonium and dolorium should be the most important considerations when estimating the value of the far future, and from my perspective the higher probability of hedonium likely does make the far future robustly positive, despite the valid points you bring up. This doesn’t necessarily mean that we should focus on AIA over MCE (I don’t), but it does make it more likely that we should.
Another useful contribution, though others may disagree, was the biases section: the biases that could potentially favour AIA did resonate with me, and they are useful to keep in mind.
That makes sense. If I were convinced hedonium/dolorium dominated to a very large degree, and that hedonium was as good as dolorium is bad, I would probably think the far future was at least moderately +EV.
Isn’t hedonium inherently as good as dolorium is bad? If it’s not, can’t we just normalize and then treat them as the same? I don’t understand the point of saying there will be more hedonium than dolorium in the future, but the dolorium will matter more. They’re vague and made-up quantities, so can’t we just set it so that “more hedonium than dolorium” implies “more good than bad”?
He defines hedonium/dolorium as the maximum positive/negative utility you can generate with a certain amount of energy:
“For example, I think a given amount of dolorium/dystopia (say, the amount that can be created with 100 joules of energy) is far larger in absolute moral expected value than hedonium/utopia made with the same resources.”
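To make that asymmetry concrete, here’s a minimal expected-value sketch with made-up numbers (the probabilities, quantities, and per-joule weights below are purely illustrative, not taken from the post): even if hedonium is both more likely and more abundant, a large enough per-joule weight on dolorium can flip the sign.

```python
# Illustrative expected-value sketch with made-up numbers (not from the post).
# Each term is: probability * quantity (joules devoted to it) * moral value per joule.

p_hedonium, p_dolorium = 0.7, 0.3            # hedonium assumed more likely
joules_hedonium, joules_dolorium = 1e9, 1e8  # and more abundant
value_per_joule_hedonium = 1.0               # normalize hedonium to +1 per joule
value_per_joule_dolorium = -100.0            # dolorium weighted far more heavily per joule, as the quote suggests

expected_value = (p_hedonium * joules_hedonium * value_per_joule_hedonium
                  + p_dolorium * joules_dolorium * value_per_joule_dolorium)
print(expected_value)  # -2.3e9: negative despite there being "more hedonium than dolorium"
```

Under that framing, “more hedonium than dolorium” only implies “more good than bad” if the per-joule weights are symmetric, which is exactly what the quoted claim denies.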
Exactly. Let me know if this doesn’t resolve things, zdgroff.