Great post, thank you.
If one accepts your conclusion, how does one go about implementing it? There is the work on existential risk reduction, which you mention. Beyond that, however, predicting any long-term effect seems to be a work of fiction. Even if you think you have a vague idea of how things will turn out in 1,000 years, you must realize that still longer-term effects (a million years? a billion?) dominate those. An omniscient being might be able to trace the causal chain from our present actions to the far future, but we certainly cannot.
A question this raises for me is whether we should adjust our moral theories in any way. Given your conclusions, classical utilitarianism becomes a great idea that can never be implemented by us mere mortals. A bounded implementation, as MichaelStJules mentions, is probably preferable to ignoring utilitarianism completely, but that answers the question only by side-stepping it. I have come across philosophical work on the non-identity problem which suggests that our moral obligations more or less extend only as far as our grandchildren, but personally I remain unconvinced by it.
I think there might be one area of human activity that, even given your conclusion, remains moral and rational to pursue: education. Not the contemporary kind, which amounts to exercising our memories to pass standardized tests, but something closer to what the ancient Greeks had in mind when they thought about education. The aim would be somewhere in the ballpark of producing critically thinking, compassionate, and physically fit people. Such people will then be able to face the challenges they encounter, and which we cannot predict, in the best possible way. There is a real risk that humanity takes an unrecoverable turn for the worse, and while good education does not promise to prevent that, it increases the odds that we achieve the highest levels of human happiness and fulfillment as we set out to discover the farthest reaches of our galaxy.
I would love to hear your thoughts.
Pleased you liked it and thanks for the question. Here are my quick thoughts:
That kind of flourishing-education sounds a bit like Bostrom’s evaluation function described here: http://www.stafforini.com/blog/bostrom/
Or steering capacity described here: https://forum.effectivealtruism.org/posts/X2n6pt3uzZtxGT9Lm/doing-good-while-clueless
Unfortunately he doesn’t talk about how to construct the evaluation function, and steering capacity is only motivated by an analogy. I agree with you/Bostrom/Milan that there are probably some things that look more robustly good than others. It’s a bit unclear how to get these, but something like ‘build models of how the world works by looking to the past and then updating based on inside-view arguments about the present and future, then take actions that look good on most of your models’ seems vaguely right to me. Some things that look good to me are: investing, building the EA community, reducing the chance of catastrophic risks, spreading good values, getting better at forecasting, and building models of how the world works.
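To make the ‘looks good on most of your models’ idea a bit more concrete, here is a minimal toy sketch of what I have in mind; it is purely my own illustration, not anything from the post or the linked pieces, and the action names, model names, and scores are all hypothetical:

```python
# Toy sketch: prefer the action that looks good under the most world-models,
# rather than the one that scores highest under any single model.
# All names and numbers below are hypothetical placeholders.

scores = {
    "invest":             {"model_a": 0.6, "model_b": 0.4,  "model_c": 0.2},
    "build_ea_community": {"model_a": 0.5, "model_b": 0.3,  "model_c": 0.4},
    "reduce_x_risk":      {"model_a": 0.9, "model_b": -0.1, "model_c": 0.3},
}

def robustness(action_scores, threshold=0.0):
    """Count how many models rate an action above the threshold."""
    return sum(1 for s in action_scores.values() if s > threshold)

# Rank actions by robustness first, breaking ties by average score.
ranked = sorted(
    scores.items(),
    key=lambda kv: (robustness(kv[1]), sum(kv[1].values()) / len(kv[1])),
    reverse=True,
)

for action, model_scores in ranked:
    print(action, robustness(model_scores), model_scores)
```

The point of the sketch is only the ranking rule: robustness across models is weighted ahead of peak performance under any one model.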
Adjusting our values because they are difficult to achieve seems a bit backward to me, but I’m motivated by subjective preferences, and maybe it would make more sense if you were taking a more ethical/realist approach (e.g. because you expect the correct moral theory to actually be feasible to implement).