Thanks! I’m confused about the acausal issue as well :) , and it’s not my specialty. I agree that acausal trade (if it’s possible in practice, which I’m uncertain about) could add a lot of weird dynamics to the mix. If someone were currently almost certain that Earth-originating space colonization was net bad, then this extra variance should make such a person less certain. (But it should also make people who think space colonization is definitely good less certain.) My own probabilities for Earth-originating space colonization being net bad vs good from a negative-utilitarian (NU) perspective are something like 65% vs 35%, mostly because it’s very hard to have much confidence in the sign of almost anything. (I think my own work is less than 65% likely to reduce net suffering rather than increase it.) Since you said your probabilities are like 60% vs 40%, maybe we’re almost in agreement? (That said, the main reason I think Earth-originating space colonization might be good is that there may be a decent chance of grabby aliens within our future light cone whom we could prevent from colonizing, and it seems maybe ~50% likely that an NU would prefer human descendants to colonize rather than the aliens.)
My impression (which could be wrong) is that ECL, if it works, can only be a good thing for one’s values, but generic acausal trade can cause harm as well as benefit. So I don’t think the possibility of future acausal trade is necessarily a reason to favor Earth-originating intelligence (even a fully NU intelligence) reaching the stars, but I haven’t studied this issue in depth.
I suspect that preserving one’s goals across multiple rounds of building smarter successors is extremely hard, especially in a world as chaotic and multipolar as ours, so I think the most likely intelligence to originate from Earth will be pretty weird relative to human values—some kind of Moloch creature. Even if something like human values does retain control, I expect NUs to represent a small faction. The current popularity of a value system (especially among intelligent young people) seems to me like a good prior for how popular it will be in the future.
I think people’s values are mostly shaped by emotions and intuitions, with rational arguments playing some role but not a determining role. If rational arguments were decisive, I would expect more convergence among intellectuals about morality than we in fact see. I’m mostly NU based on my hard wiring and life experiences, rather than based on abstract reasons. People sometimes become more or less suffering-focused over time due to a combination of social influence, life events, and philosophical reflection, but I don’t think philosophy alone could ever be enough to create agreement one way or the other. Many people who are suffering-focused came to that view after significant life trauma, such as depression or a painful medical condition (and sometimes people stop being NU after their depression goes away). Experiencing such life events could be part of a reflection process, but so could experiences that reduce the salience of suffering, and I don’t think there’s any obvious attractor here. It seems to me more like random changes of values in random directions. The output distribution of values from a reflection process is probably sensitive to the input distribution of values and to the choice of parameters regarding what kinds of thinking and life experiences happen in what ways.
In any case, I don’t think ideal notions of “values on reflection” are that relevant to what actually ends up happening on Earth. Even if human values control the future, I assume they will do so in much the same way they control the present: with powerful and often self-interested actors fighting for control, mostly in the economic and political spheres rather than through sophisticated philosophical argumentation. The idea that a world that can’t stop nuclear weapons, climate change, AI races, or wars of aggression could somehow agree to undertake and be bound by the results of a Long Reflection seems prima facie absurd to me. :) Philosophy will play some role in the ongoing evolution of values, but so will lots of other random factors. (To the extent that “Long Reflection” just means an ideal that a small number of philosophically inclined people try to crudely approximate, it seems reasonable. Indeed, we already have a community of such people.)
It seems as though I’m more optimistic about a ‘simple’ picture of reflection and enlightenment.
When providing the 60/40 numbers, I was imagining something like ‘probability that it’s ex-ante good, as opposed to ex-post good’. This distinction is pretty fuzzy, and I certainly didn’t make it clear in my comment.
Makes sense about ex ante vs ex post. :)
Are you more optimistic that various different kinds of reflection would tend to yield a fair amount of convergence? Or that our descendants will in fact undertake reflection on human values to a significant degree?
More optimistic on both.