Not speaking for Brian, but biorisk reduction increases the probability humanity reaches the stars, which is object-level bad from a negative-utilitarian perspective unless you think we’re counterfactually likely to encounter worse-than-human aliens.
That’s right. :) There are various additional details to consider, but that’s the main idea.
Catastrophic risks have other side effects in scenarios where humanity does survive, and in most cases, humanity would survive. My impression is that apart from AI risk, biorisk is the most likely form of x-risk to cause actual extinction rather than just disruption. Nuclear winter and especially climate change seem to have a higher ratio of (probability of disruption but still survival)/(probability of complete extinction). AI extinction risk would presumably still involve intelligent agents reaching the stars, so it still may lead to astronomical amounts of suffering.
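To make the comparison concrete, here is a toy calculation of the kind of ratio being described; every probability below is a hypothetical placeholder made up for illustration, not an estimate from this discussion.

```python
# Toy comparison of (chance of disruption-but-survival) / (chance of outright extinction)
# for different catastrophic risks. All probabilities are hypothetical placeholders.
risks = {
    "biorisk": (0.02, 0.010),          # (p_disruption_survival, p_extinction)
    "nuclear winter": (0.05, 0.002),
    "climate change": (0.10, 0.001),
}

for name, (p_disruption_survival, p_extinction) in risks.items():
    ratio = p_disruption_survival / p_extinction
    print(f"{name}: disruption/extinction ratio = {ratio:.0f}")
```

On these made-up numbers, nuclear winter and climate change have much higher disruption-to-extinction ratios than biorisk, which is the ordering claimed above.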
There are also considerations about cooperation. For example, if one has enough credence in Evidential Cooperation in Large Worlds (ECL), then even a negative utilitarian should support reaching the stars, because many other value systems want it (though some don’t, even for reasons besides reducing suffering). Even ignoring ECL, it seems like a bad idea to actively increase biorisk because of the backlash it would provoke. However, due to the act/omission distinction, it’s probably okay to encourage others to omit funding for biorisk-safety work, or at least to try to avoid increasing such funding yourself. Given that work on reducing AI risk isn’t necessarily bad from a suffering-reduction standpoint, shifting biorisk funding toward AI risk (or other EA cause areas) is a form of omission that may not be that objectionable to most EAs, because the risk of human extinction is still being reduced either way.
[Epistemic status: confused stuff that I haven’t thought about that much. That said, I do think this consideration is quite real, and I’ve talked to suffering-focused people about this sort of thing (I’m not currently suffering-focused).]
Beyond ECL-style cooperation with values that want to reach the stars and causally reaching aliens, I think the strongest remaining case is post-singularity acausal trade.
I think this consideration is actually quite strong in expectation if you think that suffering-focused ethics is common on reflection among humans (or human-originating AIs that took over) and less common among other powerful civilizations. Though this depends heavily on the relative probabilities of s-risk from different sources. My guess would be that this consideration outweighs cooperation and encountering technologically immature aliens. I normally think causal trade with technologically mature aliens/AIs from aliens and acausal trade are basically the same.
I’d guess that this consideration is probably not sufficient to think that reaching the stars is good from a negative-utilitarian perspective, but I’m only like 60/40 on this (and very confused overall).
By ‘on reflection’ I mean something like ‘after the great reflection’ or what you get from indirect normativity: https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/
My guess would be that negative utilitarians should think that at least they would likely remain negative utilitarian on reflection (or that the residual is unpredictable). So probably negative utilitarians should also think negative utilitarianism is common on reflection?
Thanks! I’m confused about the acausal issue as well :) , and it’s not my specialty. I agree that acausal trade (if it’s possible in practice, which I’m uncertain about) could add a lot of weird dynamics to the mix. If someone was currently almost certain that Earth-originating space colonization was net bad, then this extra variance should make such a person less certain. (But it should also make people less certain who think space colonization is definitely good.) My own probabilities for Earth-originating space colonization being net bad vs good from a negative-utilitarian (NU) perspective are like 65% vs 35% or something, mostly because it’s very hard to have much confidence in the sign of almost anything. (I think my own work is less than 65% likely to reduce net suffering rather than increase it.) Since you said your probabilities are like 60% vs 40%, maybe we’re almost in agreement? (That said, the main reason I think Earth-originating space colonization might be good is that there may be a decent chance of grabby aliens within our future light cone whom we could prevent from colonizing, and it seems maybe ~50% likely an NU would prefer for human descendants to colonize than for the aliens to do so.)
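As a toy sketch of how the grabby-aliens consideration combines with these credences: the decomposition below and the alien-encounter probability are assumptions made up for illustration (and the 65% figure is reused as a conditional credence purely for the sketch, which isn’t exactly how it’s used above).

```python
# Toy sketch of how a grabby-aliens consideration might adjust an NU credence that
# Earth-originating colonization is net bad. The decomposition and p_grabby are
# illustrative assumptions, not claims made in this discussion.
p_net_bad_no_aliens = 0.65  # illustrative credence it's net bad if no grabby aliens would be displaced
p_grabby = 0.30             # hypothetical chance grabby aliens are in our future light cone
p_prefer_humans = 0.50      # ~50% chance an NU prefers human descendants colonizing over those aliens

p_net_bad = (1 - p_grabby) * p_net_bad_no_aliens + p_grabby * (1 - p_prefer_humans)
print(f"adjusted credence that colonization is net bad: {p_net_bad:.2f}")
```

The point is just that even a modest chance of displacing grabby aliens can pull the overall “net bad” credence downward.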
My impression (which could be wrong) is that ECL, if it works, can only be a good thing for one’s values, but generic acausal trade can cause harm as well as benefit. So I don’t think the possibility of future acausal trade is necessarily a reason to favor Earth-originating intelligence (even a fully NU intelligence) reaching the stars, but I haven’t studied this issue in depth.
I suspect that preserving one’s goals across multiple rounds of building smarter successors is extremely hard, especially in a world as chaotic and multipolar as ours, so I think the most likely intelligence to originate from Earth will be pretty weird relative to human values—some kind of Moloch creature. Even if something like human values does retain control, I expect NUs to represent a small faction. The current popularity of a value system (especially among intelligent young people) seems to me like a good prior for how popular it will be in the future.
I think people’s values are mostly shaped by emotions and intuitions, with rational arguments playing some role but not a determining role. If rational arguments were decisive, I would expect more convergence among intellectuals about morality than we in fact see. I’m mostly NU based on my hard wiring and life experiences, rather than based on abstract reasons. People sometimes become more or less suffering-focused over time due to a combination of social influence, life events, and philosophical reflection, but I don’t think philosophy alone could ever be enough to create agreement one way or the other. Many people who are suffering-focused came to that view after experiencing significant life trauma, such as depression or a painful medical condition (and sometimes people stop being NU after their depression goes away). Experiencing such life events could be part of a reflection process, but experiencing other things that would reduce the salience of suffering would also be part of the reflection process, and I don’t think there’s any obvious attractor here. It seems to me more like causing random changes of values in random directions. The output distribution of values from a reflection process is probably sensitive to the input distribution of values and the choice of parameters regarding what kinds of thinking and life experiences would happen in what ways.
In any case, I don’t think ideal notions of “values on reflection” are that relevant to what actually ends up happening on Earth. Even if human values control the future, I assume it will be in a similar way as they control the present, with powerful and often self-interested actors fighting for control, mostly in the economic and political spheres rather than by sophisticated philosophical argumentation. The idea that a world that can’t stop nuclear weapons, climate change, AI races, or wars of aggression could somehow agree to undertake and be bound by the results of a Long Reflection seems prima facie absurd to me. :) Philosophy will play some role in the ongoing evolution of values, but so will lots of other random factors. (To the extent that “Long Reflection” just means an ideal that a small number of philosophically inclined people try to crudely approximate, it seems reasonable. Indeed, we already have a community of such people.)
It seems as though I’m more optimistic about a ‘simple’ picture of reflection and enlightenment.
When providing the 60/40 numbers, I was imagining something like ‘probability that it’s ex-ante good, as opposed to ex-post good’. This distinction is pretty fuzzy, and I certainly didn’t make it clear in my comment.
Makes sense about ex ante vs ex post. :)
Are you more optimistic that various different kinds of reflection would tend to yield a fair amount of convergence? Or that our descendants will in fact undertake reflection on human values to a significant degree?
More optimistic on both.
And even if we encounter worse-than-human aliens, that could still be bad due to conflict with them.
Also, if there’s sentient life on reachable planets or a chance of it emerging in the future, some NUs might also argue that the chance of human descendants ending/preventing suffering on such planets might be worth the risk of spreading suffering. (Cf. David Pearce’s “cosmic rescue mission”.)
Right, this is what I was alluding to when I mentioned encountering technologically immature aliens, but I agree that in theory we can reduce suffering for aliens who are morally better than humans but less technologically capable.
That said, the NU case for it doesn’t necessarily seem very strong because natural suffering isn’t as astronomical in scale as s-risks.