Thanks for the thoughtful response! I think you do a good job identifying the downsides of directed panspermia. However, in my description of the problem, I want to draw your attention to two claims drawn from Ord’s broader argument.
First, the premise that there is roughly a 1⁄6 probability that humanity does not successfully navigate through The Precipice and reach the Long Reflection. Second, the fact that, for all we know, we might be the universe’s only chance at intelligent flourishing.
My question is whether there is an implication here that directed panspermia is a warranted biotic hedge during The Precipice phase, perhaps prepared now and only acted on if existential catastrophe odds increase. If we make it to The Long Reflection, I’m in total agreement that we should not rapidly engage in directed panspermia. However, for the sake of increasing the universe’s chance of having some intelligent flourishing, perhaps a biotic hedge should at least be prepared now, to be executed when things look especially dire. But at what point would it be justified?
I think this reasoning is exactly the same as the utilitarian longtermist argument that we should invest more resources now in addressing x-risk, especially Parfit’s argument for the value of potential future persons.
Assume three cases:
A. All life in the universe is ended because weapon X is deployed on earth.
B. All life on earth is ended by weapon X but life is preserved in the universe because of earth’s directed panspermia.
C. Earth-originating life makes it through the Precipice and flourishes in the cosmic endowment for billions of years.
It seems C > B > A, with the difference between A and B greater than the difference between B and C.
A neglected case above is where weapon X destroys life on earth, earth engages in directed panspermia, but there was already life in the universe unbeknownst to earth. I think we agree that B is superior to this case, and therefore the difference between B and A is greater. The question is whether the difference between this case and C surpasses that between A and B. Call it D. Is D so much worse than C that the preferred loss is from B to A? I don’t think so.
So I guess the implied position would be that we should prepare a biotic hedge in case things get especially dire, and invest more in SETI type searches. If we know that life exists elsewhere in the universe, we do not need to deploy the biotic hedge?
I think that, empirically, the effort to prepare the biotic hedge is likely to be expensive in terms of resources and influence, as I suspect a lot of people would be strongly averse to directed panspermia, since it would likely be negative under some forms of negative utilitarianism and other value systems. So it would be better for the longterm future to reduce existential risk specifically.
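I think SETI type searches are different, as you have to consider negative effects of contact on our current civilisation. Nice piece from Paul Christiano: https://sideways-view.com/2018/03/23/on-seti/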
I think I’m not well placed to answer that at this point and would rather defer that to someone who has thought about this more than I have from the vantage points of many ethical theories rather than just from my (or their) own. (I try, but this issue has never been a priority for me.) Then again, this is a good exercise for me in moral perspective-taking, or whatever it’s called. ^^
It seems C > B > A, with the difference between A and B greater than the difference between B and C.
In the previous reply I tried to give broadly applicable reasons to be careful about it, but those were mostly just from the Precipice. My own reason is that if I ask myself, e.g., how long I would be willing to endure extreme torture to gain ten years of ultimate bliss (apparently a popular thought experiment), I might be ready to invest a few seconds, if any, for a tradeoff ratio of 1e7 or 1e8 to 1. So from my vantage point, the r-strategist style “procreation” is very disvaluable. It seems like it may well be disvaluable in expectation, but either way, it seems like an enormous cost to bear for a highly uncertain payoff. I’m much more comfortable with careful, K-strategist “procreation” on a species level. (Magnus Vinding has a great book coming out soon that covers this problem in detail.)
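For concreteness, the arithmetic behind that figure (reading “a few seconds” as roughly 3–30 seconds, which is my own gloss):

```python
# Ten years of ultimate bliss traded against a few seconds of extreme torture.
seconds_of_bliss = 10 * 365.25 * 24 * 3600  # ten years, roughly 3.2e8 seconds
for seconds_of_torture in (3, 30):  # "a few seconds" read as ~3-30 s (my assumption)
    ratio = seconds_of_bliss / seconds_of_torture
    print(f"{seconds_of_torture} s of torture -> tradeoff ratio ~ {ratio:.0e} to 1")
```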
But assuming the agnostic position again, for practice, I suppose A and C are clear cut: C is overwhelmingly good (assuming the Long Reflection works out well and we successfully maximize what we really terminally care about, but I suppose that’s your assumption) and A is sort of clear because we know roughly (though not very viscerally) how much disvalue our ancestors have paid forward over the past millions of years so that we can hopefully eventually create a utopia.
But B is wide open. It may go much more negative than A even considering all our past generations – suffering risks, dystopian-totalitarian lock-ins, permanent prehistoric lock-ins, etc. The less certain it is, the more of this disvalue we’d have to pay forward to get one utopia out of it. And it may also go positive of course, almost like C, just with lower probability and a delay.
People have probably thought about how to spread self-replicating probes to other planets so that they produce everything a species will need at the destination to rebuild a flourishing civilization. Maybe there’ll be some DNA but also computers with all sorts of knowledge, and child-rearing robots, etc. ^^ But a civilization needs so many interlocking parts to function well – all sorts of government-like institutions, trust, trade, resources, … – that it seems to me like the vast majority of these civilizations either won’t get off the ground in the first place and remain locked in a probably disvaluable Stone Age type of state, or will permanently fall short of the utopia we’re hoping for eventually.
I suppose a way forward may be to consider the greatest uncertainties about the project – probabilities and magnitudes at the places where things can go most badly net negative or most awesomely net positive.
Maybe one could look into Great Filters (their existence may be less necessary than I had previously thought), because if we are now past the (or a) Great Filter, and the Great Filter is something about civilization rather than something about evolution, we should probably assign a very low probability to a civilization like ours emerging under very different conditions through the probably very narrow panspermia bottleneck. I suppose this could be tested on some remote islands? (Ethics committees may object to that, but then these objections also, and even more strongly, apply to untested panspermia, so they should be taken very seriously. Then again they may not have read Bostrom or Ord. Or Pearce, Gloor, Tomasik, or Vinding for that matter.)
Oh, here’s an idea: The Drake Equation has the parameter f_i for the probability that existing life develops (probably roughly human-level?) intelligence, f_c for the probability that intelligent life becomes detectable, and L for the longevity of the civilization. The probability that intelligent life creates a civilization with similar values and potential is probably a bit less than f_c (these civilizations could have any moral values) but more than the product of the two fs. The paper above has a table that says “f_i: log-uniform from 0.001 to 1” and “f_c: log-uniform from 0.01 to 1.” So I suppose we have some 2–5 orders of magnitude uncertainty from this source.
The longevity of a civilization is “L: log-uniform from 100 to 10,000,000,000” in the paper. An advanced civilization that exists for 10–100k years may be likely to have passed the Precipice… Not sure at all about this because of the risk of lock-ins. And I’d have to put this distribution into Guesstimate to get a range of probabilities out of this. But it seems like a major source of uncertainty too.
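As a minimal sketch of that calculation (plain Python Monte Carlo in place of Guesstimate, using only the log-uniform ranges quoted above; treating f_i · f_c and f_c as rough lower and upper proxies for the “similar values and potential” probability is my reading of the reasoning above, not something from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def log_uniform(low, high, size):
    # Sample log-uniformly between low and high.
    return np.exp(rng.uniform(np.log(low), np.log(high), size))

f_i = log_uniform(1e-3, 1.0, n)  # existing life -> (roughly human-level) intelligence
f_c = log_uniform(1e-2, 1.0, n)  # intelligent life -> detectable civilization
L = log_uniform(1e2, 1e10, n)    # longevity of the civilization, in years

quantities = {
    "f_i * f_c (rough lower proxy)": f_i * f_c,
    "f_c (rough upper proxy)": f_c,
    "L in years": L,
}
for name, x in quantities.items():
    q05, q50, q95 = np.quantile(x, [0.05, 0.5, 0.95])
    print(f"{name}: 5th {q05:.1e}, median {q50:.1e}, 95th {q95:.1e}")
```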
The ethical tradeoff question above feels almost okay to me with a 1e8 to 1 tradeoff but others are okay with a 1e3 or 1e4 to 1 tradeoff. Others again refuse it on deontological or lexical grounds that I also empathize with. It feels like there are easily five orders of magnitude uncertainty here, so maybe this is the bigger question. (I’m thinking more in terms of an optimal compromise utility function than in moral realist terms, but I suppose that doesn’t change much in this case.)
In the best case within B, there’s also the question whether it’ll be a delay compared to C of thousands or of tens of thousands of years, and how much that would shrink the cosmic endowment.
I don’t trust myself to be properly morally impartial about this after such a cursory investigation, but that said, I would suppose that most moral systems would put a great burden of proof on the intervention because it can be so extremely good and so extremely bad. But tackling these three to four sources of uncertainty and maybe others can perhaps shed more light on how desirable it really is.
I empathize with the notion that some things can’t wait until the Long Reflection, at least as part of a greater portfolio, because it seems to me that suffering risks (s-risks) are a great risk (in expectation) even or especially now in the span until the Long Reflection. They can perhaps be addressed through different and more tractable avenues than other longterm risks and by researchers with different comparative advantages.
A neglected case above is where weapon X destroys life on earth, earth engages in directed panspermia, but there was already life in the universe unbeknownst to earth. I think we agree that B is superior to this case, and therefore the difference between B and A is greater. The question is whether the difference between this case and C surpasses that between A and B. Call it D. Is D so much worse than C that the preferred loss is from B to A? I don’t think so.
Hmm, I don’t quite follow… Does the above change the relative order of preference for you, and if so, to which order?
So I guess the implied position would be that we should prepare a biotic hedge in case things get especially dire, and invest more in SETI type searches. If we know that life exists elsewhere in the universe, we do not need to deploy the biotic hedge?
There are all these risks from drawing the attention of hostile civilizations. I haven’t thought about what the risks and benefits are there. It feels like that came up in the Precipice too, but I could be mixing something up.
There are all these risks from drawing the attention of hostile civilizations. I haven’t thought about what the risks and benefits are there. It feels like that came up in the Precipice too, but I could be mixing something up.
Yes, Ord discusses that in Chapter 5. Here’s one relevant passage that I happened to have in my notes:
The extra-terrestrial risk that looms largest in popular culture is conflict with a spacefaring alien civilization. [...] perhaps more public discussion should be had before we engage in active SETI (sending powerful signals to attract the attention of distant aliens). And even passive SETI (listening for their messages) could hold dangers, as the message could be designed to entrap us. These dangers are small, but poorly understood and not yet well managed.
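(Note that, perhaps contrary to what “before we engage in active SETI” might imply, I believe humanity is already engaging in some active SETI.)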
Great job identifying some relevant uncertainties to investigate. I will think about that some more.
My goal here is not so much to resolve the question of “should we prepare a biotic hedge?” but rather “Does utilitarian Longtermism imply that we should prepare it now, and if faced with a certain threshold of confidence that existential catastrophe is imminent, deploy it?” So I am comfortable not addressing the moral uncertainty arguments against the idea for now. If I become confident that utilitarian Longtermism does imply that we should, I would examine how other normative theories might come down on the question.
Me: “A neglected case above is where weapon X destroys life on earth, earth engages in directed panspermia, but there was already life in the universe unbeknownst to earth. I think we agree that B is superior to this case, and therefore the difference between B and A is greater. The question is whether the difference between this case and C surpasses that between A and B. Call it D. Is D so much worse than C that the preferred loss is from B to A? I don’t think so.”
You: “Hmm, I don’t quite follow… Does the above change the relative order of preference for you, and if so, to which order?”
No, it would not change the relative order of A, B, C. The total order (including D) for me would be C > B > D > A, where |v(B) - v(A)| > |v(C) - v(D)|.
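As a toy illustration, here is one assignment of made-up values consistent with that ordering and inequality (the magnitudes carry no claim about the real stakes):

```python
# Hypothetical utilities consistent with C > B > D > A
# and |v(B) - v(A)| > |v(C) - v(D)|.
v = {
    "A": -1000,  # weapon X ends all life in the universe
    "D": -50,    # earth's life ends, but life already existed elsewhere
    "B": -10,    # earth's life ends, directed panspermia preserves life elsewhere
    "C": 100,    # earth-originating life flourishes for billions of years
}

assert v["C"] > v["B"] > v["D"] > v["A"]
assert abs(v["B"] - v["A"]) > abs(v["C"] - v["D"])  # 990 > 150
```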
I was trying to make a Parfit-style argument that A is so very bad that spending significant resources now to hedge against it is justified. Given that we fail to reach the Long Reflection, it is vastly preferable that we engage in a biotic hedge. I did a bad job of laying it out, and it seems that reasonable people think the outcome of B might actually be worse than A, based on your response.
Oh yeah, I was also talking about it only from utilitarian perspectives. (Except for one aside, “Others again refuse it on deontological or lexical grounds that I also empathize with.”) It’s just that utilitarianism doesn’t prescribe an exchange rate between the intensity/energy expenditure/… of individual positive experiences and that of individual negative experiences.
It seems that reasonable people think the outcome of B might actually be worse than A, based on your response.
Yes, I hope they do. :-)
Sorry for responding so briefly! I’m falling behind on some reading.
Yes, I think I messed up the Parfit-style argument here. Perhaps the only relevant cases are A, B, and D, because I’m supposing we fail to reach the Long Reflection and asking what the best history line is on utilitarian Longtermist grounds.
If we conclude from this that a biotic hedge is justified on those grounds, then the question would be what its priority is relative to directly preventing x-risks, as edcon said.
My question is whether there is an implication here that directed panspermia is a warranted biotic hedge during The Precipice phase, perhaps prepared now and only acted on if existential catastrophe odds increase. If we make it to The Long Reflection, I’m in total agreement that we should not rapidly engage in directed panspermia. However, for the sake of increasing the universe’s chance of having some intelligent flourishing, perhaps a biotic hedge should at least be prepared now, to be executed when things look especially dire. [emphasis added]
I’d definitely much prefer that approach to just aiming for actually implementing directed panspermia ASAP. Though I’m still very unsure whether directed panspermia would even be good in expectation, and doubt it should be near the top of a longtermist’s list of priorities, for reasons given in my main answer.
I just wanted to highlight that passage because I think that this relates to a general category of (or approach to) x-risk intervention which I think we might call “Developing, but not deploying, drastic backup plans”, or just “Drastic Plan Bs”. (Or, to be nerdier, “Preparing saving throws”.)
I noticed that as a general category of intervention when reading endnote 92 in Chapter 4 of the Precipice:
Using geoengineering as a last resort could lower overall existential risk even if the technique is more risky than climate change itself. This is because we could adopt the strategy of only deploying it in the unlikely case where climate change is much worse than currently expected, giving us a second roll of the dice.
[Ord gives a simple numerical example]
The key is waiting for a situation when the risk of using geoengineering is appreciably lower than the risk of not using it. A similar strategy may be applicable for other kinds of existential risk too.
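To illustrate that structure with made-up numbers of my own (these are not Ord’s): suppose the “much worse than expected” branch has probability 0.05 and would otherwise be terminal, while the emergency geoengineering itself ends in catastrophe 20% of the time it is deployed.

```python
# Made-up numbers illustrating the "second roll of the dice" structure.
p_worst_case = 0.05      # climate change turns out far worse than expected
p_backstop_fails = 0.20  # the last-resort geoengineering itself goes catastrophically wrong

# Simplification: the worst case is existential unless the backstop works,
# and the backstop is only ever deployed in that branch.
risk_without_backstop = p_worst_case                  # 0.05
risk_with_backstop = p_worst_case * p_backstop_fails  # 0.01

print(risk_without_backstop, risk_with_backstop)
```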
I’d be interested in someone naming this general approach, exploring the general pros and cons of this approach, and exploring examples of this approach.