I think the authors are a bit too quick and confident in dismissing the idea that population ethics could substantially change their conclusions
They write:
However, the other options for long-run influence we discussed (in section 3.4) are attempts to improve average future well-being, conditional on humanity not going prematurely extinct. While the precise numbers that are relevant will depend on the precise choice of axiology (and we will not explicitly crunch suggested numbers for any other axiologies), any plausible axiology must agree that this is a valuable goal. Therefore, the bulk of our argument is robust to plausible variations in population axiology.
I think I essentially agree, but (as noted) I think that's a bit too quick and confident.
See also Why I think The Precipice might understate the significance of population ethics.
In particular, I think that, if we rule out extinction risk reduction as a priority, it then becomes more plausible that empirical considerations would mean strong longtermism is either false or has no unusual implications
It’s worth noting that the authors’ toy model suggested that influencing a future world government is something like 30 times as valuable as giving to AMF
So the “margin for error” for non-extinction-related longtermist interventions might be relatively small (see the rough sketch below)
I.e., maybe a short-termist perspective would come out on top if we made different plausible empirical assumptions, or if we found something substantially more cost-effective than AMF
But of course, this was just with one toy model
And the case for strong longtermism could look more robust if we made other plausible changes in the assumptions, or if we found more cost-effective interventions for reducing non-extinction-related trajectory changes
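To make the “margin for error” point concrete, here’s a rough sketch; the 30× figure is from the authors’ toy model, and the other numbers are purely hypothetical. Let r be the longtermist intervention’s cost-effectiveness relative to AMF, k the factor by which the toy model’s estimate might turn out to be too optimistic, and s the cost-effectiveness of the best available short-termist option in AMF-multiples. Then the longtermist intervention comes out ahead only if

\[
\frac{r}{k} > s, \qquad \text{e.g. } \frac{30}{5} = 6 > 1 \ (\text{vs AMF itself}), \quad \text{but } \frac{30}{5} = 6 < 10 \ (\text{vs a hypothetical 10×-AMF option})
\]

So a ~30× cushion could plausibly be eroded by a combination of somewhat less favourable empirical assumptions and a somewhat more cost-effective short-termist benchmark.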
Also, they quickly dismiss the idea that one approach to risk aversion would undermine the case for strong longtermism, with the reason being partly that extinction risk reduction still looks very good under that approach to risk aversion. But if we combined that approach with certain population ethics views, it might be the case that the only plausible longtermist priorities that remain focus on reducing the chance of worse-than-extinction futures.
I.e., given those assumptions, we might have to rule out a focus on reducing extinction risk and rule out a focus on increasing the quality of futures that would be better than extinction anyway.
This would be for reasons of population ethics and reasons of risk-aversion, respectively
This could then be an issue for longtermism if we can’t find any promising interventions that reduce the chance of worse-than-extinction futures
Though I tentatively think that there are indeed promising interventions in this category
See also discussion of s-risks
The relevant passage about risk aversion is this one:
First, we must distinguish between two senses of “risk aversion with respect to welfare”. The standard sense is risk aversion with respect to total welfare itself (that is, vNM value is a concave function of total welfare, w). But risk aversion in that sense tends to increase the importance of avoiding much lower welfare situations (such as near-future extinction), relative to the importance of increasing welfare from an already much higher baseline (as in the case of distributing bed nets in a world in which extinction is very far in the future).
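To illustrate the concavity point with a toy example of my own (the square-root function is just an arbitrary concave choice, not something the authors use): if vNM value is v(w) = √w, then

\[
v(100) - v(0) = 10, \qquad v(10{,}100) - v(10{,}000) \approx 0.5
\]

so the same 100 units of total welfare count for roughly 20 times more when they prevent a fall to a very low-welfare outcome than when they’re added to an already high baseline, which is why risk aversion in this sense tends to favour avoiding outcomes like near-future extinction.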
On the other hand, I think the authors understate the case for extinction risk reduction being important from a person-affecting view
They write: “Firstly, “person-affecting” approaches to population ethics tend to regard premature extinction as being of modest badness, possibly as neutral, and even (if the view in question also incorporates “the asymmetry”) possibly as a good thing (Thomas, manuscript).”
But see The person-affecting value of existential risk reduction
See also discussion in The Precipice of how a moral perspective focused on “the present” might still see existential risk reduction as a priority
I personally think that this is neither obviously false nor obviously true, so all I’d have suggested to Greaves & MacAskill is adding a brief footnote to acknowledge the possibility that extinction risk reduction is important even on person-affecting views
I think it’s worth clarifying that you mean worse-than-extinction futures according to asymmetric views. S-risks can still happen in a better-than-extinction future according to classical utilitarianism, say, and could still be worth reducing.
There might be other interventions to increase wellbeing according to some person-affecting views, by increasing positive wellbeing without requiring additional people, but do any involve attractor states? Maybe genetically engineering humans to be happier or otherwise optimizing our descendants (possibly non-biological) for happiness? Maybe it’s better to do this before space colonization, but I think intelligent moral agents would still be motivated to improve their own wellbeing after colonization, so it might not be so pressing for them, although it could be for moral patients who have too little agency if we send them out on their own.
Yeah, this is true. On this, I’ve previously written that:
Two mistakes people sometimes make are discussing s-risks as if they’re entirely distinct from existential risks, or discussing s-risks as if they’re a subset of existential risks. In reality:
There are substantial overlaps between suffering catastrophes and existential catastrophes, because some existential catastrophes would involve or result in suffering on an astronomical scale.
[...]
But there could also be suffering catastrophes that aren’t existential catastrophes, because they don’t involve the destruction of (the vast majority of) humanity’s long-term potential.
This depends on one’s moral theory or values (or the “correct” moral theory or values), because, as noted above, that affects what counts as fulfilling or destroying humanity’s long-term potential.
For example, the Center on Long-Term Risk notes: “Depending on how you understand the [idea of loss of “potential” in definitions] of [existential risks], there actually may be s-risks which aren’t [existential risks]. This would be true if you think that reaching the full potential of Earth-originating intelligent life could involve suffering on an astronomical scale, i.e., the realisation of an s-risk. Think of a quarter of the universe filled with suffering, and three quarters filled with happiness. Considering such an outcome to be the full potential of humanity seems to require the view that the suffering involved would be outweighed by other, desirable features of reaching this full potential, such as vast amounts of happiness.”
In contrast, given a sufficiently suffering-focused theory of ethics, anything other than near-complete eradication of suffering might count as an existential catastrophe.
Your second paragraph makes sense to me, and is an interesting point I don’t think I’d thought of.