Possible misconceptions about (strong) longtermism

Overview

In this post I provide a brief sketch of The case for strong longtermism as put forward by Greaves and MacAskill, and proceed to raise and address possible misconceptions that people may have about strong longtermism. Some of these misconceptions I have come across myself, whilst others I simply suspect may be held by some people in the EA community.

The goal of this post isn’t to convert people, as I think there remain valid objections to strong longtermism to grapple with, which I touch on at the end of this post. Instead, I simply want to address potential misunderstandings, or point out nuances that may not be fully appreciated by some in the EA community. I think it is important for the EA community to appreciate these nuances, which should hopefully aid the goal of figuring out how we can do the most good.

NOTE: I certainly do not consider myself to be any sort of authority on longtermism. I partly wrote this post to push me to engage with the ideas more deeply than I already had. No one read through this before I posted it, so it’s certainly possible that there are inaccuracies or mistakes in this post and I look forward to any of these being pointed out! I’d also appreciate ideas for other possible misconceptions that I have not covered here.

Defining strong longtermism

The specific claim that I want to address possible misconceptions about is that of axiological strong longtermism, which Greaves and MacAskill define in their 2019 paper The case for strong longtermism as the following:

Axiological strong longtermism (AL): “In a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.”

Put more simply (and phrased in a deontic way that assumes that what we should do is whatever will result in the best consequences), one might say that:

“In most of the choices (or, most of the most important choices) we face today, what we ought to do is mainly determined by possible effects on the far future.”

Greaves and MacAskill note that an implication of axiological strong longtermism is that:

“for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.”

I think most people would agree that this is a striking claim.

Sketch of the strong longtermist argument

The argument made by Greaves and MacAskill (2019) begins with a plausibility argument that goes roughly as follows:

Plausibility Argument:

  • In expectation, the future is vast in size (in terms of expected number of beings)

  • All consequences matter equally (i.e. it doesn’t matter when a consequence occurs, or if it was intended or not)

  • Therefore it is at least plausible that the amount of ex ante good we can generate by influencing the expected course of the very long-run future exceeds the amount of ex ante good we can generate by influencing the expected course of short-run events, even after taking into account the greater uncertainty of further-future effects (see the toy illustration after this list)

  • Also, because of the near-term bias exhibited by the majority of existing actors, we should expect tractable longtermist options (if they exist) to be systematically under-exploited at the current margin
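
To make the arithmetic behind the plausibility argument concrete, here is a toy comparison. The numbers are entirely illustrative assumptions of mine, not figures from the paper: suppose a shorttermist intervention reliably produces 10^3 units of good, while a longtermist intervention has only a one-in-a-million chance of making a difference, but a future worth 10^15 units of good in expectation hangs on that difference.

```latex
% Toy numbers, purely illustrative
\mathbb{E}[\text{shorttermist}] = 1 \times 10^{3}
\qquad
\mathbb{E}[\text{longtermist}] = 10^{-6} \times 10^{15} = 10^{9}
```

On these hypothetical numbers the longtermist option comes out ahead by six orders of magnitude, which is the sense in which a vast expected future can dominate even after heavily penalising the greater uncertainty of far-future effects.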

The authors then consider the intractability objection: that it is essentially impossible to significantly influence the long-term future ex ante, perhaps because the magnitude of the effects of one’s actions (in expected value-difference terms) decays with time (or “washes out”) quickly enough that short-term effects dominate expected value.

The authors then proceed to suggest possible examples of interventions that may avoid the intractability objection, in the following categories:

  • Speeding up progress

    • Provided value per unit time doesn’t plateau at a modest level, bringing forward the march of progress could have long-lasting beneficial effects compared to the status quo

  • Mitigating extinction risk

    • Extinction is an “attractor state” in that, once humans go extinct, they will stay that way forever (assuming humans don’t re-evolve). Non-extinction is also an attractor state, although to a lesser extent. Increasing the probability of achieving the better attractor state (probably non-extinction by a large margin, if we make certain foundational assumptions) has high expected value that stretches into the far future. This is because the persistence of the attractor states allows the expected value of reducing extinction risk not to “wash out” over time. There also seem to be tractable ways to reduce extinction risk

  • Steering towards a better rather than a worse “attractor state” in contexts that do not involve a threat of extinction, including:

    • Mitigating climate change. Climate change could result in a slower long-run growth rate or permanently reduce the planet’s carrying capacity

    • Ensuring institutions that may be developed in the next century or two are constituted in ways that are better for wellbeing than others. Institutions could persist indefinitely

    • Ensuring advanced AI has goals that are conducive to wellbeing. AI could persist indefinitely and exert very significant control over human affairs

  • Funding research into longtermist interventions / saving money to fund future opportunities

Possible Misconceptions

Henceforth I will simply use “longtermism” to mean strong longtermism, and “longtermists” to mean people who act according to strong longtermism.

“Longtermists have to predict the far future”

Possible misconception: “Trying to influence the far future is pointless because it is impossible to forecast that far.”

My response: “Considering far future effects doesn’t necessarily require predicting what will happen in the far future.”

Note that most of Greaves and MacAskill’s proposed longtermist interventions involve steering towards or away from certain “attractor states”: once you enter one, you tend to stay in it for a very long time (if not forever). The persistence of these attractor states is what allows these interventions to avoid the “washing out” of expected value over time.
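
One rough way to see why persistence matters (a sketch in my own notation, not the paper’s): suppose an action improves the world by Δv per period, but in each period there is a probability p that this effect “washes out” and disappears. The expected cumulative effect is then a geometric series:

```latex
% \Delta v = per-period value difference; p = per-period probability the effect washes out
\sum_{t=0}^{\infty} (1-p)^{t}\,\Delta v \;=\; \frac{\Delta v}{p}
```

For an ordinary intervention p may be substantial, so the sum stays modest. Entering (or avoiding) a persistent attractor state corresponds to p being close to zero, so even a modest per-period difference can accumulate over an enormous stretch of the future.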

A key thing to notice is that some of these attractor states could realistically be entered in the near future. Nuclear war could feasibly happen tomorrow. Climate change is an ongoing phenomenon and catastrophic climate change could happen within decades. In The Precipice, Toby Ord places the probability of an existential catastrophe occurring within the next 100 years at 1 in 6, which is concerningly high.

Ord is not in the business of forecasting events beyond a 100-year time horizon, nor does he have to be. These existential threats affect the far future on account of the persistence of their effects if they occur, not on account of their happening in the far future. Therefore, whilst it is true that a claim has to be made about the far future, namely that we are unlikely to ever properly recover from existential catastrophes, this claim seems weaker than a claim that some particular event will happen in the far future.

Having said that, not all longtermist interventions involve attractor states. “Speeding up progress” becomes a credible longtermist intervention provided the value of the future, per century, is much higher in the far future than it is today, perhaps due to space settlement or because some form of enhancement renders future people capable of much higher levels of well-being. The plausibility of speeding up progress being a credible longtermist intervention then appears to depend on somewhat speculative claims about what will happen in the (potentially far) future.

“Cluelessness affects longtermists more than shorttermists”

Possible misconception: “When we peer into the far future there are just too many complex factors at play to be able to know that what we’re doing is actually good. Therefore we should just do interventions based on their short-term effects.”

My response: “Every intervention has long-term effects, even interventions that are chosen based on their short-term effects. It often doesn’t seem reasonable to ignore these long-term effects. Therefore cluelessness is often a problem for shorttermists, just as it is for longtermists.”

It can be tempting to claim that we can’t be very confident at all about long-term effects, and therefore that we should just ignore them and decide to do interventions that have the best short-term effects.

There are indeed scenarios where we can safely ignore long-term effects. To steal an example from Phil Trammell’s note on cluelessness, when we are deciding whether to conceive a child on a Tuesday or a Wednesday, any chance that one of the options might have some long-run positive or negative consequence will be counterbalanced by an equal chance that the other will have that consequence. In other words there is evidential symmetry across the available choices. Hilary Greaves has dubbed such a scenario “simple cluelessness”, and argues that in such cases we are justified in ignoring long-run effects.

However it seems that often we don’t have such evidential symmetry. In the conception example we simply can’t say anything about the long-term effects of choosing to conceive a child on a particular day, and so we have evidential symmetry. But what about, say, giving money to the Against Malaria Foundation? It seems that we can say some things about both the short-term and long-term effects of doing so. We can reasonably say that giving to AMF will save lives and therefore probably have long-term population effects. We can reasonably say that population changes should affect things like climate change, animal welfare, and economic growth. We can also say that the total magnitude of these indirect (unintended) effects is very likely to exceed the magnitude of the direct (intended) effect, namely averting deaths due to malaria. However, we arguably can’t feel justified in saying that the net effect of all of these impacts is positive in terms of value (even in expectation): there are just too many foreseeable effects that might plausibly go in different directions and that seem large in expected magnitude. Forming a realistic credence on even the sign of the net value of giving to AMF seems pretty hopeless. This scenario is what Greaves calls “complex cluelessness”, and she argues that it poses a problem for someone who wants to do the most good by giving to AMF.

Can we just ignore all of those “indirect” effects because we can’t actually quantify them, and just go with the direct effects that we can quantify (averting deaths due to malaria)? This seems questionable. Imagine an omniscient being carries out a perfect cost-benefit analysis of giving to AMF which accurately includes impacts on saving people from malaria, climate change, animal welfare and economic growth (i.e. the things we might reasonably think will be affected by giving to AMF). Now imagine the omniscient being blurs out all of the analysis, except the ‘direct effect’ of saving lives, before handing the analysis to you. Personally, because I know that the foreseeable ‘indirect’ effects make up the vast majority of the total value and could in certain cases realistically be negative, I wouldn’t feel comfortable just going with the one ‘direct effect’ I can see. I would feel completely clueless about whether I should give to AMF or not. Furthermore, this ‘blurred’ position seems to be the one we are currently in with regard to GiveWell’s analysis of AMF.

I’m not absolutely sure that this cluelessness critique of AMF is justified, but I do think that reasoning in this way illustrates that cluelessness can be a problem for shorttermists, and that there seems to be no real reason why the problem should be more salient for longtermists. Every intervention has long-term effects, and deep uncertainty about these effects is often problematic when deciding how to do the most good.

Greaves actually argues that deliberately trying to beneficially influence the course of the very far future might allow us to find interventions where we more robustly have some clue that what we’re doing is beneficial, and of how beneficial it is. In other words, Greaves thinks that cluelessness may be less of a problem for longtermists. This may be the case because, for many longtermist interventions, the direct (intended) impact may be so large in expected value as to outweigh the indirect (unintended) impacts. For example, AI alignment research may be so good in expected value as to nullify relatively insubstantial concerns about indirect harms of engaging in such research. Overall I’m not sure if cluelessness is less of an issue for longtermists, but it seems possible.

“Longtermists have to ignore non-human animals”

Possible misconception: “I’m mainly concerned about reducing/​preventing suffering of non-human animals, but longtermism is a philosophy centred around humans. Therefore I’m not really interested.”

My response: “Longtermists shouldn’t ignore non-human animals. It is plausible that there are things we can do to address valid concerns about the suffering of non-human animals in the far future. More research into the tractability of certain interventions could have high expected value.”

A nitpick I have with Greaves and MacAskill’s paper is that they don’t mention non-human animals. For example, when they are arguing that the future is vast in expectation they say: “It should be uncontroversial that there is a vast number of expected beings in the future of human civilisation.” I take this to imply that they restrict their analysis to humans. I see some possible reasons for this:

  1. In aiming to introduce longtermism to the (non-EA) academic world, the authors decided to focus on humans in order to make the core argument seem less ‘weird’, or to remain ‘conservative’ in terms of numbers of beings so as to make the argument more convincing

  2. For some reason, non-human animals aren’t as relevant from a longtermist point of view

The first reason is a possibility, and it may be a fair one.

The second reason is an interesting possibility. It could be true if, for instance, there aren’t a vast number of expected non-human animals in the future. It does indeed seem possible that farmed animals may cease to exist in the future on account of being made redundant by cultivated meat, although I certainly wouldn’t be sure of this given some of the technical problems with scaling up cultivated meat to become cost-competitive with cheap animal meat. Wild animals seem highly likely to continue to exist for a long time, and currently vastly outnumber humans. Therefore it seems that we should consider non-human animals, and perhaps particularly wild animals, when aiming to ensure that the long-term future goes well.

The next question to ask is whether there are attractor states for non-human animals that differ in terms of value. I think there are. For example, just as with humans, non-human animal extinction and non-extinction are both attractor states. It is plausible that the extinction of both farmed and wild animals would be better than their continued existence, as some have suggested that wild animals tend to experience far more suffering than pleasure, and it is clear that factory-farmed animals undergo significant suffering. Therefore causing non-human animal extinction may have high value. Even if some non-human animals do have positive welfare, it may be better to err on the side of caution and cause them to go extinct, making use of any resources or space that is freed up to support beings that have greater capacity for welfare and that are at lower risk of being exploited, e.g. humans (although this may not be desirable under certain population axiologies).

The next question is whether we can tractably cause the extinction of non-human animals. In terms of farmed animals, as previously mentioned, cultivated meat has the potential to render them redundant. Further research into overcoming the technical difficulties of scaling cultivated meat could have high expected value. In terms of wild animals, mass sterilisation could theoretically help us achieve their extinction. However, the tractability of causing wild animals to go extinct, and the indirect effects of doing so, are uncertain. Overall, one could argue that making non-human animals go extinct may be less urgent than mitigating existential risk, as the former can be done at any time, although I do think it might be particularly difficult if we have spread to the stars and brought non-human animals with us.

There are potential longtermist interventions that have a non-human animal focus and that don’t centre around ensuring their extinction. Tobias Baumann suggests that expanding the moral circle to include non-human animals might be a credible longtermist intervention, as a good long-term future for all sentient beings may be unlikely as long as people think it is right to disregard the interests of animals for frivolous reasons such as the taste of meat. Non-human animals are moral patients that are essentially at our mercy, and it seems plausible that there are non-extinction attractor states for these animals. For example, future constitutions might (or might not) explicitly include protections for non-human animals, and then persist for a very long time. Depending on whether they include protections or not, the fate of non-human animals in the far future could be vastly better or worse. Trying to ensure that future constitutions do provide protections for non-human animals might require us to expand the moral circle such that a significant proportion of society believes non-human animals to have moral value. It isn’t clear however how tractable moral circle expansion is, and further research on this could be valuable.

Finally, it is worth noting that some of Greaves and MacAskill’s proposed longtermist interventions could help reduce animal suffering, even if that isn’t the main justification for carrying them out. For example, aligned superintelligent AI could help us effectively help animals.

“Longtermists won’t reduce suffering today”

Possible misconception: “Greaves and MacAskill say we can ignore short-term effects. That means longtermists will never reduce current suffering. This seems repugnant.”

My response: “It is indeed true that ignoring short-term effects means ignoring current suffering, and people may be justified in finding this repugnant. However, it is worth noting that longtermists may end up reducing suffering today as a by-product of trying to improve the far future. It isn’t clear however that this is the case when reducing existential risk. In any case, it is important to remember that longtermists only claim that longtermism holds on the current margin.”

Greaves and MacAskill’s claim that we can ignore “all the effects contained in the first 100 (or even 1000) years” is certainly a striking claim. Essentially, they claim this because the magnitude of short-term effects we can tractably influence will simply pale in comparison to the magnitude of the long-term effects we can tractably influence, if strong longtermism is true. It is natural to jump to the conclusion that this necessarily means longtermists won’t reduce suffering today.

It is indeed true that Greaves and MacAskill’s claim implies that longtermists shouldn’t concern themselves with current suffering (remember I am referring to the strong version of longtermism here). One could be forgiven for finding this repugnant, and it makes me feel somewhat uneasy myself.

However, it is worth noting that it is possible that longtermists may end up reducing suffering today as a by-product of trying to improve the far future. Indeed one of the plausible longtermist interventions that Greaves and MacAskill highlight is ‘speeding up progress’, which would likely involve some alleviation of current suffering. Tyler Cowen argues that boosting economic growth may be the most important thing to do if one has a long-term focus, which should entail a reduction in current suffering.

In addition, I mentioned previously that moral circle expansion could be a credible longtermist intervention. It seems plausible that one of the most effective ways to expand the moral circle could be to advance cultivated or plant-based meat, as stopping people from eating animals may then allow them to develop moral concern for them. In this case, short-term and expected long-term suffering reduction could coincide, although this is all admittedly fairly speculative.

In practice however, longtermists tend to focus on reducing existential risks, which indeed doesn’t seem to entail reducing current suffering. Note however that part of Greaves and MacAskill’s argument was that longtermist interventions, if they exist, are likely to be underexploited at the current margin. This is because other “do-gooders” tend to be focused on current suffering, as opposed to the welfare of future generations. The case for longtermism may therefore be contingent on where others are currently placing their attention. It isn’t clear, and actually seems quite unlikely, that longtermists should want everyone in the world to work on reducing existential risk and ignore current suffering completely. Even if the future is vast, there’s only so much that people can do to exploit that, and we can expect diminishing returns for longtermists.

“Longtermists have to think future people have the same moral value as people today”

Possible misconception: “I think we have special obligations towards people alive today, or at least that it is permissible to place more weight on people alive today. Therefore I reject longtermism.”

My response: “Whilst there may be a justifiable reason for privileging people alive today, the expected vastness of the future should still lead us to a longtermist conclusion.”

This one might surprise people. After all, the claim that all consequences matter equally regardless of when in time they occur (which is generally considered to be quite uncontroversial) is one of the foundations of Greaves and MacAskill’s argument. I contend however that you don’t need this assumption for longtermism to remain valid.

My explanation for this is essentially based on Andreas Mogensen’s paper “The only ethical argument for positive 𝛿”. Delta in this context is the “rate of pure time preference” and reflects the extent to which a unit of utility or welfare accruing in the future is valued less than an equal unit of utility enjoyed today. If delta is greater than zero, we are essentially saying that the welfare of future people matters less simply because it lies in the future. If delta equals zero, equal units of utility are valued equally, regardless of when they occur.
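
For readers unfamiliar with the notation, the standard discounted form of total value (written in my own notation, with u_t the welfare accruing in year t) is:

```latex
V \;=\; \sum_{t=0}^{\infty} \frac{u_t}{(1+\delta)^{t}}
```

Setting delta to zero weights all generations equally, while any constant positive delta shrinks the weight on year t geometrically, which is why even a small positive rate makes the far future count for almost nothing. Mogensen’s argument, discussed below, is about whether a positive rate can be justified at all, and if so what shape it should take.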

In his paper, Mogensen argues that a positive delta may be justifiable in terms of agent-relative reasons, and furthermore that this seems to be the only credible ethical argument for a positive delta. The basic idea is that we may have justification for being partial to certain individuals, such as our children. For example, if someone chooses to save their own child from a burning building as opposed to two children they don’t know from a separate building, we tend not to judge them, and in fact we might even think they did the right thing. Applying this partiality thinking to ‘the world community now’, Mogensen argues that we may be justified in caring more about the next generation than about those in succeeding generations. Mogensen calls this ‘discounting for kinship’.

Importantly however, Mogensen notes that under such discounting we shouldn’t value the welfare of one of our distant descendants any less than the welfare of some stranger who is alive today. These two people seem similarly distant from us, just across different dimensions. Therefore, provided we care about strangers at least to some extent, which seems reasonable, we should also care about distant descendants. So, whilst delta can be greater than zero, it should decline towards zero very quickly as we look further into the future, so that distant descendants retain adequate value. Given this, if the future is indeed vast in expectation and there are tractable ways to influence the far future, the longtermist thesis should remain valid.

“Longtermists must be consequentialists”

Possible misconception: “The longtermist argument seems to rest on some naive addition of expected utilities over time. As someone who doesn’t feel comfortable with maximising consequentialism, I reject longtermism.”

My response: “A particular concern for the future may be justified using other ethical theories including deontology and virtue ethics.”

Toby Ord has put forward arguments for why reducing existential risk may be very important for deontologists and virtue ethicists. These arguments also seem to be applicable to longtermism more generally.

In The Precipice, Ord highlights a deontological foundation for reducing existential risk by raising Edmund Burke’s idea of a partnership of the generations. Burke, one of the founders of political conservatism, wrote about how humanity’s remarkable success has relied on intergenerational cooperation, with each generation building on the work of those that have come before. In 1790 Burke wrote of society:

“It is a partnership in all science; a partnership in all art; a partnership in every virtue, and in all perfection. As the ends of such a partnership cannot be obtained except in many generations, it becomes a partnership not only between those who are living, but between those who are living, those who are dead, and those who are to be born.”

Ord highlights that such an idea might give us reasons to safeguard humanity that are grounded in our past—obligations to our grandparents, as well as our grandchildren. Ord suggests that we might have a duty to repay a debt to past generations by “paying it forward” to future generations.

Ord also appeals to the virtues of humanity, likening humanity’s current situation to that of an adolescent that is often incredibly impatient and imprudent. Ord writes:

“Our lack of regard for risks to our entire future is a deficiency of prudence. When we put the interests of our current generation far above those of the generations to follow, we display our lack of patience. When we recognise the importance of our future yet still fail to prioritise it, it is a failure of self-discipline. When a backwards step makes us give up on our future—or assume it to be worthless—we show a lack of hope and perseverance, as well as a lack of responsibility for our own actions.”

Ord hopes that we can grow from an impatient, imprudent adolescent, to a wiser, more mature adult, and that this will necessarily require a greater focus on the future of humanity.

“Longtermists must be total utilitarians”

Possible misconception: “Reducing extinction risk is only astronomically important if one accepts total utilitarianism, which I reject. Therefore I’m not convinced by longtermism.”

My response: “It may be that there are tractable longtermist interventions that improve average future well-being, conditional on humanity not going prematurely extinct. These will be good by the lights of many population axiologies.”

OK, so this isn’t actually my response; it is covered in Greaves and MacAskill’s paper. They concede that the astronomical value of reducing extinction risk relies on a total utilitarian axiology. This is because the leading alternative view—a person-affecting one—doesn’t find extinction to be astronomically bad.

However, they note that their other suggested longtermist interventions, including mitigating climate change, institutional design, and ensuring aligned AI, are attempts to improve average future well-being, conditional on humanity not going prematurely extinct. They then state that any plausible axiology must agree that this is a valuable goal, and therefore that the bulk of their longtermist argument is robust to plausible variations in population axiology.

From my point of view, the two animal-focused interventions that I floated earlier (making non-human animals go extinct, and expanding the moral circle) are also pretty robust to population axiology. Both of them centre on the importance of reducing suffering, which any population axiology should consider important. One could counter and say that causing non-human animals to go extinct may be bad if many non-human animals live lives that are worth living, appealing to a total utilitarian population axiology. However, as I stated earlier, even if some non-human animals do have positive welfare, it may be better to err on the side of caution and cause them to go extinct, making use of any resources or space that is freed up to support beings that have greater capacity for welfare and that are at lower risk of being exploited, e.g. humans.

“Longtermists must be classical utilitarians”

Possible misconception: “I think it is more valuable to improve the wellbeing of those with lower wellbeing (I’m a prioritarian). Therefore I think it more valuable to improve the lives of those in extreme poverty today, as opposed to future people who will be better off.”

My response: “It isn’t clear that future people will in fact be better off. Also, the prioritarian weighting may need to be quite extreme to avoid the longtermist conclusion.”

Again—not actually my response. This is also covered in Greaves and MacAskill’s paper (and given that they word this pretty well I’m stealing much of their wording here).

The authors first note that there are serious possibilities that future people will be even worse off than the poorest people today — for example, because of climate change, misaligned artificial general superintelligence, or domination by a repressive global political regime. They also note that many of their contenders for longtermist interventions are precisely aimed at improving the plight of these very badly off possible future people, or reducing the chance that they have terrible as opposed to flourishing lives.

Otherwise, the authors note that, given the large margin by which (they argue) longtermist interventions deliver larger improvements to aggregate welfare than similarly costly shorttermist interventions, only quite an extreme priority weighting would lead to wanting to address global poverty over longtermist interventions. Even if some degree of prioritarianism is plausible, the degree required might be too extreme to be plausible by any reasonable lights.

“Longtermists must embrace expected utility theory”

Possible misconception: “Greaves and MacAskill’s argument relies on maximising expected value. I don’t subscribe to this decision theory.”

My response: “They consider a few other decision theories, and conclude that longtermism is robust to these variations.”

Again—not my response. I will have to completely defer to Greaves and MacAskill on this one. In their strong longtermism paper they consider a few alternatives to maximising expected value.

They note that under ‘Knightian uncertainty’ (when there is little objective guidance as to which probability distributions over possible outcomes are appropriate vs inappropriate) a common decision rule is “maximin”, whereby one chooses the option whose worst possible outcome is least bad. They argue that this supports axiological longtermism, as the worst outcomes are ones in which the vast majority of the long-run future is of highly negative value (or, at best, has zero or very little positive value). Therefore, according to maximin, the only consideration that is relevant to ex ante axiological option evaluation is the avoidance of these long-term catastrophic outcomes.

They also consider risk-weighted expected utility theory. It’s a similar story to the above: risk aversion with respect to welfare (i.e. value being a concave function of total welfare) makes it more important to avoid very low welfare outcomes.
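
To illustrate how these decision rules can come apart from (or agree with) straightforward expected value maximisation, here is a minimal sketch in Python. The options, probabilities and welfare numbers are entirely made up for illustration; the “longtermist” option is imagined as one that makes the worst long-run outcome far less severe, which is the kind of case Greaves and MacAskill have in mind.

```python
# Toy comparison of three decision rules on two hypothetical options.
# Each option is a list of (probability, total-welfare outcome) pairs; all numbers are illustrative.

options = {
    # A sure short-term gain, with the background risk of a long-run catastrophe left untouched.
    "shorttermist": [(0.99, 1_000), (0.01, -1_000_000_000)],
    # No short-term gain, but the worst long-run outcome is made far less severe.
    "longtermist": [(0.99, 0), (0.01, -1_000_000)],
}

def expected_value(lottery):
    """Standard expected value: probability-weighted sum of outcomes."""
    return sum(p * w for p, w in lottery)

def maximin(lottery):
    """Evaluate an option solely by its worst possible outcome."""
    return min(w for _, w in lottery)

def risk_averse_value(lottery):
    """A crude stand-in for risk aversion with respect to welfare:
    apply a concave transform (losses count double) before taking expectations."""
    def concave(w):
        return w if w >= 0 else 2.0 * w
    return sum(p * concave(w) for p, w in lottery)

for name, lottery in options.items():
    print(f"{name:>12}  EV: {expected_value(lottery):>12,.0f}"
          f"  maximin: {maximin(lottery):>14,}"
          f"  risk-averse EV: {risk_averse_value(lottery):>14,.0f}")
```

On these made-up numbers all three rules favour the “longtermist” option; maximin in particular cares about nothing except avoiding the catastrophic worst case, which mirrors the point Greaves and MacAskill make.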

Admittedly, I am unsure if Greaves and MacAskill have tackled this question thoroughly enough. I look forward to further work on this.

Genuine Issues for Longtermism

Despite my defence of longtermism above, I do think that there remain genuine issues for longtermists to grapple with.

Tractability

A challenge for longtermists remains the tractability objection. Greaves and MacAskill “regard this as the most serious objection to axiological strong longtermism”.

It was the concept of attractor states that allowed Greaves and MacAskill to avoid the “washing out” of expected value over time. Even if attractor states exist however, it must be possible to somewhat reliably steer between them for longtermism to be valid. In the case of reducing existential risk for example, there have to be things that we can actually do to reduce these risks, and not just temporarily. Toby Ord argues in The Precipice that there are such things we can do, but it seems that more research on this question would be useful.

What about improving institutions? Greaves and MacAskill argue that institutions can be constituted in ways that are better for wellbeing than others and that these institutions may persist indefinitely. For institutional reform to be a credible longtermist intervention it must be possible to figure out what better institutions look like (from a longtermist point of view) and it must be possible, in practice, to actually redesign institutions in these ways. MacAskill and John suggest interesting ideas, but research in this area still seems quite nascent.

As mentioned, the tractability of interventions like moral circle expansion or making non-human animals go extinct is also disputable, and further research on this could be valuable.

Fanaticism

In “The Epistemic Challenge to Longtermism”, Christian Tarsney develops a simple model in which the future gets continually harder to predict, and then considers whether this means that the expected value of our present options is mainly determined by short-term considerations. Tarsney’s conclusion is that expected value maximisers should indeed be longtermists. However, Tarsney cautions that, on some plausible empirical worldviews, this conclusion may rely on minuscule probabilities of astronomical payoffs. Whether expected value maximisation is the correct decision theory in such cases isn’t necessarily clear, and is a question that philosophers continue to grapple with. If one isn’t comfortable with basing decisions on minuscule probabilities of astronomical payoffs, the case for longtermism may not hold up.
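
To see why such probabilities can carry the argument for an expected value maximiser, consider a deliberately crude illustration with numbers of my own choosing, not Tarsney’s: suppose one option delivers 10^10 units of value with certainty, while another has only a one-in-ten-billion chance of securing a stable, space-faring future worth 10^30 units of value.

```latex
% Illustrative numbers only
\mathbb{E}[\text{certain option}] = 10^{10}
\qquad
\mathbb{E}[\text{long-shot option}] = 10^{-10} \times 10^{30} = 10^{20}
```

The expected value maximiser takes the long shot, and by a factor of ten billion; whether anyone should really let such a tiny probability dominate their decision is precisely the fanaticism worry.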

What small probabilities drive the superiority of longtermism in Tarsney’s model? In Tarsney’s talk about his paper he highlights the following:

  • The probability of being able to steer between attractor states

  • The probability of large-scale space settlement, conditional on survival

  • The probability of a “Dyson sphere” rather than “space opera” scenario, conditional on space settlement

  • The probability of a stable future

Tarsney also suggests, given current beliefs about such empirical parameters, that we should be longtermists on the scale of thousands or millions of years, as opposed to billions or trillions. This latter point isn’t really an argument against longtermism, but it is worth bearing in mind.

Concluding Remarks

There remain genuine issues for longtermists to grapple with and I look forward to further research in these areas. However, there are also what I believe to be fairly common misconceptions about longtermism that can lead people to level objections that may not be entirely valid, or at least are more nuanced than is generally realised. I think it is important for the EA community to appreciate these nuances, which should hopefully aid the goal of figuring out how we can do the most good.