What We Owe the Future

Last edit: 17 Aug 2022 17:25 UTC by Leo

What We Owe the Future is a 2022 book by William MacAskill. The book makes the case for longtermism—defined as the view that positively affecting the long-run future is a key moral priority of our time—and explores what follows from that view.


The book proposes that we can make the future better in two ways: “by averting permanent catastrophes, thereby ensuring civilisation’s survival; or by changing civilisation’s trajectory to make it better while it lasts...Broadly, ensuring survival increases the quantity of future life; trajectory changes increase its quality”.: 35–36

Part 1: Longtermism

Part 1 introduces and advocates for longtermism, which MacAskill defines as “the idea that positively influencing the long-term future is a key moral priority of our time.”: 4 This part of the book also describes how we, the current generation, can shape the future through our actions.: 29 

MacAskill’s argument for longtermism has three parts. First, future people count morally as much as the people alive today, which he supports by drawing the analogy that “distance in time is like distance in space. People matter even if they live thousands of miles away. Likewise, they matter even if they live thousands of years hence”.: 10

Second, the future is immensely big, since humanity may survive for a very long time and there may be many more people alive at any given time. MacAskill points out that the “future of civilisation [could be] extremely long. The earth will remain habitable for hundreds of millions of years...And if humanity ultimately takes to the stars, the timescales become literally astronomical”.: 14

Third, the future could be very good or very bad, and our actions may affect what it will be. It could be very good if technological and moral progress continue to improve the quality of life into the future, just as they have greatly improved our lives compared to our ancestors; yet, the future could also be very bad if technology were to allow a totalitarian regime to control the world or a world war to completely destroy civilisation.: 19–21 MacAskill notes that our present time is highly unusual in that “we live in an era that involves an extraordinary amount of change”: 26—both relative to the past (where rates of economic and technological progress were very slow) and to the future (since current growth rates cannot continue for long before hitting physical limits).: 26–28 From this he concludes that we live at a pivotal moment in human history, where “the world’s long-run fate depends in part on the choices we make in our lifetimes”: 6 since “society has not yet settled down into a stable state, and we are able to influence which stable state we end up in”.: 28

Part 1 ends with a chapter on how individuals can shape the course of history. MacAskill introduces a three-part framework for thinking about the future, which states that the long-term value of an outcome we may bring about depends on its significance, persistence, and contingency.: 31–33 He explains that significance “is the average value added by bringing about a certain state of affairs”, persistence means “how long that state of affairs lasts, once it has been brought about”, and contingency “refers to the extent to which the state of affairs depends on an individual’s action”.: 32 
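MacAskill presents this framework informally rather than as an equation. As a rough sketch (a reconstruction for illustration, not notation from the book), the expected long-term value of bringing about a state of affairs can be thought of as the product of the three factors, with contingency acting as a counterfactual discount:

```latex
% Hedged reconstruction of the significance--persistence--contingency
% framework; V and S are not the book's notation.
% V(S): long-term value of bringing about state of affairs S.
V(S) \;\approx\;
\underbrace{\mathrm{significance}(S)}_{\text{average value added per unit time}}
\;\times\;
\underbrace{\mathrm{persistence}(S)}_{\text{how long } S \text{ lasts}}
\;\times\;
\underbrace{\mathrm{contingency}(S)}_{\text{degree to which } S \text{ depends on the action}}
```

On this reading, an outcome matters most in the long run when it is valuable per unit time, long-lasting once achieved, and would not have come about anyway without the action in question.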

Part 2: Trajectory changes

Part 2 investigates how moral change and value lock-in may constitute trajectory changes, affecting the long-run value of future civilisation. MacAskill argues that “we are living through a period of plasticity, that the moral views that shape society are like molten glass that can be blown into many different shapes. But the glass is cooling, and at some point, perhaps in the not-too-distant future, it might set”.: 102

MacAskill suggests that moral and cultural values are malleable, contingent, and potentially long-lived—if history were to be rerun, the dominant global values may be very different from those in our world. For example, he argues that the abolition of slavery may not have been morally or economically inevitable.: 70 Abolition may thus have been a turning point in the entirety of human history, supporting the idea that improving society’s values may positively influence the long-run future.

MacAskill warns of a potential value lock-in, “an event that causes a single value system, or set of value systems, to persist for an extremely long time”.: 78 He notes that if “value lock-in occurred globally, then how well or poorly the future goes would be determined in significant part by the nature of those locked-in values”.: 78 Various past rulers sought to lock in their values—some with more success, like the Han dynasty in ancient China entrenching Confucianism for over a millennium,: 78 and some with less success, like Hitler’s proclaimed “Thousand-Year Reich”.: 92 MacAskill states that the “key issue is which values will guide the future. Those values could be narrow-minded, parochial, and unreflective. Or they could be open-minded, ecumenical, and morally exploratory”.: 88

Value lock-in may result from certain technological advances, according to MacAskill. In particular, he argues that the development of artificial general intelligence (AGI)—an AI system “capable of learning as wide an array of tasks as human beings can and performing them to at least the same level as human beings”: 80 —could result in the permanent lock-in of the values of those who control or have programmed the AGI.: 80–86 This may occur because AGI systems may be both enormously powerful and potentially immortal, since they “could replicate themselves as many times as they wanted, just as easily as we can replicate software today”.: 86 MacAskill concludes that “if this happened, then the ruling ideology could in principle persist as long as civilisation does. And there would no longer be competing value systems that could dislodge the status quo”.: 86

Part 3: Safeguarding civilisation

Part 3 explores how to protect humanity from risks of extinction, unrecoverable civilisational collapse, and long-run technological stagnation.

MacAskill discusses several risks of human extinction, focusing on engineered pathogens, misaligned artificial general intelligence (AGI), and great power war. He points to the rapid progress in biotechnology and states that “engineered pathogens could be much more destructive than natural pathogens because they can be modified to have dangerous new properties”, such as a pathogen “with the lethality of Ebola and the contagiousness of measles”.: 108 MacAskill points to other scholars who “put the probability of an extinction-level engineered pandemic this century at around 1 percent” and references his colleague Toby Ord, who estimates the probability at 3 percent in his 2020 book The Precipice: Existential Risk and the Future of Humanity.: 113 Ensuring humanity’s survival by reducing extinction risks may significantly improve the long-term future by increasing the number of flourishing future lives.: 35–36

The next chapter discusses the risk of civilisational collapse, referring to events “in which society loses the ability to create most industrial and postindustrial technology”.: 124 MacAskill discusses several potential causes of civilisational collapse—including extreme climate change, fossil fuel depletion, and nuclear winter caused by nuclear war—concluding that civilisation appears very resilient, with recovery after a collapse being likely.: 127–142 Yet he believes that the “lingering uncertainty is more than enough to make the risk of unrecovered collapse a key longtermist priority”.: 142

MacAskill next considers the risk of long-lasting technological and economic stagnation. While he considers indefinite stagnation unlikely, “it seems entirely plausible that we could stagnate for hundreds or thousands of years”.: 144 This matters for longtermism for two reasons. First, “if society stagnates technologically, it could remain stuck in a period of high catastrophic risk for such a long time that extinction or collapse would be all but inevitable”.: 142 Second, the society emerging after the period of stagnation may be guided by worse values than society today.: 144

Part 4: Assessing the end of the world

Part 4 discusses how bad the end of humanity would be, which depends on whether it is morally good for happy people to be born and whether the future will be good or bad. The answers to these questions, according to MacAskill, “determine whether we should focus on trajectory changes or on ensuring survival, or on both”.: 163

Whether making happy people improves the world is a key question in population ethics, which concerns “the evaluation of actions that might change who is born, how many people are born, and what their quality of life will be”.: 168 Answering this question determines whether we should “care about the loss of those future people who will never be born if humanity goes extinct in the next few centuries”.: 188 After discussing several population ethical theories—including the total view, the average view, critical-level theories, and person-affecting views—MacAskill concludes that “it is a loss if future people are prevented from coming into existence—as long as their lives would be good enough. So the early extinction of the human race would be a truly enormous tragedy”.: 189

On whether the future will be good or bad, MacAskill notes that the “more optimistic we are, the more important it is to avoid permanent collapse or extinction; the less optimistic we are, the stronger the case for focusing instead on improving values or other trajectory changes”.: 192 To answer the question, MacAskill compares how the quality of life of humans and nonhuman animals has changed over time and how both groups should be weighted numerically.: 194–213 While arguing that the billions of animals suffering in factory farms likely have negative well-being—they would have been better off never having been born—MacAskill concludes optimistically that “we should expect the future to be positive on balance”.: 193 He justifies this optimism in several ways, most crucially by pointing to “an asymmetry in the motivation of future people—namely, people sometimes produce good things just because the things are good, but people rarely produce bad things just because they are bad”.: 218 

Part 5: Taking action

Part 5 details what readers can do to take action based on the book’s arguments.

MacAskill emphasises the significance of professional work, writing that “by far the most important decision you will make, in terms of your lifetime impact, is your choice of career”.: 234 He points the reader to the nonprofit 80,000 Hours, which he co-founded and which conducts research and provides advice on which careers have the largest positive social impact, especially from a longtermist perspective. One career opportunity he highlights is movement-building work—to “convince others to care about future generations...and to act to positively influence the long term”.: 243

He makes the case that the common emphasis on personal behaviour and consumption, “though understandable, is a major strategic blunder for those of us who want to make the world better”.: 243 Instead, he argues that donations to effective causes and organisations are much more impactful than changes to our personal consumption.: 232 Beyond donations, he elaborates on three other impactful personal decisions: political activism, spreading good ideas, and having children.: 233

MacAskill acknowledges the pervasive uncertainty, both moral and empirical, that surrounds longtermism and offers four lessons to help guide attempts to improve the long-term future: taking robustly good actions, building up options, learning more, and avoiding causing harm.: 226, 240

Further reading

MacAskill, William (2022) What We Owe the Future, New York: Basic Books.

External links

What We Owe the Future. Official website.

Related entries

longtermism | The Precipice | William MacAskill
