Trajectory change

In longtermism, a trajectory change is a persistent change to the total value of the world at every point in the long-term future. Trajectory changes have also been described as slight or significant changes to “the world’s development trajectory, or just trajectory for short”, where the trajectory refers to:[1]

a rough summary way the future will unfold over time. The summary includes various facts about the world that matter from a macro perspective, such as how rich people are, what technologies are available, how happy people are, how developed our science and culture is along various dimensions, and how well things are going all-things-considered at different points of time. It may help to think of the trajectory as a collection of graphs, where each graph in the collection has time on the x-axis and one of these other variables on the y-axis.

[...] If we ever prevent an existential catastrophe, that would be an extreme example of a trajectory change. There may also be smaller trajectory changes. For example, if some species of dolphins that we really loved were destroyed, that would be a much smaller trajectory change.

Most longtermist interventions aim to bring about trajectory changes, of which existential risk reduction is the most prominent example. Some longtermist interventions, however, pursue other objectives, such as speeding up development.
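As a rough sketch (not a formalization drawn from the entry itself; the notation loosely follows the framing in Toby Ord’s “Shaping Humanity’s Longterm Trajectory”, listed among the posts below), the trajectory can be summarized by an instantaneous-value curve whose integral over time is the total value of the future:

```latex
% Sketch only: v(t) is an assumed all-things-considered summary of how well
% things are going at time t (one aggregate of the "graphs" described above).
% V is the total value of the long-term future, the area under the trajectory.
\[
  V = \int_{0}^{\infty} v(t)\,\mathrm{d}t
\]
% A trajectory change is an intervention that shifts the curve from v(t) to a
% new curve v'(t), with a difference that persists rather than washing out:
\[
  \Delta V = \int_{0}^{\infty} \left( v'(t) - v(t) \right) \mathrm{d}t
\]
% Preventing an existential catastrophe is the extreme case in which v(t) would
% otherwise drop to roughly zero forever; the dolphin example above is a small
% but still persistent shift in v(t).
```

On this framing, trajectory changes are interventions whose effect on v(t) persists indefinitely, whereas merely speeding up development chiefly shifts when value is realized; how much that matters is itself debated (see, for example, “On the Value of Advancing Progress” below).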

Further reading

Forethought Foundation (2018) Longtermism: Potential research projects, Forethought Foundation, November.

Koehler, Arden, Benjamin Todd, Robert Wiblin & Keiran Harris (2020) Benjamin Todd on varieties of longtermism and things 80,000 Hours might be getting wrong, The 80,000 Hours Podcast, September.

Related entries

cultural persistence | existential risk | longtermism

  1. Beckstead, Nick (2013) A proposed adjustment to the astronomical waste argument, Effective Altruism Forum, May 27.

Posts tagged Trajectory change

A proposed adjustment to the astronomical waste argument
Nick_Beckstead · 27 May 2013 · 43 points · 0 comments · 12 min read

Past and Future Trajectory Changes
N N · 28 Mar 2022 · 32 points · 5 comments · 12 min read (goodoptics.wordpress.com)

Shaping Humanity’s Longterm Trajectory
Toby_Ord · 18 Jul 2023 · 171 points · 57 comments · 2 min read (files.tobyord.com)

Modeling the Human Trajectory (Open Philanthropy)
Aaron Gertler 🔸 · 16 Jun 2020 · 50 points · 4 comments · 2 min read (www.openphilanthropy.org)

A relatively atheoretical perspective on astronomical waste
Nick_Beckstead · 6 Aug 2014 · 9 points · 8 comments · 8 min read

Rowing, Steering, Anchoring, Equity, Mutiny
Holden Karnofsky · 30 Nov 2021 · 94 points · 32 comments · 18 min read

Robert Wiblin: Making sense of long-term indirect effects
EA Global · 6 Aug 2016 · 14 points · 0 comments · 17 min read (www.youtube.com)

[Question] Is existential risk more pressing than other ways to improve the long-term future?
Eevee🔹 · 20 Aug 2020 · 23 points · 1 comment · 1 min read

Thoughts on “trajectory changes”
Eevee🔹 · 7 Apr 2021 · 16 points · 0 comments · 1 min read

[Question] Most harmful people in history?
SiebeRozendal · 11 Sep 2022 · 17 points · 9 comments · 1 min read

The Governance Problem and the “Pretty Good” X-Risk
Zach Stein-Perlman · 28 Aug 2021 · 23 points · 4 comments · 11 min read

Clarifying existential risks and existential catastrophes
MichaelA🔸 · 24 Apr 2020 · 39 points · 3 comments · 7 min read

Humanity’s vast future and its implications for cause prioritization
Eevee🔹 · 26 Jul 2022 · 38 points · 3 comments · 5 min read (sunyshore.substack.com)

[Question] What are the best resources on comparing x-risk prevention to improving the value of the future in other ways?
LHA · 26 Jun 2022 · 8 points · 3 comments · 1 min read

Crucial questions for longtermists
MichaelA🔸 · 29 Jul 2020 · 104 points · 17 comments · 19 min read

Helping animals or saving human lives in high income countries is arguably better than saving human lives in low income countries?
Vasco Grilo🔸 · 21 Mar 2024 · 12 points · 10 comments · 12 min read

Long Reflection Reading List
Will Aldred · 24 Mar 2024 · 92 points · 7 comments · 14 min read

Robust longterm comparisons
Toby_Ord · 15 May 2024 · 45 points · 3 comments · 7 min read

On the Value of Advancing Progress
Toby_Ord · 11 Jul 2024 · 119 points · 39 comments · 9 min read

Permanent Societal Improvements
Larks · 6 Sep 2015 · 11 points · 10 comments · 4 min read

“Disappointing Futures” Might Be As Important As Existential Risks
MichaelDickens · 3 Sep 2020 · 96 points · 18 comments · 25 min read

How tractable is changing the course of history?
Jamie_Harris · 22 May 2019 · 41 points · 2 comments · 7 min read (www.sentienceinstitute.org)

Balancing safety and waste
Daniel_Friedrich · 17 Mar 2024 · 6 points · 0 comments · 7 min read

Why we may expect our successors not to care about suffering
Jim Buhler · 10 Jul 2023 · 63 points · 31 comments · 8 min read

How can we influence the long-term future?
Tobias_Baumann · 6 Mar 2019 · 11 points · 1 comment · 4 min read (s-risks.org)

A Longtermist Case for Theological Inquiry
Garrett Ehinger · 17 Nov 2022 · 17 points · 7 comments · 6 min read

Introducing the AI Objectives Institute’s Research: Differential Paths toward Safe and Beneficial AI
cmck · 5 May 2023 · 43 points · 1 comment · 8 min read

Assessing the case for population growth as a priority
FC · 15 Nov 2022 · 119 points · 10 comments · 21 min read

[Question] How binary is longterm value?
Vasco Grilo🔸 · 1 Nov 2022 · 13 points · 15 comments · 1 min read

The Grabby Values Selection Thesis: What values do space-faring civilizations plausibly have?
Jim Buhler · 6 May 2023 · 47 points · 12 comments · 4 min read