Macrostrategy

Macrostrategy is the study of how present-day actions may influence the long-term future of humanity.[1]

Macrostrategy as a field of research was pioneered by Nick Bostrom, and it was a core focus area of the Future of Humanity Institute.[2] Some authors distinguish between “foundational” and “applied” global priorities research;[3] on this distinction, macrostrategy is closely related to the former. It is concerned with the assessment of general hypotheses, such as the hinge of history hypothesis, the vulnerable world hypothesis, and the technological completion conjecture; the development of conceptual tools, such as existential risk, crucial considerations, and differential progress; and the analysis of the impacts and capabilities of future technologies, such as artificial general intelligence, whole brain emulation, and atomically precise manufacturing. These questions are addressed at a higher level of abstraction than is generally the case in cause prioritization research.

Further reading

Bostrom, Nick (2016) Macrostrategy, Bank of England, April 11.

Related entries

crucial consideration | existential risk | global priorities research | longtermism | long-range forecasting | long-term future | trajectory change

  1. Bostrom, Nick (2021) Home page, Nick Bostrom’s Website.

  2. Future of Humanity Institute (2021) Research areas, Future of Humanity Institute.

  3. Duda, Roman (2016) Global priorities research, 80,000 Hours, April (updated July 2018).

We can do better than argmax
Jan_Kulveit · 10 Oct 2022 10:32 UTC
113 points · 36 comments · 10 min read · EA link

[Question] What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?
Max_Daniel · 13 Aug 2020 9:15 UTC
88 points · 56 comments · 1 min read · EA link

The Choice Transition
Owen Cotton-Barratt · 18 Nov 2024 12:32 UTC
49 points · 1 comment · 15 min read · EA link
(strangecities.substack.com)

Long Reflection Reading List
Will Aldred · 24 Mar 2024 16:27 UTC
101 points · 7 comments · 14 min read · EA link

Experimental longtermism: theory needs data
Jan_Kulveit · 15 Mar 2022 10:05 UTC
186 points · 9 comments · 4 min read · EA link

ACS is hiring: why work here and why not
Jan_Kulveit · 23 Oct 2025 9:38 UTC
39 points · 4 comments · 2 min read · EA link

Discussing how to align Transformative AI if it’s developed very soon
elifland · 28 Nov 2022 16:17 UTC
36 points · 0 comments · 28 min read · EA link

My thoughts on nanotechnology strategy research as an EA cause area
Ben Snodin · 2 May 2022 9:41 UTC
137 points · 17 comments · 33 min read · EA link

Video and transcript of talk on “Can goodness compete?”
Joe_Carlsmith · 17 Jul 2025 17:59 UTC
34 points · 4 comments · 34 min read · EA link
(joecarlsmith.substack.com)

Shallow evaluations of longtermist organizations
NunoSempere · 24 Jun 2021 15:31 UTC
193 points · 34 comments · 34 min read · EA link

A case for strategy research: what it is and why we need more of it
SiebeRozendal · 20 Jun 2019 20:18 UTC
70 points · 8 comments · 20 min read · EA link

Cooperating with aliens and AGIs: An ECL explainer
Chi · 24 Feb 2024 22:58 UTC
54 points · 9 comments · 20 min read · EA link

Evidential Cooperation in Large Worlds: Potential Objections & FAQ
Chi · 28 Feb 2024 18:58 UTC
41 points · 5 comments · 18 min read · EA link

What is meta Effective Altruism?
Vaidehi Agarwalla 🔸 · 2 Jun 2021 6:47 UTC
49 points · 12 comments · 5 min read · EA link

Truthful AI
Owen Cotton-Barratt · 20 Oct 2021 15:11 UTC
55 points · 14 comments · 10 min read · EA link

Superintelligent AI is necessary for an amazing future, but far from sufficient
So8res · 31 Oct 2022 21:16 UTC
35 points · 5 comments · 34 min read · EA link

Hard-to-reverse decisions destroy option value
Stefan_Schubert · 17 Mar 2017 17:54 UTC
33 points · 14 comments · 11 min read · EA link

Beyond Maxipok — good reflective governance as a target for action
Owen Cotton-Barratt · 15 Mar 2024 22:22 UTC
49 points · 2 comments · 7 min read · EA link

Everett branches, inter-light cone trade and other alien matters: Appendix to “An ECL explainer”
Chi · 24 Feb 2024 23:09 UTC
26 points · 1 comment · 11 min read · EA link

Hedging Grants And Donations
frib · 19 Dec 2022 16:33 UTC
10 points · 3 comments · 4 min read · EA link

Viatopia and Buy-In
Jordan Arel · 31 Oct 2025 2:59 UTC
7 points · 0 comments · 19 min read · EA link

Longtermist implications of aliens Space-Faring Civilizations—Introduction
Maxime Riché 🔸 · 21 Feb 2025 12:07 UTC
45 points · 12 comments · 6 min read · EA link

Effective altruism in the age of AGI
William_MacAskill · 10 Oct 2025 10:57 UTC
466 points · 76 comments · 20 min read · EA link

AGI and Lock-In
Lukas Finnveden · 29 Oct 2022 1:56 UTC
154 points · 20 comments · 10 min read · EA link
(www.forethought.org)

The History, Epistemology and Strategy of Technological Restraint, and lessons for AI (short essay)
MMMaas · 10 Aug 2022 11:00 UTC
90 points · 6 comments · 9 min read · EA link
(verfassungsblog.de)

Why Viatopia is Important
Jordan Arel · 31 Oct 2025 2:59 UTC
5 points · 0 comments · 20 min read · EA link

How to make the future better (other than by reducing extinction risk)
William_MacAskill · 15 Aug 2025 15:40 UTC
45 points · 3 comments · 3 min read · EA link

The Case for Ontological Longtermism
James Yamada · 21 Oct 2025 16:19 UTC
8 points · 4 comments · 11 min read · EA link

(outdated version) Introduction to Building Cooperative Viatopia: The Case for Longtermist Infrastructure Before AI Builds Everything
Jordan Arel · 21 Oct 2025 11:26 UTC
6 points · 0 comments · 18 min read · EA link

How AI may become deceitful, sycophantic… and lazy
titotal · 7 Oct 2025 14:15 UTC
30 points · 4 comments · 22 min read · EA link
(titotal.substack.com)

3 Stages of Competition for the Long-Term Future
JordanStone · 30 Nov 2025 21:55 UTC
29 points · 7 comments · 25 min read · EA link

(outdated version) Why Viatopia is Important
Jordan Arel · 21 Oct 2025 11:33 UTC
4 points · 0 comments · 18 min read · EA link

Introduction to Building Cooperative Viatopia: The Case for Longtermist Infrastructure Before AI Builds Everything
Jordan Arel · 31 Oct 2025 2:58 UTC
6 points · 0 comments · 19 min read · EA link

A personal take on why you should work at Forethought (maybe)
Lizka · 14 Oct 2025 8:59 UTC
62 points · 17 comments · 9 min read · EA link

Discussions of Longtermism should focus on the problem of Unawareness
Jim Buhler · 20 Oct 2025 13:17 UTC
34 points · 1 comment · 34 min read · EA link

(outdated version) Shortlist of Longtermist Interventions
Jordan Arel · 21 Oct 2025 11:59 UTC
4 points · 0 comments · 14 min read · EA link

Shortlist of Viatopia Interventions
Jordan Arel · 31 Oct 2025 3:00 UTC
10 points · 1 comment · 33 min read · EA link

Investigating the Long Reflection
Yannick_Muehlhaeuser · 24 Jul 2023 16:26 UTC
38 points · 3 comments · 12 min read · EA link

What would adults in the room know about AI risk?
rosehadshar · 20 Nov 2025 9:11 UTC
27 points · 0 comments · 3 min read · EA link

History’s Grandest Projects: Introduction to Macro Strategies for AI Risk, Part 1
Coleman · 20 Jun 2025 17:32 UTC
7 points · 0 comments · 38 min read · EA link