
Value lock-in

Last edit: 21 Aug 2022 17:48 UTC by Pablo

Value lock-in is a state in which the values determining the long-term future of Earth-originating life can no longer be altered.

Further reading

MacAskill, William (2022) What We Owe the Future, New York: Basic Books, ch. 4.

Riedel, Jess (2021) Value lock-in notes, Jess Riedel’s Website, July 25.

Related entries

dystopia | hinge of history | longtermism | moral advocacy

AGI and Lock-In

Lukas Finnveden · 29 Oct 2022 1:56 UTC
154 points
20 comments · 10 min read · EA link
(www.forethought.org)

[Link Post] If We Don’t End Factory Farming Soon, It Might Be Here Forever.

BrianK · 7 Dec 2022 11:20 UTC
93 points
13 comments · 1 min read · EA link
(www.forbes.com)

What is value lock-in? (YouTube video)

Jeroen Willems🔸 · 27 Oct 2022 14:03 UTC
23 points
2 comments · 4 min read · EA link

An entire category of risks is undervalued by EA [Summary of previous forum post]

Richard R · 5 Sep 2022 15:07 UTC
78 points
5 comments · 5 min read · EA link

[Question] Odds of recovering values after collapse?

Will Aldred · 24 Jul 2022 18:20 UTC
66 points
13 comments · 3 min read · EA link

Systemic Cascading Risks: Relevance in Longtermism & Value Lock-In

Richard R · 2 Sep 2022 7:53 UTC
58 points
10 comments · 16 min read · EA link

Value lock-in is happening *now*

Isaac King · 15 Oct 2024 1:40 UTC
12 points
17 comments · 4 min read · EA link

Weak point in “most important century”: lock-in

Holden Karnofsky · 11 Nov 2021 22:02 UTC
44 points
0 comments · 9 min read · EA link

Could it be a (bad) lock-in to replace factory farming with alternative protein?

Fai · 10 Sep 2022 16:24 UTC
86 points
38 comments · 9 min read · EA link

All Possible Views About Humanity’s Future Are Wild

Holden Karnofsky · 13 Jul 2021 16:57 UTC
219 points
48 comments · 8 min read · EA link
(www.cold-takes.com)

Famine deaths due to the climatic effects of nuclear war

Vasco Grilo🔸 · 14 Oct 2023 12:05 UTC
40 points
21 comments · 66 min read · EA link

Cosmic AI safety

Magnus Vinding · 6 Dec 2024 22:32 UTC
24 points
5 comments · 6 min read · EA link

Partial value takeover without world takeover

Katja_Grace · 18 Apr 2024 3:00 UTC
24 points
2 comments · 3 min read · EA link

Beyond Maxipok — good reflective governance as a target for action

Owen Cotton-Barratt · 15 Mar 2024 22:22 UTC
49 points
2 comments · 7 min read · EA link

Questions about Value Lock-in, Paternalism, and Empowerment

Sam Brown · 16 Nov 2022 15:33 UTC
4 points
0 comments · 12 min read · EA link
(sambrown.eu)

Future benefits of mitigating food shocks caused by abrupt sunlight reduction scenarios

Vasco Grilo🔸 · 4 Mar 2023 16:22 UTC
20 points
0 comments · 28 min read · EA link

Future Matters #6: FTX collapse, value lock-in, and counterarguments to AI x-risk

Pablo · 30 Dec 2022 13:10 UTC
58 points
2 comments · 21 min read · EA link

[Question] Do you worry about totalitarian regimes using AI Alignment technology to create AGI that subscribe to their values?

diodio_yang · 28 Feb 2023 18:12 UTC
25 points
12 comments · 2 min read · EA link

Are we living at the most influential time in history?

William_MacAskill · 3 Sep 2019 4:55 UTC
205 points
147 comments · 24 min read · EA link

Increase in future potential due to mitigating food shocks caused by abrupt sunlight reduction scenarios

Vasco Grilo🔸 · 28 Mar 2023 7:43 UTC
12 points
2 comments · 8 min read · EA link

[Question] Will the vast majority of technological progress happen in the longterm future?

Vasco Grilo🔸 · 8 Jul 2023 8:40 UTC
8 points
0 comments · 2 min read · EA link

Long Reflection Reading List

Will Aldred · 24 Mar 2024 16:27 UTC
101 points
7 comments · 14 min read · EA link

[Question] Do you think decreasing the consumption of animals is good/bad? Think again?

Vasco Grilo🔸 · 27 May 2023 8:22 UTC
90 points
42 comments · 5 min read · EA link

[Cause Exploration Prizes] Dynamic democracy to guard against authoritarian lock-in

Coefficient Giving · 24 Aug 2022 10:53 UTC
12 points
1 comment · 12 min read · EA link

C.S. Lewis on Value Lock-In

calebo · 18 Dec 2021 20:00 UTC
16 points
0 comments · 3 min read · EA link

Future Matters #7: AI timelines, AI skepticism, and lock-in

Pablo · 3 Feb 2023 11:47 UTC
54 points
0 comments · 17 min read · EA link

“Aligned with who?” Results of surveying 1,000 US participants on AI values

Holly Morgan · 21 Mar 2023 22:07 UTC
41 points
0 comments · 2 min read · EA link
(www.lesswrong.com)

Discussions of Longtermism should focus on the problem of Unawareness

Jim Buhler · 20 Oct 2025 13:17 UTC
34 points
1 comment · 34 min read · EA link

The Grabby Values Selection Thesis: What values do space-faring civilizations plausibly have?

Jim Buhler · 6 May 2023 19:28 UTC
52 points
12 comments · 4 min read · EA link

Balancing safety and waste

Daniel_Friedrich · 17 Mar 2024 10:57 UTC
6 points
0 comments · 8 min read · EA link

Viatopia and Buy-In

Jordan Arel · 31 Oct 2025 2:59 UTC
7 points
0 comments · 19 min read · EA link

Is Optimal Reflection Competitive with Extinction Risk Reduction? - Requesting Reviewers

Jordan Arel · 29 Jun 2025 5:13 UTC
18 points
1 comment · 11 min read · EA link

[Question] Besides evidence or logical arguments, how do you invite people into EA values?

ColinAitken · 7 Jan 2022 0:14 UTC
8 points
1 comment · 2 min read · EA link

Deep Democracy as a promising target for positive AGI futures

tylermjohn · 20 Aug 2025 12:18 UTC
62 points
32 comments · 3 min read · EA link

(outdated version) Why Viatopia is Important

Jordan Arel · 21 Oct 2025 11:33 UTC
4 points
0 comments · 18 min read · EA link

Christian homeschoolers in the year 3000

Buck · 17 Sep 2025 17:09 UTC
19 points
1 comment · 7 min read · EA link

How to make the future better (other than by reducing extinction risk)

William_MacAskill · 15 Aug 2025 15:40 UTC
45 points
3 comments · 3 min read · EA link

Promoting compassionate longtermism

jonleighton · 7 Dec 2022 14:26 UTC
117 points
5 comments · 12 min read · EA link

A new place to discuss cognitive science, ethics and human alignment

Daniel_Friedrich · 4 Nov 2022 14:34 UTC
9 points
1 comment · 2 min read · EA link
(www.facebook.com)

What values will control the Future? Overview, conclusion, and directions for future work

Jim Buhler · 18 Jul 2023 16:11 UTC
28 points
0 comments · 1 min read · EA link

Will Values and Competition Decouple?

interstice · 28 Sep 2022 16:32 UTC
6 points
0 comments · 17 min read · EA link

Don’t leave your fingerprints on the future

So8res · 8 Oct 2022 0:35 UTC
95 points
4 comments · 4 min read · EA link

Predicting what future people value: A terse introduction to Axiological Futurism

Jim Buhler · 24 Mar 2023 19:15 UTC
63 points
10 comments · 2 min read · EA link

Gradual Disempowerment: Concrete Research Projects

Raymond D · 29 May 2025 18:58 UTC
20 points
1 comment · 10 min read · EA link

[Optional] All Possible Views About Humanity’s Future Are Wild (Italian translation)

EA Italy · 17 Jan 2023 14:59 UTC
1 point
0 comments · 8 min read · EA link

Population After a Catastrophe

Stan Pinsent · 2 Oct 2023 16:06 UTC
33 points
12 comments · 14 min read · EA link

Human Empowerment versus the Longtermist Imperium?

Jackson Wagner · 21 Oct 2025 10:24 UTC
20 points
2 comments · 21 min read · EA link

Introduction to Building Cooperative Viatopia: The Case for Longtermist Infrastructure Before AI Builds Everything

Jordan Arel · 31 Oct 2025 2:58 UTC
6 points
0 comments · 19 min read · EA link

Why Viatopia is Important

Jordan Arel · 31 Oct 2025 2:59 UTC
5 points
0 comments · 20 min read · EA link

Some initial musing on the politics of longtermist trajectory change

GideonF · 26 Jun 2025 7:16 UTC
6 points
0 comments · 12 min read · EA link
(futerman.substack.com)

Some governance research ideas to prevent malevolent control over AGI and why this might matter a hell of a lot

Jim Buhler · 23 May 2023 13:07 UTC
64 points
5 comments · 16 min read · EA link

Shortlist of Viatopia Interventions

Jordan Arel · 31 Oct 2025 3:00 UTC
10 points
1 comment · 33 min read · EA link

Permanent Societal Improvements

Larks · 6 Sep 2015 1:30 UTC
11 points
10 comments · 4 min read · EA link

What failure looks like for animals

Alistair Stewart · 3 Sep 2025 17:55 UTC
69 points
5 comments · 5 min read · EA link

Origin and alignment of goals, meaning, and morality

FalseCogs · 24 Aug 2023 14:05 UTC
1 point
2 comments · 35 min read · EA link

What We Owe The Future is out today

William_MacAskill · 16 Aug 2022 15:13 UTC
293 points
68 comments · 2 min read · EA link

Assessing the Dangerousness of Malevolent Actors in AGI Governance: A Preliminary Exploration

Callum Hinchcliffe · 14 Oct 2023 21:18 UTC
28 points
4 comments · 9 min read · EA link

An aspirationally comprehensive typology of future locked-in scenarios

Milan Weibel🔹 · 3 Apr 2023 2:11 UTC
12 points
0 comments · 4 min read · EA link

Preventing Animal Suffering Lock-in: Why Economic Transitions Matter

Karen Singleton · 28 Jul 2025 21:55 UTC
43 points
4 comments · 10 min read · EA link

(outdated version) Shortlist of Longtermist Interventions

Jordan Arel · 21 Oct 2025 11:59 UTC
4 points
0 comments · 14 min read · EA link

AI safety and consciousness research: A brainstorm

Daniel_Friedrich · 15 Mar 2023 14:33 UTC
11 points
1 comment · 9 min read · EA link

Key characteristics for evaluating future global governance institutions

Juan Gil · 4 Oct 2021 19:44 UTC
24 points
0 comments · 10 min read · EA link