Value lock-in

Last edit: 21 Aug 2022 17:48 UTC by Pablo

Value lock-in is a state in which the values determining the long-term future of Earth-originating life can no longer be altered.

Further reading

MacAskill, William (2022) What We Owe the Future, New York: Basic Books, ch. 4.

Riedel, Jess (2021) Value lock-in notes, Jess Riedel’s Website, July 25.

Related entries

dystopia | hinge of history | longtermism | moral advocacy

AGI and Lock-In

Lukas Finnveden · 29 Oct 2022 1:56 UTC
146 points
20 comments · 10 min read · EA link
(docs.google.com)

[Question] Odds of recovering values after collapse?

Will Aldred · 24 Jul 2022 18:20 UTC
65 points
13 comments · 3 min read · EA link

Systemic Cascading Risks: Relevance in Longtermism & Value Lock-In

Richard R · 2 Sep 2022 7:53 UTC
56 points
10 comments · 16 min read · EA link

What is value lock-in? (YouTube video)

Jeroen Willems🔸 · 27 Oct 2022 14:03 UTC
23 points
2 comments · 4 min read · EA link

Weak point in “most important century”: lock-in

Holden Karnofsky · 11 Nov 2021 22:02 UTC
42 points
0 comments · 9 min read · EA link

Could it be a (bad) lock-in to replace factory farming with alternative protein?

Fai · 10 Sep 2022 16:24 UTC
85 points
38 comments · 9 min read · EA link

Value lock-in is happening *now*

Isaac King · 15 Oct 2024 1:40 UTC
12 points
17 comments · 4 min read · EA link

An entire category of risks is undervalued by EA [Summary of previous forum post]

Richard R · 5 Sep 2022 15:07 UTC
76 points
5 comments · 5 min read · EA link

Are we living at the most influential time in history?

William_MacAskill · 3 Sep 2019 4:55 UTC
204 points
147 comments · 24 min read · EA link

C.S. Lewis on Value Lock-In

calebo · 18 Dec 2021 20:00 UTC
16 points
0 comments · 3 min read · EA link

[Cause Exploration Prizes] Dynamic democracy to guard against authoritarian lock-in

Open Philanthropy · 24 Aug 2022 10:53 UTC
12 points
1 comment · 12 min read · EA link

Questions about Value Lock-in, Paternalism, and Empowerment

Sam Brown · 16 Nov 2022 15:33 UTC
4 points
0 comments · 1 min read · EA link

[Link Post] If We Don’t End Factory Farming Soon, It Might Be Here Forever.

BrianK · 7 Dec 2022 11:20 UTC
90 points
13 comments · 1 min read · EA link
(www.forbes.com)

All Possible Views About Humanity’s Future Are Wild

Holden Karnofsky · 13 Jul 2021 16:57 UTC
217 points
47 comments · 8 min read · EA link
(www.cold-takes.com)

Future Matters #6: FTX collapse, value lock-in, and counterarguments to AI x-risk

Pablo · 30 Dec 2022 13:10 UTC
58 points
2 comments · 21 min read · EA link

Future Matters #7: AI timelines, AI skepticism, and lock-in

Pablo · 3 Feb 2023 11:47 UTC
54 points
0 comments · 17 min read · EA link

Future benefits of mitigating food shocks caused by abrupt sunlight reduction scenarios

Vasco Grilo🔸 · 4 Mar 2023 16:22 UTC
20 points
0 comments · 28 min read · EA link

[Question] Do you worry about totalitarian regimes using AI Alignment technology to create AGI that subscribe to their values?

diodio_yang · 28 Feb 2023 18:12 UTC
25 points
12 comments · 2 min read · EA link

[Question] Will the vast majority of technological progress happen in the longterm future?

Vasco Grilo🔸 · 8 Jul 2023 8:40 UTC
8 points
0 comments · 2 min read · EA link

“Aligned with who?” Results of surveying 1,000 US participants on AI values

Holly Morgan · 21 Mar 2023 22:07 UTC
40 points
0 comments · 2 min read · EA link
(www.lesswrong.com)

Increase in future potential due to mitigating food shocks caused by abrupt sunlight reduction scenarios

Vasco Grilo🔸 · 28 Mar 2023 7:43 UTC
12 points
2 comments · 8 min read · EA link

[Question] Do you think decreasing the consumption of animals is good/bad? Think again?

Vasco Grilo🔸 · 27 May 2023 8:22 UTC
89 points
41 comments · 5 min read · EA link

Famine deaths due to the climatic effects of nuclear war

Vasco Grilo🔸 · 14 Oct 2023 12:05 UTC
40 points
21 comments · 66 min read · EA link

Beyond Maxipok — good reflective governance as a target for action

Owen Cotton-Barratt · 15 Mar 2024 22:22 UTC
43 points
2 comments · 7 min read · EA link

Long Reflection Reading List

Will Aldred · 24 Mar 2024 16:27 UTC
92 points
7 comments · 14 min read · EA link

Partial value takeover without world takeover

Katja_Grace · 18 Apr 2024 3:00 UTC
24 points
2 comments · 1 min read · EA link

Cosmic AI safety

Magnus Vinding · 6 Dec 2024 22:32 UTC
22 points
3 comments · 6 min read · EA link

AI safety and consciousness research: A brainstorm

Daniel_Friedrich · 15 Mar 2023 14:33 UTC
11 points
1 comment · 9 min read · EA link

Assessing the Dangerousness of Malevolent Actors in AGI Governance: A Preliminary Exploration

Callum Hinchcliffe · 14 Oct 2023 21:18 UTC
28 points
4 comments · 9 min read · EA link

Population After a Catastrophe

Stan Pinsent · 2 Oct 2023 16:06 UTC
33 points
12 comments · 14 min read · EA link

What values will control the Future? Overview, conclusion, and directions for future work

Jim Buhler · 18 Jul 2023 16:11 UTC
25 points
0 comments · 1 min read · EA link

Predicting what future people value: A terse introduction to Axiological Futurism

Jim Buhler · 24 Mar 2023 19:15 UTC
62 points
10 comments · 2 min read · EA link

Key characteristics for evaluating future global governance institutions

Juan Gil · 4 Oct 2021 19:44 UTC
23 points
0 comments · 10 min read · EA link

Permanent Societal Improvements

Larks · 6 Sep 2015 1:30 UTC
11 points
10 comments · 4 min read · EA link

An aspirationally comprehensive typology of future locked-in scenarios

Milan Weibel🔹 · 3 Apr 2023 2:11 UTC
12 points
0 comments · 4 min read · EA link

Will Values and Competition Decouple?

interstice · 28 Sep 2022 16:32 UTC
6 points
0 comments · 17 min read · EA link

Don’t leave your fingerprints on the future

So8res · 8 Oct 2022 0:35 UTC
93 points
4 comments · 1 min read · EA link

A new place to discuss cognitive science, ethics and human alignment

Daniel_Friedrich · 4 Nov 2022 14:34 UTC
9 points
1 comment · 2 min read · EA link
(www.facebook.com)

Promoting compassionate longtermism

jonleighton · 7 Dec 2022 14:26 UTC
117 points
5 comments · 12 min read · EA link

[Optional] All possible conclusions about humanity’s future are incredible (in Italian)

EA Italy · 17 Jan 2023 14:59 UTC
1 point
0 comments · 8 min read · EA link

[Question] Besides evidence or logical arguments, how do you invite people into EA values?

ColinAitken · 7 Jan 2022 0:14 UTC
7 points
1 comment · 2 min read · EA link

The Grabby Values Selection Thesis: What values do space-faring civilizations plausibly have?

Jim Buhler · 6 May 2023 19:28 UTC
47 points
12 comments · 4 min read · EA link

What We Owe The Future is out today

William_MacAskill · 16 Aug 2022 15:13 UTC
293 points
68 comments · 2 min read · EA link

Some governance research ideas to prevent malevolent control over AGI and why this might matter a hell of a lot

Jim Buhler · 23 May 2023 13:07 UTC
63 points
5 comments · 16 min read · EA link

Origin and alignment of goals, meaning, and morality

FalseCogs · 24 Aug 2023 14:05 UTC
1 point
2 comments · 35 min read · EA link

Balancing safety and waste

Daniel_Friedrich · 17 Mar 2024 10:57 UTC
6 points
0 comments · 7 min read · EA link