Value lock-in

Last edit: Aug 21, 2022, 5:48 PM by Pablo

Value lock-in is a state in which the values determining the long-term future of Earth-originating life can no longer be altered.

Further reading

MacAskill, William (2022) What We Owe the Future, New York: Basic Books, ch. 4.

Riedel, Jess (2021) Value lock-in notes, Jess Riedel’s Website, July 25.

Related entries

dystopia | hinge of history | longtermism | moral advocacy

AGI and Lock-In

Lukas Finnveden · Oct 29, 2022, 1:56 AM
153 points
20 comments · 10 min read · EA link
(docs.google.com)

Systemic Cascading Risks: Relevance in Longtermism & Value Lock-In

Richard R · Sep 2, 2022, 7:53 AM
59 points
10 comments · 16 min read · EA link

[Question] Odds of recovering values after collapse?

Will Aldred · Jul 24, 2022, 6:20 PM
66 points
13 comments · 3 min read · EA link

Weak point in “most important century”: lock-in

Holden Karnofsky · Nov 11, 2021, 10:02 PM
44 points
0 comments · 9 min read · EA link

An entire category of risks is undervalued by EA [Summary of previous forum post]

Richard R · Sep 5, 2022, 3:07 PM
79 points
5 comments · 5 min read · EA link

What is value lock-in? (YouTube video)

Jeroen Willems🔸 · Oct 27, 2022, 2:03 PM
23 points
2 comments · 4 min read · EA link

Value lock-in is happening *now*

Isaac King · Oct 15, 2024, 1:40 AM
12 points
17 comments · 4 min read · EA link

Could it be a (bad) lock-in to replace factory farming with alternative protein?

Fai · Sep 10, 2022, 4:24 PM
86 points
38 comments · 9 min read · EA link

[Question] Do you think decreasing the consumption of animals is good/bad? Think again?

Vasco Grilo🔸 · May 27, 2023, 8:22 AM
89 points
41 comments · 5 min read · EA link

C.S. Lewis on Value Lock-In

calebo · Dec 18, 2021, 8:00 PM
16 points
0 comments · 3 min read · EA link

Future benefits of mitigating food shocks caused by abrupt sunlight reduction scenarios

Vasco Grilo🔸 · Mar 4, 2023, 4:22 PM
20 points
0 comments · 28 min read · EA link

Famine deaths due to the climatic effects of nuclear war

Vasco Grilo🔸 · Oct 14, 2023, 12:05 PM
40 points
21 comments · 66 min read · EA link

[Link Post] If We Don’t End Factory Farming Soon, It Might Be Here Forever.

BrianK · Dec 7, 2022, 11:20 AM
90 points
13 comments · 1 min read · EA link
(www.forbes.com)

[Cause Exploration Prizes] Dynamic democracy to guard against authoritarian lock-in

Open Philanthropy · Aug 24, 2022, 10:53 AM
12 points
1 comment · 12 min read · EA link

[Question] Do you worry about totalitarian regimes using AI Alignment technology to create AGI that subscribe to their values?

diodio_yang · Feb 28, 2023, 6:12 PM
25 points
12 comments · 2 min read · EA link

Beyond Maxipok — good reflective governance as a target for action

Owen Cotton-Barratt · Mar 15, 2024, 10:22 PM
43 points
2 comments · 7 min read · EA link

Long Reflection Reading List

Will Aldred · Mar 24, 2024, 4:27 PM
92 points
7 comments · 14 min read · EA link

[Question] Will the vast majority of technological progress happen in the longterm future?

Vasco Grilo🔸 · Jul 8, 2023, 8:40 AM
8 points
0 comments · 2 min read · EA link

Partial value takeover without world takeover

Katja_Grace · Apr 18, 2024, 3:00 AM
24 points
2 comments · 1 min read · EA link

All Possible Views About Humanity’s Future Are Wild

Holden Karnofsky · Jul 13, 2021, 4:57 PM
217 points
47 comments · 8 min read · EA link
(www.cold-takes.com)

Cosmic AI safety

Magnus Vinding · Dec 6, 2024, 10:32 PM
23 points
5 comments · 6 min read · EA link

Future Matters #6: FTX collapse, value lock-in, and counterarguments to AI x-risk

Pablo · Dec 30, 2022, 1:10 PM
58 points
2 comments · 21 min read · EA link

“Aligned with who?” Results of surveying 1,000 US participants on AI values

Holly Morgan · Mar 21, 2023, 10:07 PM
41 points
0 comments · 2 min read · EA link
(www.lesswrong.com)

Increase in future potential due to mitigating food shocks caused by abrupt sunlight reduction scenarios

Vasco Grilo🔸 · Mar 28, 2023, 7:43 AM
12 points
2 comments · 8 min read · EA link

Questions about Value Lock-in, Paternalism, and Empowerment

Sam Brown · Nov 16, 2022, 3:33 PM
4 points
0 comments · 1 min read · EA link

Are we living at the most influential time in history?

William_MacAskill · Sep 3, 2019, 4:55 AM
204 points
147 comments · 24 min read · EA link

Future Matters #7: AI timelines, AI skepticism, and lock-in

Pablo · Feb 3, 2023, 11:47 AM
54 points
0 comments · 17 min read · EA link

Balancing safety and waste

Daniel_Friedrich · Mar 17, 2024, 10:57 AM
6 points
0 comments · 7 min read · EA link

Key characteristics for evaluating future global governance institutions

Juan Gil · Oct 4, 2021, 7:44 PM
24 points
0 comments · 10 min read · EA link

What We Owe The Future is out today

William_MacAskill · Aug 16, 2022, 3:13 PM
293 points
68 comments · 2 min read · EA link

Permanent Societal Improvements

Larks · Sep 6, 2015, 1:30 AM
11 points
10 comments · 4 min read · EA link

Will Values and Competition Decouple?

interstice · Sep 28, 2022, 4:32 PM
6 points
0 comments · 17 min read · EA link

A new place to discuss cognitive science, ethics and human alignment

Daniel_Friedrich · Nov 4, 2022, 2:34 PM
9 points
1 comment · 2 min read · EA link
(www.facebook.com)

Promoting compassionate longtermism

jonleighton · Dec 7, 2022, 2:26 PM
117 points
5 comments · 12 min read · EA link

[Question] Besides evidence or logical arguments, how do you invite people into EA values?

ColinAitken · Jan 7, 2022, 12:14 AM
8 points
1 comment · 2 min read · EA link

AI safety and consciousness research: A brainstorm

Daniel_Friedrich · Mar 15, 2023, 2:33 PM
11 points
1 comment · 9 min read · EA link

What values will control the Future? Overview, conclusion, and directions for future work

Jim Buhler · Jul 18, 2023, 4:11 PM
27 points
0 comments · 1 min read · EA link

Predicting what future people value: A terse introduction to Axiological Futurism

Jim Buhler · Mar 24, 2023, 7:15 PM
62 points
10 comments · 2 min read · EA link

An aspirationally comprehensive typology of future locked-in scenarios

Milan Weibel🔹 · Apr 3, 2023, 2:11 AM
12 points
0 comments · 4 min read · EA link

[Optional] All Possible Views About Humanity’s Future Are Wild (Italian translation)

EA Italy · Jan 17, 2023, 2:59 PM
1 point
0 comments · 8 min read · EA link

The Grabby Values Selection Thesis: What values do space-faring civilizations plausibly have?

Jim Buhler · May 6, 2023, 7:28 PM
47 points
12 comments · 4 min read · EA link

Some governance research ideas to prevent malevolent control over AGI and why this might matter a hell of a lot

Jim Buhler · May 23, 2023, 1:07 PM
63 points
5 comments · 16 min read · EA link

Origin and alignment of goals, meaning, and morality

FalseCogs · Aug 24, 2023, 2:05 PM
1 point
2 comments · 35 min read · EA link

Assessing the Dangerousness of Malevolent Actors in AGI Governance: A Preliminary Exploration

Callum Hinchcliffe · Oct 14, 2023, 9:18 PM
28 points
4 comments · 9 min read · EA link

Population After a Catastrophe

Stan Pinsent · Oct 2, 2023, 4:06 PM
33 points
12 comments · 14 min read · EA link

Don’t leave your fingerprints on the future

So8res · Oct 8, 2022, 12:35 AM
93 points
4 comments · 1 min read · EA link