
Center on Long-Term Risk


The Center on Long-Term Risk (CLR) is a research institute that aims to mitigate s-risks from advanced AI. Its research agenda focuses on encouraging cooperative behavior among transformative AI systems and avoiding conflict between them.[1]

History

CLR was founded in July 2013 as the Foundational Research Institute;[2] it adopted its current name in March 2020.[3] CLR is part of the Effective Altruism Foundation.

Funding

As of June 2022, CLR had received over $1.2 million in funding from the Survival and Flourishing Fund.[4]

Further reading

Rice, Issa (2018) Timeline of Foundational Research Institute, Timelines Wiki.

Torges, Stefan (2022) CLR’s annual report 2021, Effective Altruism Forum, February 26.

External links

Center on Long-Term Risk. Official website.

Apply for a job.

Related entries

AI risk | cooperative AI | Cooperative AI Foundation | Effective Altruism Foundation | s-risk

  1. ^

    Clifton, Jesse (2019) Cooperation, conflict, and transformative artificial intelligence: a research agenda, Center on Long-Term Risk.

  2. ^

    Center on Long-Term Risk (2020) Transparency, Center on Long-Term Risk, November.

  3. ^

    Vollmer, Jonas (2020) EAF/FRI are now the Center on Long-Term Risk (CLR), Effective Altruism Foundation, March 6.

  4. ^

    Survival and Flourishing Fund (2021) SFF-2021-H1 S-process recommendations announcement, Survival and Flourishing Fund.

Posts tagged Center on Long-Term Risk

Effective Altruism Foundation: Plans for 2020
Jonas V · Dec 23, 2019 · 82 points · 13 comments · 15 min read

[Link] EAF Research agenda: “Cooperation, Conflict, and Transformative Artificial Intelligence”
stefan.torges · Jan 17, 2020 · 64 points · 0 comments · 1 min read

Cause prioritization for downside-focused value systems
Lukas_Gloor · Jan 31, 2018 · 76 points · 11 comments · 48 min read

Shallow evaluations of longtermist organizations
NunoSempere · Jun 24, 2021 · 192 points · 34 comments · 34 min read

Reducing long-term risks from malevolent actors
David_Althaus · Apr 29, 2020 · 344 points · 93 comments · 37 min read

Center on Long-Term Risk: 2021 Plans & 2020 Review
stefan.torges · Dec 8, 2020 · 87 points · 3 comments · 13 min read

[Link post] Coordination challenges for preventing AI conflict
stefan.torges · Mar 9, 2021 · 58 points · 0 comments · 1 min read (longtermrisk.org)

Takeaways from EAF’s Hiring Round
stefan.torges · Nov 19, 2018 · 112 points · 22 comments · 16 min read

Apply to CLR as a researcher or summer research fellow!
Chi · Feb 1, 2022 · 62 points · 5 comments · 10 min read

CLR’s Annual Report 2021
stefan.torges · Feb 26, 2022 · 79 points · 0 comments · 12 min read

EAF/FRI are now the Center on Long-Term Risk (CLR)
Jonas V · Mar 6, 2020 · 85 points · 11 comments · 2 min read

Why the Irreducible Normativity Wager (Mostly) Fails
Lukas_Gloor · Jun 14, 2020 · 26 points · 13 comments · 10 min read

Lukas_Gloor’s Quick takes
Lukas_Gloor · Jul 27, 2020 · 6 points · 31 comments · 1 min read

Launching the EAF Fund
stefan.torges · Nov 28, 2018 · 60 points · 14 comments · 4 min read

2021 AI Alignment Literature Review and Charity Comparison
Larks · Dec 23, 2021 · 176 points · 18 comments · 73 min read

Review of Fundraising Activities of EAF in 2018
stefan.torges · Jun 4, 2019 · 49 points · 3 comments · 8 min read

Assessing the state of AI R&D in the US, China, and Europe – Part 1: Output indicators
stefan.torges · Nov 1, 2019 · 21 points · 0 comments · 14 min read

First S-Risk Intro Seminar
stefan.torges · Dec 8, 2020 · 70 points · 2 comments · 1 min read

How Europe might matter for AI governance
stefan.torges · Jul 12, 2019 · 52 points · 13 comments · 8 min read

Multiverse-wide cooperation in a nutshell
Lukas_Gloor · Nov 2, 2017 · 84 points · 10 comments · 16 min read

Why I think the Foundational Research Institute should rethink its approach
MikeJohnson · Jul 20, 2017 · 45 points · 76 comments · 20 min read

Ingredients for creating disruptive research teams
stefan.torges · May 16, 2019 · 159 points · 17 comments · 54 min read

Against Irreducible Normativity
Lukas_Gloor · Jun 9, 2020 · 48 points · 22 comments · 33 min read

Effective Altruism Foundation: Plans for 2019
Jonas V · Dec 4, 2018 · 52 points · 2 comments · 6 min read

Descriptive Population Ethics and Its Relevance for Cause Prioritization
David_Althaus · Apr 3, 2018 · 66 points · 8 comments · 13 min read

Center on Long-Term Risk: 2023 Fundraiser
stefan.torges · Dec 9, 2022 · 169 points · 4 comments · 13 min read

Why Realists and Anti-Realists Disagree
Lukas_Gloor · Jun 5, 2020 · 62 points · 28 comments · 24 min read

Working at EA organizations series: Effective Altruism Foundation
SoerenMind · Oct 26, 2015 · 6 points · 2 comments · 2 min read

Case studies of self-governance to reduce technology risk
jia · Apr 6, 2021 · 55 points · 6 comments · 7 min read

Stefan Torges: Ingredients for building disruptive research teams
EA Global · Oct 18, 2019 · 8 points · 1 comment · 1 min read (www.youtube.com)

What Is Moral Realism?
Lukas_Gloor · May 22, 2018 · 72 points · 30 comments · 31 min read

List of EA funding opportunities
MichaelA🔸 · Oct 26, 2021 · 174 points · 42 comments · 6 min read

Metaethical Fanaticism (Dialogue)
Lukas_Gloor · Jun 17, 2020 · 35 points · 10 comments · 15 min read

What 2026 looks like (Daniel’s median future)
kokotajlod · Aug 7, 2021 · 38 points · 1 comment · 2 min read (www.lesswrong.com)

Incentivizing forecasting via social media
David_Althaus · Dec 16, 2020 · 70 points · 19 comments · 15 min read

First application round of the EAF Fund
stefan.torges · Jul 6, 2019 · 78 points · 4 comments · 3 min read

EA reading list: suffering-focused ethics
richard_ngo · Aug 3, 2020 · 43 points · 3 comments · 1 min read

2020 AI Alignment Literature Review and Charity Comparison
Larks · Dec 21, 2020 · 155 points · 16 comments · 68 min read

Announcing the 2023 CLR Summer Research Fellowship
stefan.torges · Mar 17, 2023 · 81 points · 0 comments · 3 min read

2024 S-risk Intro Fellowship
Center on Long-Term Risk · Oct 12, 2023 · 90 points · 2 comments · 1 min read

Brian Tomasik on cooperation and peace
Vasco Grilo🔸 · May 20, 2024 · 27 points · 1 comment · 4 min read (reducing-suffering.org)

Leadership change at the Center on Long-Term Risk
JesseClifton · Jan 31, 2025 · 161 points · 7 comments · 3 min read

Announcing the CLR Foundations Course and CLR S-Risk Seminars
James Faville · Nov 19, 2024 · 52 points · 2 comments · 3 min read

Replicating and extending the grabby aliens model
Tristan Cook · Apr 23, 2022 · 137 points · 27 comments · 51 min read

How important are accurate AI timelines for the optimal spending schedule on AI risk interventions?
Tristan Cook · Dec 16, 2022 · 30 points · 0 comments · 6 min read

[Open position] S-Risk Community Manager at CLR
stefan.torges · Sep 22, 2022 · 58 points · 0 comments · 1 min read

2016 AI Risk Literature Review and Charity Comparison
Larks · Dec 13, 2016 · 57 points · 12 comments · 28 min read

CLR Summer Research Fellowship 2024
Center on Long-Term Risk · Feb 15, 2024 · 89 points · 2 comments · 8 min read

Neartermists should consider AGI timelines in their spending decisions
Tristan Cook · Jul 26, 2022 · 68 points · 4 comments · 4 min read

2018 AI Alignment Literature Review and Charity Comparison
Larks · Dec 18, 2018 · 118 points · 28 comments · 63 min read

The optimal timing of spending on AGI safety work; why we should probably be spending more now
Tristan Cook · Oct 24, 2022 · 92 points · 12 comments · 36 min read

Narration: Reducing long-term risks from malevolent actors
D0TheMath · Jul 15, 2021 · 23 points · 0 comments · 1 min read (anchor.fm)

2019 AI Alignment Literature Review and Charity Comparison
Larks · Dec 19, 2019 · 147 points · 28 comments · 62 min read

EAF’s ballot initiative doubled Zurich’s development aid
Jonas V · Jan 13, 2020 · 309 points · 23 comments · 12 min read

Expression of Interest: Director of Operations at the Center on Long-term Risk
Amrit Sidhu-Brar 🔸 · Jan 25, 2024 · 55 points · 0 comments · 6 min read

Center on Long-Term Risk: Annual review and fundraiser 2023
Center on Long-Term Risk · Dec 13, 2023 · 78 points · 3 comments · 4 min read

Beginner’s guide to reducing s-risks [link-post]
Center on Long-Term Risk · Oct 17, 2023 · 129 points · 3 comments · 3 min read (longtermrisk.org)

S-risk Intro Fellowship
stefan.torges · Dec 20, 2021 · 52 points · 1 comment · 1 min read