
Center on Long-Term Risk


The Center on Long-Term Risk (CLR) is a research institute that aims to mitigate s-risks from advanced AI. Its research agenda focuses on encouraging cooperative behavior in, and avoiding conflict between, transformative AI systems.[1]

History

CLR was founded in July 2013 as the Foundational Research Institute;[2] it adopted its current name in March 2020.[3] CLR is part of the Effective Altruism Foundation.

Funding

As of June 2022, CLR has received over $1.2 million in funding from the Survival and Flourishing Fund.[4]

Further reading

Rice, Issa (2018) Timeline of Foundational Research Institute, Timelines Wiki.

Torges, Stefan (2022) CLR’s annual report 2021, Effective Altruism Forum, February 26.

External links

Center on Long-Term Risk. Official website.

Apply for a job.

Related entries

AI risk | cooperative AI | Cooperative AI Foundation | Effective Altruism Foundation | s-risk

  1. ^
  2. ^ Center on Long-Term Risk (2020) Transparency, Center on Long-Term Risk, November.
  3. ^ Vollmer, Jonas (2020) EAF/FRI are now the Center on Long-Term Risk (CLR), Effective Altruism Foundation, March 6.
  4. ^ Survival and Flourishing Fund (2020) SFF-2021-H1 S-process recommendations announcement, Survival and Flourishing Fund.

Tagged posts

Effective Altruism Foundation: Plans for 2020
Jonas V · 23 Dec 2019 11:51 UTC · 82 points · 13 comments · 15 min read

[Link] EAF Research agenda: “Cooperation, Conflict, and Transformative Artificial Intelligence”
stefan.torges · 17 Jan 2020 13:28 UTC · 64 points · 0 comments · 1 min read

Cause prioritization for downside-focused value systems
Lukas_Gloor · 31 Jan 2018 14:47 UTC · 75 points · 10 comments · 48 min read

Shallow evaluations of longtermist organizations
NunoSempere · 24 Jun 2021 15:31 UTC · 192 points · 34 comments · 34 min read

Reducing long-term risks from malevolent actors
David_Althaus · 29 Apr 2020 8:55 UTC · 341 points · 93 comments · 37 min read

Center on Long-Term Risk: 2021 Plans & 2020 Review
stefan.torges · 8 Dec 2020 13:39 UTC · 87 points · 3 comments · 13 min read

[Link post] Coordination challenges for preventing AI conflict
stefan.torges · 9 Mar 2021 9:39 UTC · 58 points · 0 comments · 1 min read · (longtermrisk.org)

Takeaways from EAF’s Hiring Round
stefan.torges · 19 Nov 2018 20:50 UTC · 111 points · 22 comments · 16 min read

Apply to CLR as a researcher or summer research fellow!
Chi · 1 Feb 2022 22:24 UTC · 62 points · 5 comments · 10 min read

CLR’s Annual Report 2021
stefan.torges · 26 Feb 2022 12:47 UTC · 79 points · 0 comments · 12 min read

EAF/FRI are now the Center on Long-Term Risk (CLR)
Jonas V · 6 Mar 2020 16:40 UTC · 85 points · 11 comments · 2 min read

Why the Irreducible Normativity Wager (Mostly) Fails
Lukas_Gloor · 14 Jun 2020 13:33 UTC · 25 points · 13 comments · 10 min read

Lukas_Gloor’s Quick takes
Lukas_Gloor · 27 Jul 2020 14:35 UTC · 6 points · 31 comments · 1 min read

Launching the EAF Fund
stefan.torges · 28 Nov 2018 17:13 UTC · 60 points · 14 comments · 4 min read

2021 AI Alignment Literature Review and Charity Comparison
Larks · 23 Dec 2021 14:06 UTC · 176 points · 18 comments · 73 min read

Review of Fundraising Activities of EAF in 2018
stefan.torges · 4 Jun 2019 17:34 UTC · 49 points · 3 comments · 8 min read

Assessing the state of AI R&D in the US, China, and Europe – Part 1: Output indicators
stefan.torges · 1 Nov 2019 14:41 UTC · 21 points · 0 comments · 14 min read

First S-Risk Intro Seminar
stefan.torges · 8 Dec 2020 9:23 UTC · 70 points · 2 comments · 1 min read

How Europe might matter for AI governance
stefan.torges · 12 Jul 2019 23:42 UTC · 52 points · 13 comments · 8 min read

Multiverse-wide cooperation in a nutshell
Lukas_Gloor · 2 Nov 2017 10:17 UTC · 84 points · 10 comments · 16 min read

Why I think the Foundational Research Institute should rethink its approach
MikeJohnson · 20 Jul 2017 20:46 UTC · 45 points · 76 comments · 20 min read

Ingredients for creating disruptive research teams
stefan.torges · 16 May 2019 16:23 UTC · 159 points · 17 comments · 54 min read

Against Irreducible Normativity
Lukas_Gloor · 9 Jun 2020 14:38 UTC · 48 points · 22 comments · 33 min read

Effective Altruism Foundation: Plans for 2019
Jonas V · 4 Dec 2018 16:41 UTC · 52 points · 2 comments · 6 min read

Descriptive Population Ethics and Its Relevance for Cause Prioritization
David_Althaus · 3 Apr 2018 13:31 UTC · 66 points · 8 comments · 13 min read

Center on Long-Term Risk: 2023 Fundraiser
stefan.torges · 9 Dec 2022 18:03 UTC · 169 points · 4 comments · 13 min read

Why Realists and Anti-Realists Disagree
Lukas_Gloor · 5 Jun 2020 7:51 UTC · 61 points · 28 comments · 24 min read

Working at EA organizations series: Effective Altruism Foundation
SoerenMind · 26 Oct 2015 16:34 UTC · 6 points · 2 comments · 2 min read

Case studies of self-governance to reduce technology risk
jia · 6 Apr 2021 8:49 UTC · 55 points · 6 comments · 7 min read

Stefan Torges: Ingredients for building disruptive research teams
EA Global · 18 Oct 2019 8:23 UTC · 8 points · 1 comment · 1 min read · (www.youtube.com)

What Is Moral Realism?
Lukas_Gloor · 22 May 2018 15:49 UTC · 72 points · 30 comments · 31 min read

List of EA funding opportunities
MichaelA🔸 · 26 Oct 2021 7:49 UTC · 173 points · 42 comments · 6 min read

Metaethical Fanaticism (Dialogue)
Lukas_Gloor · 17 Jun 2020 12:33 UTC · 35 points · 10 comments · 15 min read

What 2026 looks like (Daniel’s median future)
kokotajlod · 7 Aug 2021 5:14 UTC · 38 points · 1 comment · 2 min read · (www.lesswrong.com)

Incentivizing forecasting via social media
David_Althaus · 16 Dec 2020 12:11 UTC · 70 points · 19 comments · 15 min read

First application round of the EAF Fund
stefan.torges · 6 Jul 2019 2:14 UTC · 78 points · 4 comments · 3 min read

EA reading list: suffering-focused ethics
richard_ngo · 3 Aug 2020 9:40 UTC · 43 points · 3 comments · 1 min read

2020 AI Alignment Literature Review and Charity Comparison
Larks · 21 Dec 2020 15:25 UTC · 155 points · 16 comments · 68 min read

Announcing the 2023 CLR Summer Research Fellowship
stefan.torges · 17 Mar 2023 12:11 UTC · 81 points · 0 comments · 3 min read

2024 S-risk Intro Fellowship
Center on Long-Term Risk · 12 Oct 2023 19:14 UTC · 89 points · 2 comments · 1 min read

Brian Tomasik on cooperation and peace
Vasco Grilo🔸 · 20 May 2024 17:01 UTC · 27 points · 1 comment · 4 min read · (reducing-suffering.org)

Beginner’s guide to reducing s-risks [link-post]
Center on Long-Term Risk · 17 Oct 2023 0:51 UTC · 129 points · 3 comments · 3 min read · (longtermrisk.org)

S-risk Intro Fellowship
stefan.torges · 20 Dec 2021 17:26 UTC · 52 points · 1 comment · 1 min read

Replicating and extending the grabby aliens model
Tristan Cook · 23 Apr 2022 0:36 UTC · 137 points · 27 comments · 51 min read

How important are accurate AI timelines for the optimal spending schedule on AI risk interventions?
Tristan Cook · 16 Dec 2022 16:05 UTC · 30 points · 0 comments · 6 min read

[Open position] S-Risk Community Manager at CLR
stefan.torges · 22 Sep 2022 13:17 UTC · 58 points · 0 comments · 1 min read

2016 AI Risk Literature Review and Charity Comparison
Larks · 13 Dec 2016 4:36 UTC · 57 points · 12 comments · 28 min read

CLR Summer Research Fellowship 2024
Center on Long-Term Risk · 15 Feb 2024 18:26 UTC · 89 points · 2 comments · 8 min read

Neartermists should consider AGI timelines in their spending decisions
Tristan Cook · 26 Jul 2022 17:01 UTC · 67 points · 4 comments · 4 min read

2018 AI Alignment Literature Review and Charity Comparison
Larks · 18 Dec 2018 4:48 UTC · 118 points · 28 comments · 63 min read

The optimal timing of spending on AGI safety work; why we should probably be spending more now
Tristan Cook · 24 Oct 2022 17:42 UTC · 92 points · 12 comments · 36 min read

Narration: Reducing long-term risks from malevolent actors
D0TheMath · 15 Jul 2021 16:26 UTC · 23 points · 0 comments · 1 min read · (anchor.fm)

2019 AI Alignment Literature Review and Charity Comparison
Larks · 19 Dec 2019 2:58 UTC · 147 points · 28 comments · 62 min read

EAF’s ballot initiative doubled Zurich’s development aid
Jonas V · 13 Jan 2020 11:32 UTC · 309 points · 23 comments · 12 min read

Expression of Interest: Director of Operations at the Center on Long-term Risk
Amrit Sidhu-Brar 🔸 · 25 Jan 2024 18:43 UTC · 55 points · 0 comments · 6 min read

Center on Long-Term Risk: Annual review and fundraiser 2023
Center on Long-Term Risk · 13 Dec 2023 16:42 UTC · 78 points · 3 comments · 4 min read