
Center on Long-Term Risk

Tag · Last edit: 14 Jul 2022 2:48 UTC by BrownHairedEevee

The Center on Long-Term Risk (CLR) is a research institute that aims to mitigate s-risks from advanced AI. Its research agenda focuses on fostering cooperative behavior among, and avoiding conflict between, transformative AI systems.[1]

History

CLR was founded in July 2013 as the Foundational Research Institute;[2] it adopted its current name in March 2020.[3] CLR is part of the Effective Altruism Foundation.

Funding

As of June 2022, CLR has received over $1.2 million in funding from the Survival and Flourishing Fund.[4]

Further reading

Rice, Issa (2018) Timeline of Foundational Research Institute, Timelines Wiki.

Torges, Stefan (2022) CLR’s annual report 2021, Effective Altruism Forum, February 26.

External links

Center on Long-Term Risk. Official website.

Apply for a job.

Related entries

AI risk | Effective Altruism Foundation | s-risk

  1. ^

  2. ^ Center on Long-Term Risk (2020) Transparency, Center on Long-Term Risk, November.

  3. ^ Vollmer, Jonas (2020) EAF/FRI are now the Center on Long-Term Risk (CLR), Effective Altruism Foundation, March 6.

  4. ^ Survival and Flourishing Fund (2020) SFF-2021-H1 S-process recommendations announcement, Survival and Flourishing Fund.

Effective Altruism Foundation: Plans for 2020
Jonas Vollmer · 23 Dec 2019 11:51 UTC · 82 points · 13 comments · 15 min read · EA link

[Link] EAF Research agenda: “Cooperation, Conflict, and Transformative Artificial Intelligence”
stefan.torges · 17 Jan 2020 13:28 UTC · 64 points · 0 comments · 1 min read · EA link

Reducing long-term risks from malevolent actors
David_Althaus · 29 Apr 2020 8:55 UTC · 289 points · 73 comments · 37 min read · EA link

Center on Long-Term Risk: 2021 Plans & 2020 Review
stefan.torges · 8 Dec 2020 13:39 UTC · 86 points · 3 comments · 13 min read · EA link

[Link post] Coordination challenges for preventing AI conflict
stefan.torges · 9 Mar 2021 9:39 UTC · 48 points · 0 comments · 1 min read · EA link (longtermrisk.org)

Cause prioritization for downside-focused value systems
Lukas_Gloor · 31 Jan 2018 14:47 UTC · 72 points · 10 comments · 48 min read · EA link

Shallow evaluations of longtermist organizations
NunoSempere · 24 Jun 2021 15:31 UTC · 169 points · 34 comments · 34 min read · EA link

Apply to CLR as a researcher or summer research fellow!
Chi · 1 Feb 2022 22:24 UTC · 62 points · 5 comments · 10 min read · EA link

Case studies of self-governance to reduce technology risk
jia · 6 Apr 2021 8:49 UTC · 50 points · 6 comments · 7 min read · EA link

Stefan Torges: Ingredients for building disruptive research teams
EA Global · 18 Oct 2019 8:23 UTC · 8 points · 1 comment · 1 min read · EA link (www.youtube.com)

Effective Altruism Foundation: Plans for 2019
Jonas Vollmer · 4 Dec 2018 16:41 UTC · 52 points · 2 comments · EA link

Why the Irreducible Normativity Wager (Mostly) Fails
Lukas_Gloor · 14 Jun 2020 13:33 UTC · 22 points · 13 comments · 10 min read · EA link

First S-Risk Intro Seminar
stefan.torges · 8 Dec 2020 9:23 UTC · 70 points · 2 comments · 1 min read · EA link

How Europe might matter for AI governance
stefan.torges · 12 Jul 2019 23:42 UTC · 52 points · 13 comments · 8 min read · EA link

Ingredients for creating disruptive research teams
stefan.torges · 16 May 2019 16:23 UTC · 153 points · 15 comments · 54 min read · EA link

Takeaways from EAF’s Hiring Round
stefan.torges · 19 Nov 2018 20:50 UTC · 109 points · 20 comments · 16 min read · EA link

First application round of the EAF Fund
stefan.torges · 6 Jul 2019 2:14 UTC · 78 points · 4 comments · 3 min read · EA link

Launching the EAF Fund
stefan.torges · 28 Nov 2018 17:13 UTC · 60 points · 14 comments · 4 min read · EA link

Review of Fundraising Activities of EAF in 2018
stefan.torges · 4 Jun 2019 17:34 UTC · 49 points · 3 comments · 8 min read · EA link

Assessing the state of AI R&D in the US, China, and Europe – Part 1: Output indicators
stefan.torges · 1 Nov 2019 14:41 UTC · 21 points · 0 comments · 14 min read · EA link

Lukas_Gloor’s Shortform
Lukas_Gloor · 27 Jul 2020 14:35 UTC · 6 points · 23 comments · 1 min read · EA link

Working at EA organizations series: Effective Altruism Foundation
SoerenMind · 26 Oct 2015 16:34 UTC · 6 points · 2 comments · EA link

EA reading list: suffering-focused ethics
richard_ngo · 3 Aug 2020 9:40 UTC · 41 points · 3 comments · 1 min read · EA link

Incentivizing forecasting via social media
David_Althaus · 16 Dec 2020 12:11 UTC · 70 points · 19 comments · 18 min read · EA link

Descriptive Population Ethics and Its Relevance for Cause Prioritization
David_Althaus · 3 Apr 2018 13:31 UTC · 59 points · 9 comments · 13 min read · EA link

What Is Moral Realism?
Lukas_Gloor · 22 May 2018 15:49 UTC · 69 points · 26 comments · 31 min read · EA link

Why Realists and Anti-Realists Disagree
Lukas_Gloor · 5 Jun 2020 7:51 UTC · 59 points · 28 comments · 24 min read · EA link

Multiverse-wide cooperation in a nutshell
Lukas_Gloor · 2 Nov 2017 10:17 UTC · 61 points · 11 comments · EA link

Against Irreducible Normativity
Lukas_Gloor · 9 Jun 2020 14:38 UTC · 44 points · 22 comments · 33 min read · EA link

Metaethical Fanaticism (Dialogue)
Lukas_Gloor · 17 Jun 2020 12:33 UTC · 24 points · 10 comments · 15 min read · EA link

Why I think the Foundational Research Institute should rethink its approach
MikeJohnson · 20 Jul 2017 20:46 UTC · 42 points · 78 comments · EA link

2020 AI Alignment Literature Review and Charity Comparison
Larks · 21 Dec 2020 15:25 UTC · 150 points · 16 comments · 68 min read · EA link

What 2026 looks like (Daniel’s median future)
kokotajlod · 7 Aug 2021 5:14 UTC · 32 points · 1 comment · 2 min read · EA link (www.lesswrong.com)

List of EA funding opportunities
MichaelA · 26 Oct 2021 7:49 UTC · 155 points · 29 comments · 6 min read · EA link

2021 AI Alignment Literature Review and Charity Comparison
Larks · 23 Dec 2021 14:06 UTC · 162 points · 18 comments · 73 min read · EA link

CLR’s Annual Report 2021
stefan.torges · 26 Feb 2022 12:47 UTC · 79 points · 0 comments · 12 min read · EA link

2019 AI Alignment Literature Review and Charity Comparison
Larks · 19 Dec 2019 2:58 UTC · 147 points · 28 comments · 62 min read · EA link

2018 AI Alignment Literature Review and Charity Comparison
Larks · 18 Dec 2018 4:48 UTC · 116 points · 28 comments · 63 min read · EA link

2016 AI Risk Literature Review and Charity Comparison
Larks · 13 Dec 2016 4:36 UTC · 56 points · 22 comments · EA link

EAF’s ballot initiative doubled Zurich’s development aid
Jonas Vollmer · 13 Jan 2020 11:32 UTC · 294 points · 25 comments · 12 min read · EA link

EAF/FRI are now the Center on Long-Term Risk (CLR)
Jonas Vollmer · 6 Mar 2020 16:40 UTC · 85 points · 11 comments · 2 min read · EA link

Narration: Reducing long-term risks from malevolent actors
D0TheMath · 15 Jul 2021 16:26 UTC · 23 points · 0 comments · 1 min read · EA link (anchor.fm)

S-risk Intro Fellowship
stefan.torges · 20 Dec 2021 17:26 UTC · 52 points · 0 comments · 1 min read · EA link