Center on Long-Term Risk

Last edit: 5 Jul 2021 7:18 UTC by EdoArad

The Center on Long-Term Risk (CLR) is a research institute focused on s-risks (risks of astronomical suffering) from emerging technologies.

CLR was founded in July 2013 as the Foundational Research Institute (Center on Long-Term Risk 2020); it adopted its current name in March 2020 (Vollmer 2020). CLR is part of the Effective Altruism Foundation.

Bibliography

Center on Long-Term Risk (2020) Transparency, Center on Long-Term Risk, November.

Rice, Issa (2018) Timeline of Foundational Research Institute, Timelines Wiki.

Vollmer, Jonas (2020) EAF/FRI are now the Center on Long-Term Risk (CLR), Effective Altruism Foundation, March 6.

External links

Center on Long-Term Risk. Official website.

Related entries

Effective Altruism Foundation | s-risk

Posts tagged Center on Long-Term Risk

Effective Altruism Foundation: Plans for 2020
Jonas Vollmer, 23 Dec 2019 11:51 UTC (82 points, 13 comments, 15 min read)

[Link] EAF Research agenda: “Cooperation, Conflict, and Transformative Artificial Intelligence”
stefan.torges, 17 Jan 2020 13:28 UTC (61 points, 0 comments, 1 min read)

Reducing long-term risks from malevolent actors
David_Althaus, 29 Apr 2020 8:55 UTC (270 points, 65 comments, 37 min read)

[Link post] Coordination challenges for preventing AI conflict
stefan.torges, 9 Mar 2021 9:39 UTC (48 points, 0 comments, 1 min read)
(longtermrisk.org)

Shallow evaluations of longtermist organizations
NunoSempere, 24 Jun 2021 15:31 UTC (159 points, 34 comments, 34 min read)

Center on Long-Term Risk: 2021 Plans & 2020 Review
stefan.torges, 8 Dec 2020 13:39 UTC (80 points, 1 comment, 13 min read)

Case studies of self-governance to reduce technology risk
Jia, 6 Apr 2021 8:49 UTC (46 points, 6 comments, 7 min read)

Stefan Torges: Ingredients for building disruptive research teams
EA Global, 18 Oct 2019 8:23 UTC (8 points, 1 comment, 1 min read)
(www.youtube.com)

Effective Altruism Foundation: Plans for 2019
Jonas Vollmer, 4 Dec 2018 16:41 UTC (52 points, 2 comments)

Moral Anti-Realism Sequence #4: Why the Moral Realism Wager Fails
Lukas_Gloor, 14 Jun 2020 13:33 UTC (22 points, 13 comments, 11 min read)

First S-Risk Intro Seminar
stefan.torges, 8 Dec 2020 9:23 UTC (62 points, 2 comments, 1 min read)

How Europe might matter for AI governance
stefan.torges, 12 Jul 2019 23:42 UTC (48 points, 13 comments, 8 min read)

Ingredients for creating disruptive research teams
stefan.torges, 16 May 2019 16:23 UTC (139 points, 14 comments, 54 min read)

Takeaways from EAF’s Hiring Round
stefan.torges, 19 Nov 2018 20:50 UTC (103 points, 20 comments, 16 min read)

First application round of the EAF Fund
stefan.torges, 6 Jul 2019 2:14 UTC (75 points, 4 comments, 3 min read)

Launching the EAF Fund
stefan.torges, 28 Nov 2018 17:13 UTC (60 points, 14 comments, 4 min read)

Review of Fundraising Activities of EAF in 2018
stefan.torges, 4 Jun 2019 17:34 UTC (49 points, 3 comments, 8 min read)

Assessing the state of AI R&D in the US, China, and Europe – Part 1: Output indicators
stefan.torges, 1 Nov 2019 14:41 UTC (19 points, 1 comment, 14 min read)

Lukas_Gloor’s Shortform
Lukas_Gloor, 27 Jul 2020 14:35 UTC (6 points, 16 comments, 1 min read)

Cause prioritization for downside-focused value systems
Lukas_Gloor, 31 Jan 2018 14:47 UTC (65 points, 10 comments, 48 min read)

Working at EA organizations series: Effective Altruism Foundation
SoerenMind, 26 Oct 2015 16:34 UTC (6 points, 2 comments)

EA reading list: suffering-focused ethics
richard_ngo, 3 Aug 2020 9:40 UTC (41 points, 3 comments, 1 min read)

Incentivizing forecasting via social media
David_Althaus, 16 Dec 2020 12:11 UTC (70 points, 19 comments, 18 min read)

Descriptive Population Ethics and Its Relevance for Cause Prioritization
David_Althaus, 3 Apr 2018 13:31 UTC (49 points, 9 comments, 13 min read)

Moral Anti-Realism Sequence #1: What Is Moral Realism?
Lukas_Gloor, 22 May 2018 15:49 UTC (65 points, 26 comments, 31 min read)

Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree
Lukas_Gloor, 5 Jun 2020 7:51 UTC (57 points, 28 comments, 24 min read)

Multiverse-wide cooperation in a nutshell
Lukas_Gloor, 2 Nov 2017 10:17 UTC (43 points, 11 comments)

Moral Anti-Realism Sequence #3: Against Irreducible Normativity
Lukas_Gloor, 9 Jun 2020 14:38 UTC (39 points, 22 comments, 33 min read)

Moral Anti-Realism Sequence #5: Metaethical Fanaticism (Dialogue)
Lukas_Gloor, 17 Jun 2020 12:33 UTC (24 points, 10 comments, 15 min read)

Why I think the Foundational Research Institute should rethink its approach
MikeJohnson, 20 Jul 2017 20:46 UTC (29 points, 78 comments)

2020 AI Alignment Literature Review and Charity Comparison
Larks, 21 Dec 2020 15:25 UTC (141 points, 14 comments, 68 min read)

What 2026 looks like (Daniel’s median future)
kokotajlod, 7 Aug 2021 5:14 UTC (32 points, 1 comment, 2 min read)
(www.lesswrong.com)

2019 AI Alignment Literature Review and Charity Comparison
Larks, 19 Dec 2019 2:58 UTC (146 points, 28 comments, 62 min read)

2018 AI Alignment Literature Review and Charity Comparison
Larks, 18 Dec 2018 4:48 UTC (115 points, 28 comments, 63 min read)

2016 AI Risk Literature Review and Charity Comparison
Larks, 13 Dec 2016 4:36 UTC (53 points, 22 comments)

EAF’s ballot initiative doubled Zurich’s development aid
Jonas Vollmer, 13 Jan 2020 11:32 UTC (291 points, 24 comments, 12 min read)

EAF/FRI are now the Center on Long-Term Risk (CLR)
Jonas Vollmer, 6 Mar 2020 16:40 UTC (85 points, 11 comments, 2 min read)

Narration: Reducing long-term risks from malevolent actors
D0TheMath, 15 Jul 2021 16:26 UTC (23 points, 0 comments, 1 min read)
(anchor.fm)