
Machine Intelligence Research Institute


The Machine Intelligence Research Institute (MIRI) is a non-profit research institute. Its mission is “to develop formal tools for the clean design and analysis of general-purpose AI systems, with the intent of making such systems safer and more reliable when they are developed.”[1]

History

MIRI was founded in 2000 as the Singularity Institute for Artificial Intelligence by Brian Atkins, Sabine Atkins and Eliezer Yudkowsky.[2] It adopted its current name in 2013.[3]

Funding

As of July 2022, MIRI has received over $14.7 million in funding from Open Philanthropy,[4] nearly $900,000 from the Survival and Flourishing Fund,[5][6] and over $670,000 from Effective Altruism Funds.[7][8][9][10]

Further reading

Karnofsky, Holden (2012) Thoughts on the Singularity Institute (SI), LessWrong, May 11.

LessWrong (2020) Machine Intelligence Research Institute (MIRI), LessWrong Wiki, July 9.

Muehlhauser, Luke (2015) ‘Machine Intelligence Research Institute’, in Ryan Carey (ed.), The Effective Altruism Handbook, 1st ed., Oxford: The Centre for Effective Altruism, pp. 127–129.

Rice, Issa et al. (2017) Timeline of Machine Intelligence Research Institute, Timelines Wiki, June 30 (updated 19 June 2022).

External links

Machine Intelligence Research Institute. Official website.

Apply for a job.

Donate to MIRI.

Related entries

AI alignment | Center for Applied Rationality | Eliezer Yudkowsky | rationality community

1. Machine Intelligence Research Institute (2021) About MIRI, Machine Intelligence Research Institute.

2. Singularity Institute for Artificial Intelligence (2000) About SIAI, Singularity Institute for Artificial Intelligence.

3. Muehlhauser, Luke (2013) We are now the “Machine Intelligence Research Institute” (MIRI), Machine Intelligence Research Institute, January 30.

4. Open Philanthropy (2022) Grants database: Machine Intelligence Research Institute, Open Philanthropy.

5. Survival and Flourishing Fund (2019a) SFF-2020-H1 S-process recommendations announcement, Survival and Flourishing Fund.

6. Survival and Flourishing Fund (2019b) SFF-2020-H2 S-process recommendations announcement, Survival and Flourishing Fund.

7. Long-Term Future Fund (2018) July 2018: Long-Term Future Fund grants, Effective Altruism Funds, July.

8. Long-Term Future Fund (2018) November 2018: Long-Term Future Fund grants, Effective Altruism Funds, November.

9. Long-Term Future Fund (2019) April 2019: Long-Term Future Fund grants and recommendations, Effective Altruism Funds, April.

10. Long-Term Future Fund (2020) April 2020: Long-Term Future Fund grants and recommendations, Effective Altruism Funds, April.

Visible Thoughts Project and Bounty Announcement
So8res, 30 Nov 2021 0:35 UTC
35 points, 2 comments, 12 min read, EA link

MIRI 2024 Mission and Strategy Update
Malo, 5 Jan 2024 1:10 UTC
153 points, 38 comments, 1 min read, EA link

AGI Ruin: A List of Lethalities
EliezerYudkowsky, 6 Jun 2022 23:28 UTC
162 points, 53 comments, 30 min read, EA link (www.lesswrong.com)

2020 AI Alignment Literature Review and Charity Comparison
Larks, 21 Dec 2020 15:25 UTC
155 points, 16 comments, 70 min read, EA link

Let’s conduct a survey on the quality of MIRI’s implementation
Robert_Wiblin, 19 Feb 2016 7:18 UTC
11 points, 21 comments, 3 min read, EA link

2021 AI Alignment Literature Review and Charity Comparison
Larks, 23 Dec 2021 14:06 UTC
176 points, 18 comments, 75 min read, EA link

Navigating the epistemologies of effective altruism
Ozzie_Gooen, 23 Sep 2013 19:50 UTC
0 points, 1 comment, 5 min read, EA link

MIRI posts its technical research agenda [link]
RyanCarey, 24 Dec 2014 0:27 UTC
4 points, 1 comment, 3 min read, EA link

Why I’m donating to MIRI this year
Owen Cotton-Barratt, 30 Nov 2016 22:21 UTC
34 points, 31 comments, 7 min read, EA link

My current thoughts on MIRI’s “highly reliable agent design” work
Daniel_Dewey, 7 Jul 2017 1:17 UTC
60 points, 59 comments, 19 min read, EA link

Intro to caring about AI alignment as an EA cause
So8res, 14 Apr 2017 0:42 UTC
28 points, 10 comments, 25 min read, EA link

MIRI’s 2018 Fundraiser
Malo, 27 Nov 2018 6:22 UTC
20 points, 1 comment, 7 min read, EA link

MIRI 2017 Fundraiser and Strategy Update
Malo, 1 Dec 2017 20:06 UTC
6 points, 4 comments, 12 min read, EA link

MIRI is seeking an Office Manager / Force Multiplier
RobBensinger, 5 Jul 2015 19:02 UTC
8 points, 1 comment, 7 min read, EA link

Ask MIRI Anything (AMA)
RobBensinger, 11 Oct 2016 19:54 UTC
18 points, 77 comments, 1 min read, EA link

MIRI’s 2019 Fundraiser
Malo, 7 Dec 2019 0:30 UTC
19 points, 2 comments, 9 min read, EA link

I’m Buck Shlegeris, I do research and outreach at MIRI, AMA
Buck, 15 Nov 2019 22:44 UTC
122 points, 228 comments, 2 min read, EA link

MIRI Summer Fellows Program: Applications open
colm, 23 Feb 2019 4:49 UTC
13 points, 1 comment, 2 min read, EA link

MIRI Update and Fundraising Case
So8res, 9 Oct 2016 22:05 UTC
18 points, 16 comments, 10 min read, EA link

CHCAI/MIRI research internship in AI safety
ElizabethBarnes, 17 Feb 2017 11:26 UTC
14 points, 2 comments, 2 min read, EA link

MIRI Conversations: Technology Forecasting & Gradualism (Distillation)
TheMcDouglas, 13 Jul 2022 10:45 UTC
27 points, 9 comments, 19 min read, EA link