
AI Safety Camp

Last edit: 28 Oct 2022 15:58 UTC by Lizka

AI Safety Camp (AISC) is a non-profit initiative that runs programs serving students and early-career researchers who want to work on reducing existential risk from AI.

Funding

As of July 2022, AISC has received $290,000 in funding from the Future Fund,[1] $180,000 from Effective Altruism Funds,[2][3][4][5] and $130,000 from the Survival and Flourishing Fund.[6]

External links

AI Safety Camp. Official website.

Related entries

AI safety | existential risk | Building the field of AI safety

1. Future Fund (2022) Our grants and investments: AI Safety Camp, Future Fund.
2. Long-Term Future Fund (2019) April 2019: Long-Term Future Fund grants and recommendations, Effective Altruism Funds, April.
3. Long-Term Future Fund (2019) August 2019: Long-Term Future Fund grants and recommendations, Effective Altruism Funds, August.
4. Long-Term Future Fund (2019) November 2019: Long-Term Future Fund grants, Effective Altruism Funds, November.
5. Long-Term Future Fund (2021) May 2021: Long-Term Future Fund grants, Effective Altruism Funds, May.
6. Survival and Flourishing Fund (2020) SFF-2021-H2 S-process recommendations announcement, Survival and Flourishing Fund.

Impact Assessment of AI Safety Camp (Arb Research)

Sam Holton · 23 Jan 2024 16:32 UTC
87 points
23 comments · 11 min read · EA link

This might be the last AI Safety Camp

Remmelt · 24 Jan 2024 9:29 UTC
87 points
32 comments · 1 min read · EA link

Apply for AI Safety Camp Toronto 2020!

SebK · 7 Jan 2020 0:15 UTC
7 points
0 comments · 1 min read · EA link

Funding case: AI Safety Camp

Remmelt · 12 Dec 2023 9:05 UTC
45 points
13 comments · 5 min read · EA link
(manifund.org)

AISC 2024 - Project Summaries

Nicky Pochinkov · 27 Nov 2023 22:35 UTC
13 points
1 comment · 18 min read · EA link

The first AI Safety Camp & onwards

Remmelt · 7 Jun 2018 18:49 UTC
25 points
2 comments · 8 min read · EA link

How teams went about their research at AI Safety Camp edition 5

Remmelt · 28 Jun 2021 15:18 UTC
24 points
0 comments · 6 min read · EA link

Announcing the second AI Safety Camp

AnneWissemann · 11 Jun 2018 19:11 UTC
9 points
10 comments · 1 min read · EA link

Podcast: Interview series featuring Dr. Peter Park

Jacob-Haimes · 26 Mar 2024 0:35 UTC
1 point
0 comments · 2 min read · EA link
(into-ai-safety.github.io)

Long-Term Future Fund: August 2019 grant recommendations

Habryka · 3 Oct 2019 18:46 UTC
79 points
70 comments · 64 min read · EA link

2020 AI Alignment Literature Review and Charity Comparison

Larks · 21 Dec 2020 15:25 UTC
155 points
16 comments · 68 min read · EA link

Long-Term Future Fund: May 2021 grant recommendations

abergal · 27 May 2021 6:44 UTC
110 points
17 comments · 57 min read · EA link

Long-Term Future Fund: November 2019 short grant writeups

Habryka · 5 Jan 2020 0:15 UTC
46 points
11 comments · 9 min read · EA link

Formalize the Hashiness Model of AGI Uncontainability

Remmelt · 9 Nov 2024 16:10 UTC
2 points
0 comments · 5 min read · EA link
(docs.google.com)

2021 AI Alignment Literature Review and Charity Comparison

Larks · 23 Dec 2021 14:06 UTC
176 points
18 comments · 73 min read · EA link

AI Safety Camp, Virtual Edition 2023

Linda Linsefors · 6 Jan 2023 0:55 UTC
24 points
0 comments · 1 min read · EA link

A Study of AI Science Models

Eleni_A · 13 May 2023 19:14 UTC
12 points
4 comments · 24 min read · EA link

Projects I would like to see (possibly at AI Safety Camp)

Linda Linsefors · 27 Sep 2023 21:27 UTC
9 points
0 comments · 1 min read · EA link

Apply to lead a project during the next virtual AI Safety Camp

Linda Linsefors · 13 Sep 2023 13:29 UTC
16 points
0 comments · 1 min read · EA link
(aisafety.camp)

Talking to Congress: Can constituents contacting their legislator influence policy?

Tristan Williams · 7 Mar 2024 9:24 UTC
45 points
3 comments · 19 min read · EA link

AISC9 has ended and there will be an AISC10

Linda Linsefors · 29 Apr 2024 10:53 UTC
36 points
0 comments · 1 min read · EA link

Aspiration-based, non-maximizing AI agent designs

Bob Jacobs 🔸 · 7 May 2024 16:13 UTC
12 points
1 comment · 38 min read · EA link

Takeaways from a survey on AI alignment resources

DanielFilan · 5 Nov 2022 23:45 UTC
18 points
9 comments · 6 min read · EA link
(www.lesswrong.com)

AI Safety Camp 10

Robert Kralisch · 26 Oct 2024 11:36 UTC
12 points
0 comments · 18 min read · EA link
(www.lesswrong.com)

Thoughts on AI Safety Camp

Charlie Steiner · 13 May 2022 7:47 UTC
18 points
0 comments · 7 min read · EA link

“Open Source AI” is a lie, but it doesn’t have to be

Jacob-Haimes · 30 Apr 2024 19:42 UTC
15 points
4 comments · 6 min read · EA link
(jacob-haimes.github.io)

Invitation to lead a project at AI Safety Camp (Virtual Edition, 2025)

Linda Linsefors · 23 Aug 2024 14:18 UTC
30 points
2 comments · 1 min read · EA link

Agentic Mess (A Failure Story)

Karl von Wendt · 6 Jun 2023 13:16 UTC
30 points
3 comments · 1 min read · EA link

AI safety starter pack

mariushobbhahn · 28 Mar 2022 16:05 UTC
126 points
13 comments · 6 min read · EA link

INTERVIEW: Round 2 - StakeOut.AI w/ Dr. Peter Park

Jacob-Haimes · 18 Mar 2024 21:26 UTC
8 points
0 comments · 1 min read · EA link
(into-ai-safety.github.io)

A Guide to Forecasting AI Science Capabilities

Eleni_A · 29 Apr 2023 6:51 UTC
19 points
1 comment · 4 min read · EA link

How teams went about their research at AI Safety Camp edition 8

Remmelt · 9 Sep 2023 16:34 UTC
13 points
1 comment · 1 min read · EA link

Machine Learning for Scientific Discovery—AI Safety Camp

Eleni_A · 6 Jan 2023 3:06 UTC
9 points
0 comments · 1 min read · EA link