
AI Safety Camp

Last edit: Jan 21, 2025, 12:11 PM by Remmelt

AI Safety Camp (AISC) is a non-profit initiative that runs programs serving students and early-career researchers who want to work on reducing existential risk from AI.

Funding

As of July 2022, AISC had received $290,000 in funding from the Future Fund,[1] $180,000 from Effective Altruism Funds,[2][3][4][5] and $130,000 from the Survival and Flourishing Fund.[6]

External links

AI Safety Camp. Official website.

Related entries

AI safety | existential risk | Building the field of AI safety

  1. Future Fund (2022) Our grants and investments: AI Safety Camp, Future Fund.
  2. Long-Term Future Fund (2019) April 2019: Long-Term Future Fund grants and recommendations, Effective Altruism Funds, April.
  3. Long-Term Future Fund (2019) August 2019: Long-Term Future Fund grants and recommendations, Effective Altruism Funds, August.
  4. Long-Term Future Fund (2019) November 2019: Long-Term Future Fund grants, Effective Altruism Funds, November.
  5. Long-Term Future Fund (2021) May 2021: Long-Term Future Fund grants, Effective Altruism Funds, May.
  6. Survival and Flourishing Fund (2021) SFF-2021-H2 S-process recommendations announcement, Survival and Flourishing Fund.

Posts tagged with AI Safety Camp

Impact Assessment of AI Safety Camp (Arb Research)
Sam Holton · Jan 23, 2024, 4:32 PM · 87 points · 23 comments · 11 min read · EA link

This might be the last AI Safety Camp
Remmelt · Jan 24, 2024, 9:29 AM · 87 points · 32 comments · 1 min read · EA link

Apply for AI Safety Camp Toronto 2020!
SebK · Jan 7, 2020, 12:15 AM · 7 points · 0 comments · 1 min read · EA link

Formalize the Hashiness Model of AGI Uncontainability
Remmelt · Nov 9, 2024, 4:10 PM · 2 points · 0 comments · 5 min read · EA link (docs.google.com)

Funding case: AI Safety Camp 10
Remmelt · Dec 12, 2023, 9:05 AM · 45 points · 13 comments · 5 min read · EA link (manifund.org)

AISC 2024 - Project Summaries
Nicky Pochinkov · Nov 27, 2023, 10:35 PM · 13 points · 1 comment · 18 min read · EA link

The first AI Safety Camp & onwards
Remmelt · Jun 7, 2018, 6:49 PM · 25 points · 2 comments · 8 min read · EA link

How teams went about their research at AI Safety Camp edition 5
Remmelt · Jun 28, 2021, 3:18 PM · 24 points · 0 comments · 6 min read · EA link

Announcing the second AI Safety Camp
AnneWissemann · Jun 11, 2018, 7:11 PM · 9 points · 10 comments · 1 min read · EA link

Long-Term Future Fund: August 2019 grant recommendations
Habryka [Deactivated] · Oct 3, 2019, 6:46 PM · 79 points · 70 comments · 64 min read · EA link

Long-Term Future Fund: May 2021 grant recommendations
abergal · May 27, 2021, 6:44 AM · 110 points · 17 comments · 57 min read · EA link

Podcast: Interview series featuring Dr. Peter Park
Jacob-Haimes · Mar 26, 2024, 12:35 AM · 1 point · 0 comments · 2 min read · EA link (into-ai-safety.github.io)

Long-Term Future Fund: November 2019 short grant writeups
Habryka [Deactivated] · Jan 5, 2020, 12:15 AM · 46 points · 11 comments · 9 min read · EA link

2020 AI Alignment Literature Review and Charity Comparison
Larks · Dec 21, 2020, 3:25 PM · 155 points · 16 comments · 68 min read · EA link

Projects I would like to see (possibly at AI Safety Camp)
Linda Linsefors · Sep 27, 2023, 9:27 PM · 9 points · 0 comments · 1 min read · EA link

2021 AI Alignment Literature Review and Charity Comparison
Larks · Dec 23, 2021, 2:06 PM · 176 points · 18 comments · 73 min read · EA link

AI Safety Camp, Virtual Edition 2023
Linda Linsefors · Jan 6, 2023, 12:55 AM · 24 points · 0 comments · 1 min read · EA link

A Study of AI Science Models
Eleni_A · May 13, 2023, 7:14 PM · 12 points · 4 comments · 24 min read · EA link

Thoughts on AI Safety Camp
Charlie Steiner · May 13, 2022, 7:47 AM · 18 points · 0 comments · 7 min read · EA link

Apply to lead a project during the next virtual AI Safety Camp
Linda Linsefors · Sep 13, 2023, 1:29 PM · 16 points · 0 comments · 1 min read · EA link (aisafety.camp)

Talking to Congress: Can constituents contacting their legislator influence policy?
Tristan Williams · Mar 7, 2024, 9:24 AM · 47 points · 3 comments · 19 min read · EA link

AISC9 has ended and there will be an AISC10
Linda Linsefors · Apr 29, 2024, 10:53 AM · 36 points · 0 comments · 1 min read · EA link

Aspiration-based, non-maximizing AI agent designs
Bob Jacobs · May 7, 2024, 4:13 PM · 12 points · 1 comment · 38 min read · EA link

Takeaways from a survey on AI alignment resources
DanielFilan · Nov 5, 2022, 11:45 PM · 20 points · 9 comments · 6 min read · EA link (www.lesswrong.com)

AI Safety Camp 10
Robert Kralisch · Oct 26, 2024, 11:36 AM · 15 points · 0 comments · 18 min read · EA link (www.lesswrong.com)

Funding Case: AI Safety Camp 11
Remmelt · Dec 23, 2024, 8:39 AM · 42 points · 2 comments · 6 min read · EA link (manifund.org)

We don’t want to post again “This might be the last AI Safety Camp”
Remmelt · Jan 21, 2025, 12:03 PM · 34 points · 2 comments · 1 min read · EA link (manifund.org)

Machine Learning for Scientific Discovery - AI Safety Camp
Eleni_A · Jan 6, 2023, 3:06 AM · 9 points · 0 comments · 1 min read · EA link

A Guide to Forecasting AI Science Capabilities
Eleni_A · Apr 29, 2023, 6:51 AM · 19 points · 1 comment · 4 min read · EA link

INTERVIEW: Round 2 - StakeOut.AI w/ Dr. Peter Park
Jacob-Haimes · Mar 18, 2024, 9:26 PM · 8 points · 0 comments · 1 min read · EA link (into-ai-safety.github.io)

Agentic Mess (A Failure Story)
Karl von Wendt · Jun 6, 2023, 1:16 PM · 30 points · 3 comments · 1 min read · EA link

How teams went about their research at AI Safety Camp edition 8
Remmelt · Sep 9, 2023, 4:34 PM · 13 points · 1 comment · 1 min read · EA link

AI safety starter pack
mariushobbhahn · Mar 28, 2022, 4:05 PM · 126 points · 13 comments · 6 min read · EA link

“Open Source AI” is a lie, but it doesn’t have to be
Jacob-Haimes · Apr 30, 2024, 7:42 PM · 15 points · 4 comments · 6 min read · EA link (jacob-haimes.github.io)

Invitation to lead a project at AI Safety Camp (Virtual Edition, 2025)
Linda Linsefors · Aug 23, 2024, 2:18 PM · 30 points · 2 comments · 1 min read · EA link