I suggest people dig for evidence themselves as to whether the program is working.
The first four points you raised seem to rely on prestige or social proof. While those can be good indicators of merit, they are also gameable.
I.e. one program can focus on ensuring it is prestigious (to attract time-strapped alignment mentors and picky grantmakers), while another can decide not to (because it is not willing to sacrifice other aspects it cares about).
If there is one thing you can take away from Linda and me, it is that we do not focus on acquiring prestige. Even the name “AI Safety Camp” is not prestigious; it sounds kind of like a bootcamp. I prefer the name because it keeps away potential applicants who are in it for social admiration or influence.
AISC might not make efficient use of mentor / PI time, which is a key goal of MATS and one of the reasons it’s been successful.
You are welcome to ask the research leads of the current edition.
Note from the Manifund post:
“Resource-efficiency: We are not competing with other programs for scarce mentor time. Instead, we prospect for thoughtful research leads who at some point could become well-recognized researchers.”
All but 2 of the papers listed on Manifund as coming from AISC projects are from 2021 or earlier… Because I’m interested in the current quality in the presence of competing programs, I looked at the two from 2022 or later: this in a second-tier journal and this in a NeurIPS workshop, with no top conference papers.
We also do not focus on getting participants to submit papers to highly selective journals or ML conferences (though those venues are not necessarily selective for the quality of research with regard to preventing AI-induced extinction).
AI Safety Camp is about enabling researchers who are still on the periphery of the community to learn by doing and to test their fit for roles in which they can help ensure that future AI systems are safe.
So the way to see the published papers is this: the organisers did not optimise for publications, and some papers came out anyway.
Most groundbreaking AI Safety research that people now deem valuable was not originally published in a peer-reviewed journal. I do not think we should aim for prestigious venues now.
I would consider published papers as part of a ‘sanity check’ for evaluating editions after the fact.
If the relative number of (weighted) published papers, received grants, and org positions had gone down for later editions, that would have been concerning. You are welcome to do your own analysis here.
Because there seems to be little direct research…
What do you mean by this claim?
If you mean research outputs, I would suggest not just focussing on peer-reviewed papers but also including LessWrong/AF posts. Here is an overview of ~50 research outputs from past camps.
Again, AI Safety Camp acts as a training program for people who are often new to the community. The program is not like MATS in that sense.
It is relevant to consider the quality of research thinking coming out of the camp. If you or someone else has the time to look through some of those posts, I would be curious to get your sense.
Why does the founder, Remmelt Ellen, keep posting things described as…
For the record, I’m at best a co-founder.
Linda was the first camp’s initiator. Credit to her.
Now on to your point:
If you click through Paul’s somewhat hyperbolic comment that “the entire scientific community would probably consider this writing to be crankery” and then consider my response, what are your thoughts on whether that response is reasonable?
I.e. consider whether the response is relevant, soundly premised, and consistently reasoned.
If you really want social proof, consider that the ex-Pentagon engineer whom Paul was reacting to received $170K in funding from SFF and has now discussed the argument in depth for 6 hours with a long-time research collaborator (Anders Sandberg). If you asked Anders about the post on causality limits that a commenter described as “stream of consciousness”, he could explain to you what the author intended to convey.
Perhaps dismissing a new relevant argument out of hand, particularly if it does not match intuitions and motivations common to our community, is not the best move?
Acknowledging here: I should not have shared some of those linkposts, because they were not polished enough and did not do a good job of guiding people through the reasoning about fundamental controllability limits and substrate-needs convergence. That ended up causing more friction. My bad.
→ Edit: more here
The impact assessment was commissioned by AISC, not independent.
This is a valid concern. I have worried about conflicts of interest.
I really wanted the evaluators at Arb to do neutral research, without us organisers getting in the way. Linda and I both emphasised this on an orienting call they invited us to.
From Arb’s side, Gavin deliberately stood back and appointed Sam Holton as the main evaluator, who has no connections with AI Safety Camp. Misha did participate in early editions of the camp though.
All in all, this is reason enough to take the report with a grain of salt. It is worth picking apart the analysis and looking for any unsound premises.
Glad you raised these concerns!