This is good to know! I’m glad that the experience helped you get involved in AI Safety work.
Could you search for the LTFF grant here and provide me the link? I must have missed it in my searches.
(Also, it looks like I missed two of the four alumni working at Apollo. Will update!)
I appreciate you sharing this. I’ll add it to our list of anecdotes.
We also welcome people sharing any setbacks or negative experiences they had. We want to know if people have sucky experiences so we can find ways to make it not sucky next time. Hoping to get a more comprehensive sense of this from Arb Research’s survey!
It turns out there are six AI Safety Camp alumni working at Apollo, including the two co-founders. I’ve got to go through alumni’s LinkedIn profiles to update our records of post-camp positions.
It’s on my to-do list.
Helpful comment from you, Lucius, in the sheet:
“I think our first follow-up grant was 125k USD. Should be on the LTFF website somewhere. There were subsequent grants also related to the AISC project though. And Apollo Research’s interpretability agenda also has some relationship with ideas I developed at AISC.”
--> I updated the sheet.