Just want to highlight the bit where you describe how you exceeded your goals (at least, that’s my takeaway):
As our Gran Canaria “pilot camp” grew in ambition, we implicitly worked towards the outcomes we expected to see for the “main camp”:
Three or more draft papers have been written that are considered to be promising by the research community.
Three or more researchers who participated in the project would obtain funding or a research role in AI safety/strategy in the year following the camp.
It is too soon to say whether the first goal will be met, although with one paper in preparation and one team having already obtained funding, it looks plausible. The second goal was met less than a month after the camp.
Thanks, yeah, perhaps we should have included that in the summary.
Personally, I was impressed by the commitment with which the researchers worked on their problems (and they generally stepped in when dishwashing and other chores needed doing). My sense is that the camp filled a ‘gap in the market’: a small group of young people serious about AI alignment research wanted to work with others to develop their skills and start producing output.
Congrats!