Lewis Hammond
Apply to the Cooperative AI PhD Fellowship by October 14th!
[Job] Managing Director at the Cooperative AI Foundation ($5000 Referral Bonus)
I’d suggest a city group that aims to attract students from each of the universities in the city, as well as non-students. Hopefully this would make it easier to build a critical mass for the group, as well as connect people from different places. As an aside, I’ll be in Pittsburgh between September and December – see you there!
Refer the Cooperative AI Foundation’s New COO, Receive $5000
The Cooperative AI Foundation (CAIF) is hiring for a Chief Operating Officer.
CAIF is a new charitable entity – backed by an initial endowment of $15 million – whose mission is to support research that will improve the cooperative intelligence of advanced AI for the benefit of all humanity. The role of Chief Operating Officer will be critical for the scaling and smooth running of the foundation, both now and in the years to come.
Learn more here and apply by 10 July 2022, 23:59 UTC (deadline extended from 23 June). Feel free to contact us if you’d like to discuss the role, or CAIF’s work, further before applying.
I’m really excited about this! :)
One further thought on pitching Athena: I think there is an additional, simpler, and possibly less contentious argument about why increasing diversity is valuable for AI safety research, which is basically “we need everyone we can get”. If a large percentage of relevant people don’t feel as welcome/able to work on AI safety because of, e.g., their gender, then that is a big problem. Moreover, it is a big problem even if one doesn’t care about diversity intrinsically, or even if one is sceptical of the benefits of more diverse research teams.
To be clear, I think we should care about diversity intrinsically, but the argument above nicely sidesteps replies of the form “yes, diversity is important, but we need to prioritise reducing AI x-risk above that, and you haven’t given me a detailed story for how diversity in-and-of-itself helps reduce AI x-risk, e.g., one’s gender does not, prima facie, seem very relevant to one’s ability to conduct AI safety research”. This also isn’t to dispute any of your reasons in the post, by the way, merely to add to them :)