Rethink Priorities’ AI Governance & Strategy team (which I co-lead) has room for more funding. There’s some info about our work and the work of RP’s other x-risk-focused team* here and elsewhere in that post. One piece of public work by us so far is Understanding the diffusion of large language models: summary. We also have a lot of work that’s unfortunately not public, either because it’s still in progress or due to, e.g., information hazards. I could share some more info via a DM if you want.
We also have yet to release a thorough public overview of the team, but we aim to do so in the coming months.
(*That other team—the General Longtermism team—may also be interested in funding, but I don’t want to speak for them. I could probably connect you with them if you want.)
Oh also, just noticed I forgot to add info on how to donate, in case you or others are interested—that info can be found at https://rethinkpriorities.org/donate
Big fan of RP, thanks for sharing!
Glad to hear that!