One thing we know for certain is that we will be doing AI Governance and Strategy work. We have not yet decided on the other avenues; I think we will decide them largely based on who we hire for our roles, and in consultation with those hires once they join, coming to agreement as a team. I do think there is a lot to contribute in every field, but we will weigh neglectedness and our comparative advantage in figuring out what to work on.
I expect we’ll also talk a lot to various people outside of RP who have important decisions to make and could potentially be influenced by us, and/or who have strong expertise and judgement in one or more relevant domains (e.g., major EA funders, EA-aligned policy advisors, strong senior researchers), to get their thoughts on what it’d be most useful for us to do and the pros and cons of the various avenues we might pursue.
(We already do this somewhat passively on an ongoing basis, and I’ve been doing a bit more of it recently regarding nuclear risk and AI governance & strategy, but I think we’d probably ramp it up when choosing directions for next year. I say “I think” because the longtermism department hasn’t yet done its major end-of-year reflection and next-year planning.)