I expect we'll also talk a lot to various people outside of RP who have important decisions to make and could potentially be influenced by us, and/or who just have strong expertise and judgement in one or more relevant domains (e.g., major EA funders, EA-aligned policy advisors, strong senior researchers), to get their thoughts on what it'd be most useful to do and the pros and cons of various avenues we might pursue.
(We sort of passively do this in an ongoing way, and I've been doing a bit more recently regarding nuclear risk and AI governance & strategy, but I think we'd probably ramp it up when choosing directions for next year. I'm saying "I think" because the longtermism department hasn't yet done our major end-of-year reflection and next-year planning.)