Here are some things I might do:
Inside AI alignment:
I don’t know if this is sufficiently different from my current work to be an interesting answer, but I can imagine wanting to work on AIRCS full time, possibly expanding it in the following ways:
Run more of the workshops. This would require training more staff. For example, Anna Salamon currently leads all the workshops and wouldn’t have time to run twice as many.
Expand the scope of the workshops to non-CS people, for example non-technical EAs.
Expand the focus of the workshop from AI to something more general. Eg I’ve been thinking about running something tentatively called “Future Camp”, where people come in and spend a few days thinking and learning about longtermism and what the future is going to be like, with the goal of equipping them to think more clearly about futurism questions like the timelines of various transformative technologies and what can be done to make those technologies go better.
Make the workshop more generally about EA. The idea would be for the workshop to do the same kind of thing that EA Global tries to do for relatively new EAs: expose them to more experienced EAs and to content that will be helpful for them, and help them network with each other and think more deeply about EA ideas. This is similar to what CFAR workshops do, but it would focus on inducting people into the EA community rather than the rationalist community; CFAR workshops sort of fill this role already, but IMO they could be more optimized for it.
Learn more ML, figure out what I think needs to happen to make ML-flavored AI alignment go well, and then work on those things.
Try to write up the case for skepticism of various approaches to ML-based AGI alignment, eg the approaches of Paul Christiano and Chris Olah. These people deserve better rebuttals from a MIRI-style perspective than I think they’ve gotten so far; the main reason they haven’t is that writing things up is hard and time-consuming.
Other:
Work on EA outreach some other way, through programs like EA residencies or the SSC tour.
Work on a particular project which some people I know are working on, which isn’t public at the moment. I think it has the potential to be really impactful from a longtermist perspective.
Work on reducing s-risks