Executive summary: The author lists 9 projects they would pursue if not working on safety standards, including ambitious interpretability, onboarding senior researchers, extending mentoring pipelines, grantmaking, writing takes, and running the Long-Term Future Fund. They believe technical AI safety is crucial but other work is valuable too, and the community should be more robust.
Key points:
Ambitious mechanistic interpretability research could help understand powerful models and advance AI safety. Projects include defining what counts as an explanation, developing metrics, analyzing neural networks, and balancing explanation quality against realism.
Late-stage project management, such as turning research into proper papers, is valuable for communicating ideas clearly.
Creating concrete research projects and agendas helps onboard new researchers and secure funding. But deep expertise is needed to contribute meaningfully.
Alleviating bottlenecks at Open Philanthropy could increase AI safety funding substantially. Working there directly or designing scalable programs could help.
Increasing funding to other organizations beyond Open Philanthropy would also help the ecosystem. This could involve fundraising, convincing adjacent funders, or earning to give.
Running the Long-Term Future Fund well is important for having an independent grantmaker and funding independent work. But the position seems challenging.
Onboarding senior researchers directly through networking and showcasing promising research helps. Becoming a PhD student also creates opportunities.
Extending mentorship pipelines smooths transitions into full-time AI safety jobs. This involves encouraging PhDs, internships, fellowships, mentoring, and concrete projects.
Writing blog posts clarifies thinking and spreads ideas. But impact depends on audience and uptake.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.