Re: papers. Arb recently did a writeup and conference submission for Alex Turner; we're happy to help others with writeups or to mentor people who want to try providing this service. DM me for either.
Thanks for sharing this Lawrence, I find it really helpful to hear direct/technical people's views on the gaps they see (as opposed to full-time field or community builders' views)!
This is so well-written!
Executive summary: The author lists 9 projects they would pursue if not working on safety standards, including ambitious interpretability, onboarding senior researchers, extending mentoring pipelines, grantmaking, writing takes, and running the Long-Term Future Fund. They believe technical AI safety is crucial, but that other work is valuable too and that the community should be more robust.
Key points:
Ambitious mechanistic interpretability research could help understand powerful models and advance AI safety. Projects include defining explanations and metrics, analyzing neural networks, and balancing quality against realism.
Late-stage project management, such as turning research into proper papers, is valuable for communicating ideas clearly.
Creating concrete research projects and agendas helps onboard new researchers and secure funding. But deep expertise is needed to contribute meaningfully.
Alleviating bottlenecks at Open Philanthropy could increase AI safety funding substantially. Working there directly or designing scalable programs could help.
Increasing funding to other organizations beyond Open Philanthropy would also help the ecosystem. This could involve fundraising, convincing adjacent funders, or earning to give.
Running the Long-Term Future Fund well is important for having an independent grantmaker and funding independent work. But the position seems challenging.
Onboarding senior researchers directly through networking and showcasing promising research helps. Becoming a PhD student also creates opportunities.
Extending mentorship pipelines smooths transitions into full-time AI safety jobs. This involves encouraging PhDs, internships, fellowships, mentoring, and concrete projects.
Writing blog posts clarifies thinking and spreads ideas. But impact depends on audience and uptake.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.