Post summary (feel free to suggest edits!):
The author asks whether EA aims to be a question (how can we do the most good?) or a community built around an ideology. In their experience it has mainly been the latter, but many EAs have expressed that they'd prefer it to be the former.
They argue the best concrete step toward EA as a question would be to collaborate more with people outside the EA community, without attempting to bring them into the community. This includes policymakers at the local and national levels, people with years of expertise in the fields EA works in, and the people most affected by EA-backed programs.
Specific ideas include EAG actively recruiting such people, EA groups co-hosting more meetups with other communities, EA orgs measuring the preferences of those affected by their programs, applying evidence-based decision-making to all fields (not just top cause areas), engaging with people and critiques outside the EA ecosystem, funding and collaborating with non-EA orgs (e.g., via grants), and EA orgs hiring non-EAs.
(If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Post summary (feel free to suggest edits!):
Rob paraphrases Nate’s thoughts on capabilities work and the landscape of AGI organisations. Nate thinks:
Capabilities work is a bad idea: it isn't needed for alignment to progress, and it could shorten timelines. We already have plenty of ML systems to study, and our understanding of them lags well behind. Publishing that work is even worse.
He appreciates OpenAI’s charter, its openness to talking with EAs / rationalists, its clearer alignment effort than FAIR or Google Brain, and its transparency about its plans. He considers DeepMind roughly on par with OpenAI, and Anthropic slightly ahead, in taking alignment seriously.
OpenAI, Anthropic, and DeepMind are unusually safety-conscious AI capabilities orgs (e.g., much better than FAIR or Google Brain). But reality doesn’t grade on a curve: there is still a lot to improve, and they should still call a halt to mainstream SotA-advancing, potentially-AGI-relevant ML work, since the timeline-shortening harms currently outweigh the benefits.
(If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)