Thanks so much for these reflections. Would you consider saying more about which other actions seem most promising to you, beyond articulating a robust case against “hard-core utilitarianism” and improving the community’s ability to identify and warn about bad actors? For the reasons I gave here, I think it would be valuable for leaders in the EA community to be talking much more concretely about opportunities to reduce the risk that future efforts inspired by EA ideas might cause unintended harm.