I think there are two competing considerations here:
(1) The epistemic community around EA, rationality, and AI safety should stay healthily open to criticism of its key empirical assumptions (e.g., the level of risk from AI, the likelihood of misalignment, etc.).
(2) We should still condemn people who adopt contrarian takes with what seems like unreasonable confidence and then act on those takes in ways that we think are likely doing damage.
In addition, there’s possibly also a question of how much people who benefit from AI safety funding and AI safety association have an obligation not to take unilateral actions that most informed people in the community consider negative. (FWIW, I don’t think that obligation would be absolute even if Epoch had been branded as centrally “AI safety,” and I acknowledge that the branding issue seems contested. Also, it wasn’t Jaime, the founder, who left in this way, and of the people who went off to found the new org, Matthew Barnett, for instance, has been very open about his contrarian takes. So insofar as Epoch’s funders had concerns about the alignment of Epoch’s employees, it was also, to some degree at least, on them to ask for more information or for some kind of guarantee if they felt worried. And maybe this did happen; I’m just flagging that we onlookers don’t necessarily have that information, so it’s not clear whether anyone has violated norms of social cooperation here, or whether we’re just seeing people get close to the boundaries of unilateral action in a way that is still defensible because they never claimed to be more aligned than they were, never accepted funding that came with specific explicit assumptions, etc.)
Imagine delegates for the views you find genuinely appealing. (At the level of individual delegates, I think the original post here is correct: your delegates will either use all their caring capacity for helping insects, or insects will be unimportant to them.) Instead of picking one of these delegates, you go with their compromise solution, which might look something like: “Ask yourself whether you have a comparative advantage at helping insects. If not, stay on the lookout for low-effort ways to help insects and low-effort ways to avoid causing great harm to the cause of helping insects, but otherwise do the things other delegates would prioritize, where you have more of a comparative advantage.”
If you view all of morality as “out there” and objective, this approach might seem a bit unsatisfying, because on that view either insects matter or they don’t. But if Brian Tomasik is right about consciousness, and if morality, even for an effective altruist, is still largely about answering “What motivates me to get up in the morning?” rather than “What’s the one objectively important aim that all effective altruists should pursue?”, then saulius’s point goes through, IMO.
You can adopt a moral parliament view not just as an approach to moral uncertainty, but also as your approach to undecidedness about what to do in light of all the arguments and appeals you find yourself confronted with. There’s no guarantee that the feeling of undecidedness will go away even under ideal conditions for moral reflection, in which case it would probably feel arbitrary and unsatisfying to go with an overall solution that says “insects matter by far the most” or “insects hardly matter at all as a cause area.”