One said climate change, two said global health, and two said AI safety. Neither of the people who said AI safety had any background in AI. If, after Arete, someone without a background in AI decides that AI safety is the most important issue, then something has likely gone wrong. (Note: prioritizing any non-mainstream cause area after Arete is epistemically shaky. By mainstream, I mean a cause area that someone would have a high prior on.)
This seems like a strange position to me. Do you think people need a background in climate science to decide that climate change is the most important problem, or in development economics to decide that global poverty is the moral imperative of our time? Many people will not have a background relevant to any major problem; are they permitted to have any top priority at all?
I think (apologies if I am misunderstanding you) you try to get around this by suggesting that ‘mainstream’ causes can have much higher priors and lower evidential burdens. But that just seems like deference to wider society, and the process by which mainstream causes became dominant does not seem very epistemically reliable to me.
Overall, this post seems like a grab-bag of loosely connected suggestions, many of which directly contradict each other. For example, you suggest that EA organizations should prefer to hire domain experts over EA-aligned individuals. You also suggest that EA orgs should be run democratically. But if you hire a load of non-EAs and then let them control the org… you don’t have an EA org any more. Similarly, you bemoan that people feel the need to use pseudonyms to express their opinions, and that there is a lack of diversity of political beliefs… and then you criticize named individuals for being ‘worryingly close to racist, misogynistic, and even fascist ideas’ — essentially a classic example of the cancel culture that causes people to choose pseudonyms and causes the movement to be monolithically left-wing.
I think this is in fact a common feature of many of the proposals: they generally seek to reduce what is differentiated about EA. If we adopted all of them, I am not sure there would be anything very distinctive remaining. We would simply be a tiny and interchangeable part of the amorphous blob of left-wing organizations.
It is true this does not apply to all of the proposals. I agree that, for example, EAs should reinvent the wheel less and draw on domain expertise more. But I can’t say this post really caused me to update in favour of those proposals, as opposed to simply happening to include some I already agreed with. I think you would have been better off focusing on a smaller number of proposals and developing the arguments for them in more depth — in particular, considering counterarguments.