You don’t have to be concerned about somewhat outré ideas (more outré than AI risk, I guess) becoming popular among EAs, since their tractability, i.e. how easily someone can gain widespread support for scaling them up, will necessarily be very limited. That will make them vastly inferior to causes whose importance does enjoy such widespread support. There may be exceptions to this rule, but I think it holds by and large.