I think it’s worth further distinguishing between political engagement in general and supporting/opposing political candidates or parties, since parties come with a lot of baggage that EA doesn’t want to be committed or tied to, and party politics is more zero-sum. Animal welfare initiatives and the Zurich initiative are political, but they:

- are in line with EA cause prioritization and don’t commit or tie us to anything more than we’re already committed or tied to (e.g. views on other controversial topics);
- don’t touch the usual culture war issues that politics is becoming highly polarized over, issues EAs may find unimportant or be divided on themselves; and
- aren’t so zero-sum within EA because of their narrow focus. While many EAs don’t prioritize those causes and find them wasteful, I think far fewer find them (very) actively harmful (except insofar as they take resources away from more important things). When you support or oppose a politician, there are many dimensions along which they could be good or bad according to a given EA, so you’re more likely to actually do harm by some other EA’s values.