I support some people in the EA community taking big bets on electoral politics, but just to articulate some of the objections:
> solving the “how to convince enough people to elect you president” problem is probably easier than a lot of other problems
Even compared to very difficult other problems, I’m not sure this is true; exactly one person is allowed to solve this problem every four years, and it’s an extremely crowded competition. (Both parties had to have two debate stages for their most recent competitive cycles, and in both cases someone who had been a famous public figure for decades won.)
> And even if you fail to win, even moderately succeeding provides (via predictable media tendencies) a far larger platform to influence others to do Effective things.
It does provide a larger platform, but politics is also an extremely epistemically adversarial arena: people are far more likely to decide they hate EA ideas if an EA is running against a candidate they like. In some cases this trade-off is probably worth it; you might think convincing a million people is worth tens of millions thinking you’re crazy. But sometimes the people who decide you’re crazy (and a threat to their preferred candidates) will be, e.g., influential AI ethicists, which could make it much harder to influence certain decisions later.
So, just saying—it is very difficult and risky, so anyone considering working on this needs to plan carefully!