Run For President
An entry for the “EA Criticism and Red Teaming” contest.
I will try to be as simple and boring as possible.
My critique of the EA community is that they should reallocate sufficient resources to run a candidate for president of the United States.
If the point of EA is to do as much good as possible—and I think they have a pretty good handle on what those things should be—then it seems like a key goal should be finding the most powerful levers for accomplishing those things. The work done under this umbrella has been great, but one thing that would help even more is a way to cause much more of it to happen. In particular, there are a lot of incredibly beneficial things that could be done, but the ability to do them is currently gated behind “government.”
Fortunately, boring old representative governments have developed a way to open that gate: winning elections. So, I propose that you try that method. There are a thousand objections to this, some of them very good, but my only defense is that solving the “how to convince enough people to elect you president” problem is probably easier than a lot of other problems, and the payoff seems very high. And even if you fail to win, even moderate success provides (via predictable media tendencies) a far larger platform from which to influence others to do Effective things.
To bring it full circle: I think you should become the “Red Team”—run as a Republican, their party seems more prone to outsider nominations.
I support some people in the EA community taking big bets on electoral politics, but just to articulate some of the objections:
Even compared to other very difficult problems, I’m not sure this is true: exactly one person gets to solve this problem every four years, and the competition is extremely crowded. (Both parties needed two debate stages for their most recent competitive cycles, and in both cases the winner was someone who had been a famous public figure for decades.)
It does provide a larger platform, but politics is also an extremely adversarial epistemic arena: it is far more likely that someone decides they hate EA ideas if an EA is running against a candidate they like. In some cases this trade-off is probably worth it; you might think that convincing a million people is worth tens of millions thinking you’re crazy. But sometimes the people who decide you’re crazy (and a threat to their preferred candidates) will be, for example, influential AI ethicists, which could make it much harder to influence certain decisions later.
So, just saying—it is very difficult and risky, so anyone considering working on this needs to plan carefully!