I’m inclined to agree that EAs should think more politically in general.
But the value of specific actions depends on both scale/leverage and the probability of success.
Influencing governments in the short-term has a low probability of success, unless you’re already in a position of power or it’s an issue that is relatively uncontroversial (e.g. with limited trade-offs).
Because of the scale of government spending, it could still be worth trying—but the main value might be in learning lessons on how to get better at influencing in the future, rather than having any immediate impact.
From a long-term perspective, helping humans improve how they govern themselves might be a necessary condition for progress on any other cause. Without it, even miraculous scientific breakthroughs will not produce positive outcomes.