Political Science Quantitative Methods and EA

Typically, there are five major subfields within political science (as practiced in the US):

1) American Politics

2) Comparative Politics

3) International Relations

4) Political Theory

5) Quantitative Methods

Within quantitative methods, there are two major subdivisions:

  1. Statistical Methods

  2. Formal Modeling

Statistical Methods: This primarily covers applications of probability, statistics, survey research, machine learning, and causal inference to political science. For the most part, these methods are actually applied quite well within political science. My only minor suggestion is that there be more graduate student training on the theoretical underpinnings of probability, statistics, machine learning, and causal inference. There is a lot of excellent training for political science graduate students on a) what is in the current probability/statistics/survey-research/machine-learning/causal-inference toolbox, and b) knowing when and how to deploy different tools in the toolbox. This application training is more important for political science than theoretical training, because political scientists are not typically expected to develop new tools for the toolbox. However, it would be desirable if graduate students were given more training in theoretical foundations, because as the tools in the toolbox change, that theoretical training would allow political scientists to adapt over the decades of their careers. I’m listing some books which I think would greatly improve this training.

  1. Causality: Models, Reasoning, and Inference by Judea Pearl

  2. Foundations of Modern Probability by Olav Kallenberg

  3. Mathematical Statistics: Basic Ideas and Selected Topics, Volumes I & II by Peter Bickel and Kjell Doksum
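
To make the toolbox point concrete, here is a minimal, hypothetical sketch (the variables, effect sizes, and setting are my own illustration, not drawn from any of the books above) of the kind of reasoning Pearl’s framework formalizes: a naive regression of an outcome on a treatment absorbs confounding, while adjusting for the confounder on the backdoor path recovers the causal effect.

```python
# A minimal simulation of confounding and backdoor adjustment (Pearl-style),
# using only numpy; all variable names and effect sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Confounder Z (e.g., district wealth) affects both treatment and outcome.
z = rng.normal(size=n)
# Treatment T (e.g., campaign spending) depends on Z plus noise.
t = 0.8 * z + rng.normal(size=n)
# Outcome Y (e.g., vote share) has a true causal effect of 0.5 from T.
y = 0.5 * t + 1.2 * z + rng.normal(size=n)

# Naive estimate: regress Y on T alone -> biased by the backdoor path T <- Z -> Y.
naive = np.polyfit(t, y, 1)[0]

# Backdoor adjustment: condition on Z by including it in the regression.
X = np.column_stack([t, z, np.ones(n)])
adjusted = np.linalg.lstsq(X, y, rcond=None)[0][0]

print(f"naive slope:    {naive:.3f}")    # noticeably above the true 0.5
print(f"adjusted slope: {adjusted:.3f}")  # close to the true 0.5
```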

Formal Modeling: This covers quantitative methods such as game theory, simulations, expected utility theory, fair division, social choice, and mechanism design. EA could gain a lot of insight from political science here. Some of those areas are:

  • Approximation Algorithms for Finding Nash Equilibria in Social Games

    • Christos Papadimitriou and his colleagues published several papers about fifteen years ago on the computational intractability of finding Nash equilibria in real-world games. We can see a version of this even in comparatively simple games like chess and Go, where optimal play is computationally intractable. DeepMind’s systems can crush any human world champion in these games not because they know optimal play, but because they use approximation algorithms to improve their play. The vast majority of social games in politics are much more complex than chess or Go, so even agents seeking to maximize expected utility will rely on approximation algorithms in real-world politics. Getting a better sense of which approximation algorithms agents actually use, and with what probabilities in particular cases, would significantly improve the predictive power of our models of real-world politics. This will entail greater use of algorithmic game theory in political science formal modeling. (A toy sketch of one classic approximation dynamic, fictitious play, appears after this list.)

  • Redistricting Processes

  • Rethinking Mechanism Design for Social Sciences

    • Roughly speaking, given that agents can misrepresent their beliefs or act strategically, mechanism design is the design of mechanisms that still produce desirable outcomes. For example, Vickrey (second-price) auctions are a mechanism designed so that truthful bidding is a dominant strategy, removing the incentive to bid strategically (a toy sketch of this property also appears after this list). Mechanism design in the social sciences has drawn a lot from cryptography. In cryptography, great importance is placed on making it as computationally intractable as possible to break the code. In a similar vein, there has been great emphasis in social science mechanism design on making it as computationally intractable as possible to figure out an optimal strategy. For example, it is claimed that some group decision-making algorithms are superior to others because it is computationally intractable for a player to find an optimal strategy. But the goals of cryptography and of social processes can be different. In cryptography, the game is typically zero-sum in the sense that you either break the code or you don’t. In social games, like group decision making, the goal is not to play optimally but to play well enough to beat all your opponents. First a CS example, then a political one. Consider a mediocre chess player like myself teaming up with DeepMind’s resources to play the unaided human world chess champion, Magnus Carlsen. I would absolutely crush Carlsen: even though DeepMind’s systems do not know optimal chess play, they know enough to beat any unaided human player. Similarly, in elections, finding optimal play under some election method may be computationally intractable, but a coalition of voters aided by Cambridge Analytica-scale resources could still play well enough to crush competing coalitions at a comparative computational disadvantage. The point is that we need new formal metrics for the goals we might actually have. Saying that finding an optimal strategy under some group decision-making mechanism takes factorial time in the worst case (a more formal way of saying that optimal strategy is computationally intractable) does not by itself make that mechanism any better than another. We need other kinds of metrics, useful for the social sciences, such as a formal version of “it is easy for all players to find an optimal strategy regardless of computational resources” (which would be a way of saying that a well-trained chicken and DeepMind are on even ground competing in tic-tac-toe), or formal criteria for “play at any given level of optimality is practically invariant to computational resources”.
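
To illustrate the approximation-algorithms point above, here is a minimal, hypothetical Python sketch of fictitious play, a classic learning dynamic whose empirical play frequencies approach a Nash equilibrium in two-player zero-sum games. The game (rock-paper-scissors) and the number of iterations are chosen purely for illustration.

```python
# A minimal sketch of fictitious play, one classic approximation/learning
# dynamic for reaching (approximate) Nash equilibria; the game and all
# parameters here are illustrative, not a model of any real political game.
import numpy as np

# Rock-paper-scissors payoff matrix for the row player (zero-sum).
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

counts_row = np.ones(3)   # empirical action counts (start uniform)
counts_col = np.ones(3)

for _ in range(20_000):
    # Each player best-responds to the opponent's empirical mixed strategy.
    col_mix = counts_col / counts_col.sum()
    row_mix = counts_row / counts_row.sum()
    counts_row[np.argmax(A @ col_mix)] += 1        # row maximizes its payoff
    counts_col[np.argmax(-(A.T) @ row_mix)] += 1   # column minimizes row's payoff

print("row empirical mix:", np.round(counts_row / counts_row.sum(), 3))
print("col empirical mix:", np.round(counts_col / counts_col.sum(), 3))
# Both approach (1/3, 1/3, 1/3), the unique Nash equilibrium of rock-paper-scissors.
```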
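
And to illustrate the Vickrey point, here is a minimal, hypothetical Python sketch of why truthful bidding is a dominant strategy in a second-price auction: no misreport yields higher utility than bidding your true value. The specific values and bids are invented for the example.

```python
# A minimal sketch of why the Vickrey (second-price) auction discourages
# strategic bidding: the winner pays the second-highest bid, so shading or
# inflating your bid never raises your utility. All numbers are illustrative.
def vickrey_utility(my_bid, my_value, other_bids):
    """Utility of bidding `my_bid` when my true value is `my_value`."""
    highest_other = max(other_bids)
    if my_bid > highest_other:          # I win and pay the second price
        return my_value - highest_other
    return 0.0                          # I lose and pay nothing

my_value = 10.0
other_bids = [6.0, 8.0]

truthful = vickrey_utility(my_value, my_value, other_bids)
for misreport in [4.0, 7.0, 9.0, 15.0, 50.0]:
    assert vickrey_utility(misreport, my_value, other_bids) <= truthful

print("truthful utility:", truthful)   # 2.0; no misreport does better
```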

Those are just some comments on quantitative methods in political science and EA. I look forward to your comments. Thank you.