How important are quantitative abilities and your country of citizenship for policy careers?
Dear EA forum,
After reading 80,000 Hours' material, it seems that careers in public policy, governance, and so on are highly recommended. For example, the article titled 'Philosophy Academia' says:
‘A high degree of personal fit for philosophy may suggest a good fit for other less professionally risky and potentially higher impact paths as well, such as a PhD in economics or a career shaping public policy.’
Articles related to law school emphasize the value of policy careers as well, e.g. the 80,000 Hours page on becoming a congressional staffer says this is an excellent path for US lawyers to pursue, the article on corporate law recommends policy careers, and there was recently an excellent post on the forum detailing the pros and cons of law school in the USA (one of the greatest pros being law’s relevance for policy).
However, two things worry me about this view of policy careers, both in my own case and for other people like me. First, like Scott Alexander, I’m not good at (and find it difficult to be interested in) maths, and the gap between my verbal and quantitative scores on aptitude tests is really wide. So when I read about how promising policy careers are in terms of impact, I begin thinking something like: hold on, don’t many people in policy end up predicting the behavior of individuals and states, including in economic contexts? If so, wouldn’t one’s aptitude for that be heavily correlated with mathematical ability? It seems difficult even in principle to imagine social science being any other way; indeed, 80k express dissatisfaction with mainstream historical work for generally lacking quantitative rigor.
One may object that certain areas of policy, such as law, are more verbally focused, e.g. the actual practice of law involves a lot of reading, interpretation, and (maybe?) philosophical acumen. But if I understand 80k correctly, those are precisely the positions they expect to be less impactful. One could also object that a normative (as opposed to descriptive) analysis of policies is going to be verbally focused. That is true, but it is the kind of thing that people study in philosophy departments (and which 80k is less optimistic about).
My second concern is as follows:
Suppose one lives in a place like Australia, New Zealand, or any country with a smaller and less impactful government. Even if one were to become a senior public servant in such a country, could one really have much of an influence on, say, US AI policy? I also wonder if the same is true of elected officials. Even if, say, the Australian prime minister wrote to the US president about AI safety, isn’t the US president likely to (politely) tell him or her to sit down?
So for people who live in smaller countries and have IQ scores heavily skewed towards verbal reasoning, should the 80K advice be reversed? Is philosophy likely to be more promising than these careers in expectation?
(More detail on my reasoning for those who are interested: I am defining ‘philosophy’ very broadly here. This could include theoretical work in fields like psychology, or any sort of research that someone could do without performing arithmetic. I also realize that it is very unlikely that any one philosophy postgrad will produce groundbreaking research, and I assume that, as with many fields, most of one’s expected impact is contained in the counterfactual scenarios where one is spectacularly successful. I am also assuming that a philosophy postgrad has more time than a policy professional for things like community building and becoming familiar with core EA/LessWrong ideas [which seems valuable for all sorts of reasons, including community building ones]. Apologies for this post not being meticulously thought out; I am in a crucial academic period for the next few months, but after that I would really like to consider the above points more thoroughly.)
Thank you very much for your feedback. I will edit this post to try and incorporate it.
I don’t think the philosophy advice hinges on quant vs qual intelligence; it’s more to do with the punishing job market and most philosophers systematically lacking influence on the world.*
Non-US work has many virtues. Every national EA group is an end in itself and an indispensable part of the global funnel, as well as an “experiment in living” which could discover better ways of organising. Working well in any civil service produces positive externalities (you’re decreasing the network distance from the community to that policy world). I would guess it’s proportionately easier for one talented person to influence a smaller country, implement strategies that could then be replicated by other EAs, and so catch the many unforeseen upside risks that come out of country-sized distributions. Some international actions and treaties also give one vote to each country regardless of size.
See Jan Kulveit on local EA groups and this cool case study on AI policy in a small country.
* While, sure, one in a million ends up having ~more impact than the most impactful scientists. The trouble is the unsure sign of this impact.
(Note: the above is not an argument against working in the US, which is probably correctly rated in EA.)
Oh, just one other thing that I found interesting about your post: the article you linked (on the words ‘unsure sign’) takes some pretty pessimistic metaphilosophical positions, e.g. ‘experimental philosophers have the right idea, because at least they’re not relying on intuitions’ (or words to that effect). On a related note, despite loving some of the content that comes out of LessWrong folks, I think I am more optimistic about traditional philosophy than they are. The impression I got from studying metaethics is that intuitions are indispensable when considering moral claims, for instance. I don’t think that evolutionary debunking arguments undermine (at least all of) those intuitions either. However, I will keep an open mind!
Anyway, the point of the above is: I wonder how much these questions regarding career advice depend on certain metaphilosophical views, like how optimistic one is about mainstream research in ethics?
(Though funnily enough, the authors of the 80K article aren’t such pessimists— EG: MacAskill has plenty of ethics papers which feature the method of reflective equilibrium).
Most philosophers will automatically be metaphilosophical optimists. I’d love to know what fraction of the dropouts are pessimists.
Thanks for the response, Gavin. Interesting points. I sort of wonder, though: what other impactful fields are there besides philosophy for people with IQ scores heavily polarized towards verbal reasoning? I use the term philosophy really broadly; it could be theoretical research in other disciplines. For instance, some people on LessWrong think that theoretical issues involving both psychology and philosophy, e.g. akrasia, could be really impactful. However, the kind of theoretical work done in public policy/international relations and so on seems related to philosophy of (social) science, political philosophy, and similar fields. On the empirical side, isn’t it fair to say that the way individuals and states actually behave would be best tackled by someone whose comparative advantage lay in areas such as economic and statistical analysis? I’m finding it difficult to imagine how someone who was verbally skewed could be especially good at the usual scientific description/prediction (wouldn’t doing this well involve either a lot of a priori economic reasoning, or collecting huge amounts of data?)
Just a note about the public service in Australia: it seems like they may be optimizing pretty hard to recruit people with quantitative ability (this is just based on some cursory research of mine, so don’t give my testimony too much weight). For example, when I took tests for Australian Public Service agencies, they emphasized the spatial rotation/arithmetic kind of questions. I’m mediocre at these, but score really high on the verbal measures. Anyway, based on such scores, I would end up working for some second-tier agency (if that) after undergrad. I could go to law school of course, but I don’t know whether the kind of policy work that lawyers do (traditional law, not the newer ‘law and economics’ stuff) is likely to be more impactful than the sort that ethicists/political philosophers do.
The other thing which puts me (and probably lots of other ‘philosophy’ types) off the public service is that I don’t know how patient I could be performing mundane tasks for years without good opportunities to distinguish myself.
Also, thank you for the links, I will check them out later! I will give your points regarding the value of political/policy positions in smaller governments some thought. My initial (perhaps misguided) impression is that in areas like AI safety, countries like the US and China may not pay much attention to what Australian politicians say. But if an Australian helps the EA/LessWrong communities (or others) solve some sort of philosophical problem (even in a small way), the downstream effects of that might be sizable in those countries.
Lots of things for verbal types to do! Just one: it turns out that precise writing is in very short supply; I know great researchers who are way more productive with writing support.
I also encourage you not to take the tests too seriously. Nor your current dislikes. I’m a philosophy type, but I made myself technical enough for an AI PhD, slowly overcoming a heavy bias against maths. It is unlikely that you couldn’t do the same if you wanted.
I suppose the trouble with tests in the context of the public service is that getting a good score on them is necessary to be hired. Further, I am skeptical that training can improve one’s skills on tasks like spatial rotation (as evidence for this, IQ scores are pretty stable across a person’s lifespan). I’m leaning towards agreeing with what Scott Alexander says in the article I linked here— he does a good job of humorously laying out what seems to be a common response to people claiming that they’re not as good at or interested in maths (and why he thinks this response misses the mark in his own case).
But even if I could scrape by, I have the following worry: to what extent is general policy work actually improved by the areas where I have a comparative advantage? Yes, the point about precise writing is a good one. But to the extent that my understanding of ethics is better than the average person’s (which I would say is at least plausible of EAs in general), I’m not sure public service jobs present many opportunities to make use of that understanding. My general impression of the public service is that either you’re given quite specific tasks to perform (and I doubt I would be exceptional at said tasks relative to other ambitious young public servants), or perhaps at higher levels you’re given some quite general end and then propose efficient means of achieving it. In the second case, it seems like being good at economics and so on (which I am not) would be great.
I am more optimistic about some sort of political role, because intuitively, political parties spend more time putting forward ethical arguments than the public service. But I have another worry about politics— suppose you end up in one of the (perhaps uncommon) possible worlds where you gain some measure of political influence. Isn’t a large part of contemporary politics just improving the efficiency of basic services? If so, if you end up taking the place of someone who knew a lot about economics (suppose you got there by being charismatic, a good public speaker and so on), couldn’t this result in things being less efficient? Or is it your impression that politicians can pretty safely pass economic policy on to the experts, and spend most of their time putting forward ethical arguments and so on?
Thanks again for your responses!
On the topic of policy work in smaller countries, Founders Pledge write the following in their article about Longtermist Institutional Reform (with my emphasis added in bold):
Also, have a look at this blog post: ‘Why scale is overrated: The case for increasing EA policy efforts in smaller countries’ (EA Forum, effectivealtruism.org).
If one accepts the assumptions above, another reason to work on policy change in smaller countries (as well) is to run efforts in parallel. Significant policy change is often dependent on policy windows and luck, and the more efforts run in parallel, the bigger the chance of success in at least one country. Also, the risk of failure (and potentially politicizing the issue for good) is smaller in a small country. After several parallel attempts, policy advocates in the US (or other large countries) can refer to the successful example in whichever smaller country the campaign succeeded (NZ, Australia, the Nordic countries, etc.).
Thank you Eirik. This certainly bears thinking about. At the moment I am struggling to see whether the counterarguments people raised in that thread (e.g. that the US federal government is more likely to follow decisions made by smaller state/local governments than overseas ones) go through or not. I ought to research this more before making my next career decision in a few months’ time!
When RyanCarey raised the issue of AI safety in that thread (and how influential smaller states would be regarding AI policy), I was interested to see Jakob respond:
‘You’re right that if your main concern is linked to specific, urgent causes, you may prefer more direct routes to impact in the countries that matter most.’
This sounds like me— at the moment (at least certain aspects of) longtermism seem pretty persuasive to me, as do the arguments that Ord and co. make regarding the probability of AGI happening within the next century.
Of course, it could be that I am failing to consider certain important points about Australia’s potential influence on AI development/safety.