Honestly it seems kind of weird that on the EA Forum there isn’t just a checkbox for this.
I’ve often thought that there should be separate “phatic” and “substantive” comment sections.
What does “IMPCO” mean? Search engines are failing me.
The Fun Theory Sequence (which is on a similar topic) had some things to say about the Culture.
Obligatory link to Scott Alexander’s “Ambijectivity” regarding the contentiousness of defining great art.
In the last paragraph, did you mean to write “the uncertainty surrounding the expected value of each policy option is high”?
While true, I think most proposed EA policy projects are much too small in scope to be able to move the needle on trust, and so need to take the currently-existing level of trust as a given.
I agree that the word ‘populism’ is very prone to misunderstandings, but I think the term ‘technocracy’ is acceptably precise. While precision is important, I think we should balance this against the benefits of using more common words, which make it easier for the reader to make connections with other arguments in favour of or against a concept.
I should clarify: I think the misunderstandings are symptoms of a deeper problem, which is that the concept of “technocracy” is too many different things rolled into one word. This isn’t about jargon vs. non-jargon; substituting a more jargon-y word doesn’t help. (I think this is part of why it’s taken on such negative connotations, because people can easily roll anything they don’t like into it; that’s not itself a strong reason not to use it, but it’s illustrative.)
“Technocracy” works okay-ish in contexts like this thread where we’re all mostly speaking in vague generalities to begin with, but when discussing specific policies or even principles for thinking about policy, “I think this is too technocratic” just isn’t helpful. More specific things like “I think this policy exposes the people executing it to too much moral hazard”, or “I think this policy is too likely to have unknown-unknowns that some other group of people could have warned us about”, are better. Indeed, those are very different concerns and I see no reason to believe that EA-in-general errs the same amount, or even in the same direction, for each of them. (If words like “moral hazard” are too jargon-y then you can just replace them with their plain-English definitions.)
I also think that EAs haven’t sufficiently considered populism as a tool to deal with moral uncertainty.
I agree that there hasn’t been much systematic study of this question (at least not that I’m aware of), and maybe there should be. That being said, I’m deeply skeptical that it’s a good idea, and I think most other EAs who’ve considered it are too, which is why you don’t hear it proposed very often.
Some reasons for this include:
The public routinely endorses policies or principles that are nonsensical or would obviously result in terrible outcomes. Examples include Philip Tetlock’s research on taboo tradeoffs [PDF], and this poll from Reuters (h/t Matt Yglesias): “Nearly 70 percent of Americans, including a majority of Republicans, want the United States to take ‘aggressive’ action to combat climate change—but only a third would support an extra tax of $100 a year to help.”
You kind of can’t ask the public what they think about complicated questions; the public is very diverse and there’s a lot of inferential distance. You can do things like polls, but those are often only proxies for what you really want to know, and pollster degrees-of-freedom can bias the results.
When EAs look back on history and ask ourselves what we would/should have done if we’d been around then—particularly on questions (like whether slavery is good or bad) whose morally correct answers are no longer disputed—it looks like we would/should have sided with technocrats over populists much more often than the reverse. A commonly-cited example is William Wilberforce, largely responsible for the abolition of slavery in the British Empire. Admittedly, I’d like to see some attempt to check how representative this is (though I don’t expect that question to be answerable comprehensively).
I am not convinced that there is much thinking amongst EAs about experts misusing technocracy by focusing on their own interests.
In at least one particular case (AI safety), a somewhat deliberate decision was made to deemphasize this concern, because of a belief not only that it’s not the most important concern, but that focus on it is actively harmful to concerns that are more important.
For example, Eliezer (who pioneered the argument for worrying about accident risk from advanced AI) contends that the founding of OpenAI was an instance of this. In his telling, DeepMind had previously had a quasi-monopoly on capacity to make progress towards transformative AI, because no other well-resourced actors were working seriously on the problem. This allowed them to have a careful culture about safety and to serve as a coordination point, so that all safety-conscious AI researchers around the world could work towards the common goal of not deploying something dangerous. Elon Musk was dissatisfied with the amount of moral hazard that this exposed DeepMind CEO Demis Hassabis to, so he founded a competing organization with the explicit goal of eliminating moral hazard from advanced AI by giving control of it to everyone (as is reflected in their name, though they later pivoted away from this around the time Musk stopped being involved). This forced both organizations to put more emphasis on development speed, lest the other one build transformative AI first and do something bad with it, and encouraged other actors to do likewise by destroying the coordination point. The result is a race to the precipice [PDF], where everyone has to compromise on safety and therefore accident risk is dramatically more likely.
More generally, politics is fun to argue about and people like to look for villains, so there’s a risk that emphasis on person-vs.-person conflicts sucks up all the oxygen and accident risk doesn’t get addressed. This is applicable more broadly than just AI safety, and is at least an argument for being careful about certain flavors of discourse.
One prominent dissenter from this consensus is Andrew Critch from CHAI; you can read the comments on his post for some thoughtful argument among EAs working on AI safety about this question.
I’m not sure what to think about other kinds of policies that EA cares about; I can’t think of very many off the top of my head that have large amounts of the kind of moral hazard that advanced AI has. This seems to me like another kind of question that has to be answered on a case-by-case basis.
I don’t think there has been much thinking about whether equally distributed political power should or should not be an end in itself.
On the current margin, that’s not really the question; the question is whether it’s an end-in-itself whose weight in the consequentialist calculus should be high enough to overcome other considerations. I don’t feel any qualms about adopting “no” as a working answer to that question. I do think I value this to some extent, and I think it’s right and good for that to affect my views on rich-country policies where the stakes are relatively low, but in the presence of (actual or expected future) mass death or torture, as is the case in the cause areas EA prioritizes, I think these considerations have to give way. It’s not impossible that something could change my mind about this, but I don’t think it’s likely enough that I want to wait for further evidence before going out and doing things.
Of course, there are a bunch of ways that unequally distributed political power could cause problems big enough that EAs ought to worry about them, but now you’re no longer talking about it as an end-in-itself, but rather as a means to some other outcome.
it seems fairly clear to me that more populism is preferable under higher uncertainty, and more technocracy is preferable when plausible policy options have a greater range of expected values.
I’m sorry, I don’t understand what the difference is between those things.
I think someone should research policy changes in democratic countries which counterfactually led to the world getting a lot better or worse (under a range of different moral theories, and under public opinion), and the extent to which these changes were technocratic or populist. This would be useful to establish the track records of technocracy and populism, giving us a better reason to generally lean one way or the other.
This is exactly the kind of thing that I think won’t work, because reality is underpowered.
I forgot to link this earlier, but it turns out that some such research already exists (minus the stipulation that it has to be in democratic countries, but I don’t think this is necessarily a fatal problem; there are key similarities with politics in non-democratic countries). In 2009, Daron Acemoglu (a highly-respected-including-by-EAs academic who studies governance) and some other people wrote a paper [PDF] arguing that the First French Empire created a natural experiment, and examining the results. Scott reviewed it in a follow-up post to his earlier exchange with Weyl. The authors’ conclusion (spoilered because Scott’s post encourages readers to try to predict the results in advance) is that
technocratic-ish policies got better results.
I consider this moderately strong evidence against heuristics in the opposite direction, but very weak evidence in favor of heuristics in the same direction. There are quite a lot of caveats, some of which Scott gets into in the post. One of these is that the broader technocracy-vs.-populism question subsumes a number of other heuristics, which, in real life, we can apply independently of that single-axis variable. (His specific example might be controversial, but I can think of others that are harder to argue with, such as (on the technocratic side) “policies have to be incentive-compatible”, or (on the populist side) “don’t ignore large groups of people when they tell you you’ve missed something”.) Once we do that, the value of a general catch-all heuristic in one direction or the other will presumably be much diminished.
Also, there are really quite a lot of researcher degrees-of-freedom in a project like this, which makes it very hard to have any confidence that the conclusions were caused by the underlying ground truth and not by the authors’ biases. And just on a statistical level, sample sizes are always going to be tiny compared to the size of highly multi-dimensional policyspace.
So that’s why I’m pessimistic about this research program, and think we should just try to figure stuff out on a case-by-case basis instead, without waiting for generally-applicable results to come in.
Since you mentioned it, I should clarify that I have no strong opinion on whether EA should be more technocratic or more populist on the current margin. (Though it’s probably fair to say that I’m basically in favor of the status quo, because arguments against it mostly consist of claims that EA has missed something important and obvious, and I tend to find these unpersuasive. I suppose one could argue this makes me pro-technocracy, if one thought the status quo was highly technocratic.) In any case, my contention is that it’s not a crucial consideration.
First of all, thanks for this post. The previous post on this topic (full disclosure: I haven’t yet managed to read the paper in detail) poisoned the discourse pretty badly by being largely concerned with meta-debate and by throwing out associations between the authors’ dispreferred policy views and various unsavory-sounding concepts. I was worried that this meant nobody would try to address these questions in a constructive manner, and I’m glad someone has.
I also agree that there’s been a bit of unreflectiveness in the adoption of a technocratic-by-default baseline assumption in EA. I was mostly a populist pre-EA and gradually became a technocrat because the people around me who shared my values were technocrats; for the most part, I don’t think this was attributable to anyone convincing me that my previous viewpoint was wrong. (By contrast, while social effects/frog-boiling were probably important in eroding my resistance to adopting EA views on AI safety, the reason I was thinking about adopting such views in the first place was that I read arguments for them that I couldn’t refute.) I’m guessing this has happened to other people too. This is probably worrying, and I don’t think it’s necessarily applicable to just this issue.
That said, I didn’t know what to actually do about any of this, and after reading this post, I still don’t. I think my biggest disagreement is that I don’t think the concept of “technocracy” is actually very helpful, even if it’s pointing at a real cluster of things.
I’m reading you as advocating that your four key questions be treated as crucial considerations for EA. I don’t think this is going to work, because these questions do not actually have general answers. Reality is underpowered. Social science is nowhere near being capable of providing fully-general answers to questions this huge. I don’t think it’s even capable of providing good heuristics, because this kind of question is what’s left after all known-good heuristics have already been taken into account; that’s why it keeps coming up again and again. There is just no avoiding addressing these questions on a case-by-case basis for each individual policy that comes up.
One might argue that the concept of “technocracy” is nevertheless useful for reminding people that they need to actually consider this vague cluster of potential risks and downsides when formulating or making the case for a policy, instead of just forgetting about them. My objection here is that, as far as I can tell, EAs already do this. (To give just one example, Eliezer Yudkowsky has explicitly written about moral hazard in AGI development.) If this doesn’t change our minds, it’s because we think all the alternatives are worse even after accounting for these risks. You can make an argument that we got the assessment wrong, but again, I think it has to be grounded in specifics.
If we don’t routinely use the word “technocracy”, then maybe that’s just because the word tends to mean a lot of different things to a lot of different people; you’ve adopted a particular convention in this post, but it’s far from universal. Even if the meanings are related, they’re not precise, and EAs value precision in writing. Routinely describing proposed policies as “populist” or “technocratic” seems likely to result in frequent misunderstandings.
Finally, since it sounds like there are concerns about a lack of existing writing in the EAsphere about these questions, I’d like to link some good pieces:
Scott Alexander’s back-and-forth with Glen Weyl (part 1, part 2; don’t miss Scott’s response in the comments, and I think Weyl said further things on Twitter although I don’t have links). Uses the word “technocracy”, and is probably the most widely-read explicit discussion of technocracy-vs.-populism in the EAsphere. I think that Scott, at least, cannot reasonably be accused of never having thought about this.
Scott’s review of Rob Reich’s book Just Giving. Doesn’t use the word “technocracy”, but gets into similar issues, and presumably Reich’s perspective in the book comes from many of the same concerns that drove this piece, which I think is what Peter Singer was responding to in the EA Handbook post that you linked. Builds on the earlier post “Against Against Billionaire Philanthropy” (see also highlights from the comments).
“Against Multilateralism”, by Sarah Constantin. Maybe the EAsphere post that most explicitly lays out the case for something-like-populism (though ultimately not siding with it). Argues with Weyl again, though it actually predates his engagement with Scott and EA. Ends with some promising directions that, if further explored, could maybe be our best hope currently available of making general progress on this class of questions (though I still don’t think they rise to the level of crucial considerations).
This (often framed as being about the hard problem of consciousness) has long been a topic of argument in the rationalsphere. What I’ve observed is that some people have a strong intuition that they have a particular continuous subjective experience that constitutes what they think of as being “them”, and other people don’t. I don’t think this is because the people in the former group haven’t thought about it. As far as I can tell, very little progress has been made by either camp in converting the other to its preferred viewpoint, because the intuitions remain even after the arguments have been made.
I think SpaceX’s regular non-Mars-colonization activities are in fact taken seriously by relevant governments, and the Mars colonization stuff seems like it probably won’t happen and also wouldn’t be that big a deal if it did (in terms of, like, national security; it would definitely affect who gets into the history books). So it doesn’t seem to me like governments are necessarily acting irrationally there.
Same with cryptocurrency; its implications for investor protection, tax evasion, capital controls evasion, and facilitating illicit transactions are indeed taken seriously, and while governments would obviously care quite a lot if it displaced fiat currency, I just don’t think there’s any way that’s happening. If it ever does, it will probably be because fiat currency itself has somehow stopped working and something was needed to fill the void; if governments think this scenario is at all plausible, then presumably their attention would be on the first part, where fiat currency fails, since that’s much more within their control and cryptocurrency isn’t really a relevant input.
The scientific and regulatory culture around fusion power seems to be shaped, as you suggest, by the long history of failures in that domain; judging by similar situations in other fields, I wouldn’t be surprised if no one wanted to admit to putting any credence in it, so that they wouldn’t look stupid in case it fails again.
The state of pandemic preparedness does indeed seem like just straight-up government incompetence.
As far as I’m aware, the first person to explicitly address the question “why are literary utopias consistently places you wouldn’t actually want to live?” was George Orwell, in “Why Socialists Don’t Believe in Fun”. I consider this important prior art for anyone looking at this question.
EAsphere readers may also be familiar with the Fun Theory Sequence, which Orwell was an important influence on.
On a related note, I get the impression that utopianism was not as outright intellectually discredited and unfashionable when Orwell wrote as it is today (e.g., the above essay predates Walden Two), even though most of the problems given in this piece were clearly already present and visible at that time. That seems like it does have something to do with the events of the 20th century, and their effects on the intellectual climate.
A number of people invited me to 1:1s to ask me for career advice in my field, which is software engineering. Mostly of the “how do I get hired” kind rather than the “how do I pick a career path that’s most in line with EA strategic priorities” kind that 80,000 Hours specializes in. Unfortunately I’m not very good at giving this kind of advice (I haven’t looked for a new job in more than eight years) and haven’t been able to find anywhere more helpful to send people. I think there used to be an affinity group or something for EA software engineers, but I don’t think it’s active anymore.
Anyone know of anything like this? If not, and if you’re the kind of person who’s well-positioned to start a group like this, consider this a request for one.