“I was mostly a populist pre-EA, gradually became a technocrat because the people around me who shared my values were technocrats.” Same!
You’re correct in reading my post as “technocracy vs populism is a crucial consideration”.
I think social science is unlikely to offer us a good, general answer to technocracy vs populism, but I think it can offer a better answer than we currently have, because we have mostly skipped attempting a scientific approach to the question, yet have nonetheless accepted ‘more technocracy than the status quo’ as the answer.
Also, I am confident that social science can offer us useful heuristics for when we look at specific cases.
For example, Scott’s article (thank you for linking it) looks at some positive examples of historical policy changes that (he claims) were mostly technocratic.
I think someone should research policy changes in democratic countries which counterfactually led to the world getting a lot better or worse (under a range of different moral theories, and under public opinion), and the extent to which these changes were technocratic or populist. This would be useful to establish the track records of technocracy and populism, giving us a better reason to generally lean one way or the other.
We could also look specifically at how public opinion and expert opinion may have differed at the time as policymakers approached these decisions, to work out if more technocracy or more populism has a better track record under the conditions of a large disagreement between public and expert opinion.
Also, based on the pros and cons of technocracy and populism that I outlined, it seems fairly clear to me that more populism is preferable under higher uncertainty, and more technocracy is preferable when plausible policy options have a greater range of expected values.
I think part of what makes existential risk studies so difficult is that these heuristics don’t help, because existential risk studies involves both extremely high uncertainty and plausible policy options with an extremely large range of expected values.
Possibly, these situations most suit a ‘third’ approach, where experts lobby the public rather than policymakers directly. If this is successful, public and expert opinion could become very similar, and the more similar they are, the more similar technocratic and populist approaches become, meaning that striking the right balance between them matters considerably less. (This would mean that Nick Bostrom and Toby Ord were way ahead of me by publishing Superintelligence and The Precipice).
I like that EA actively thinks about the risks associated with moral uncertainty, but I am not convinced that there is much thinking amongst EAs about experts misusing technocracy by focusing on their own interests, and I don’t think there has been much thinking about whether equally distributed political power should or should not be an end in itself. I also think that EAs haven’t sufficiently considered populism as a tool to deal with moral uncertainty. (I think the focus of moral uncertainty has generally been on experts themselves trying to account for various moral theories when forming opinions).
Also, to clarify, I am not arguing against more technocracy. I think it’s entirely reasonable for EAs to conclude that more technocracy is better than the status quo even after considering the risks, but I think it’s important for this conclusion to be reached in a scientific, rational, systematic way, through evidence and careful reasoning. Currently, I don’t think this is generally the case, even if EAs do think about moral uncertainty, for the reasons that I outlined in the paragraph above this one.
I agree that the word ‘populism’ is very prone to misunderstandings, but I think the term ‘technocracy’ is acceptably precise. While precision is important, I think we should balance this against the benefits of using more common words, which make it easier for the reader to make connections with other arguments in favour of or against a concept.
“I also think that EAs haven’t sufficiently considered populism as a tool to deal with moral uncertainty.”
I agree that there hasn’t been much systematic study of this question (at least not that I’m aware of), and maybe there should be. That being said, I’m deeply skeptical that it’s a good idea, and I think most other EAs who’ve considered it are too, which is why you don’t hear it proposed very often.
Some reasons for this include:
The public routinely endorses policies or principles that are nonsensical or would obviously result in terrible outcomes. Examples include Philip Tetlock’s research on taboo tradeoffs [PDF], and this poll from Reuters (h/t Matt Yglesias): “Nearly 70 percent of Americans, including a majority of Republicans, want the United States to take ‘aggressive’ action to combat climate change—but only a third would support an extra tax of $100 a year to help.”
You kind of can’t ask the public what they think about complicated questions; they’re very diverse and there’s a lot of inferential distance. You can do things like polls, but they’re often only proxies for what you really want to know, and pollster degrees-of-freedom can cause the results to be biased.
When EAs look back on history and ask ourselves what we would/should have done if we’d been around then—particularly on questions (like whether slavery is good or bad) whose morally correct answers are no longer disputed—it seems that we would/should have sided with technocrats over populists much more often than the reverse. A commonly-cited example is William Wilberforce, largely responsible for the abolition of slavery in the British Empire. Admittedly, I’d like to see some attempt to check how representative this is (though I don’t expect that question to be answerable comprehensively).
I agree that populism as a tool for dealing with moral uncertainty has obvious weaknesses (thank you for explaining some of these in detail), but in my view the weaknesses are not so large that a systematic exploration of this question would not be worth the time.
I also agree that other EAs viewing these weaknesses as too severe would be a good explanation for why this hasn’t been done yet.
“I agree that the word ‘populism’ is very prone to misunderstandings, but I think the term ‘technocracy’ is acceptably precise. While precision is important, I think we should balance this against the benefits of using more common words, which make it easier for the reader to make connections with other arguments in favour of or against a concept.”
I should clarify: I think the misunderstandings are symptoms of a deeper problem, which is that the concept of “technocracy” is too many different things rolled into one word. This isn’t about jargon vs. non-jargon; substituting a more jargon-y word doesn’t help. (I think this is part of why it’s taken on such negative connotations, because people can easily roll anything they don’t like into it; that’s not itself a strong reason not to use it, but it’s illustrative.)
“Technocracy” works okay-ish in contexts like this thread where we’re all mostly speaking in vague generalities to begin with, but when discussing specific policies or even principles for thinking about policy, “I think this is too technocratic” just isn’t helpful. More specific things like “I think this policy exposes the people executing it to too much moral hazard”, or “I think this policy is too likely to have unknown-unknowns that some other group of people could have warned us about”, are better. Indeed, those are very different concerns and I see no reason to believe that EA-in-general errs the same amount, or even in the same direction, for each of them. (If words like “moral hazard” are too jargon-y then you can just replace them with their plain-English definitions.)
“I think someone should research policy changes in democratic countries which counterfactually led to the world getting a lot better or worse (under a range of different moral theories, and under public opinion), and the extent to which these changes were technocratic or populist. This would be useful to establish the track records of technocracy and populism, giving us a better reason to generally lean one way or the other.”
This is exactly the kind of thing that I think won’t work, because reality is underpowered.
I forgot to link this earlier, but it turns out that some such research already exists (minus the stipulation that it has to be in democratic countries, but I don’t think this is necessarily a fatal problem; there are key similarities with politics in non-democratic countries). In 2009, Daron Acemoglu (a highly-respected-including-by-EAs academic who studies governance) and some other people wrote a paper [PDF] arguing that the First French Empire created a natural experiment, and examining the results. Scott reviewed it in a follow-up post to his earlier exchange with Weyl. The authors’ conclusion (spoilered because Scott’s post encourages readers to try to predict the results in advance) is that
technocratic-ish policies got better results.
I consider this moderately strong evidence against heuristics in the opposite direction, but very weak evidence in favor of heuristics in the same direction. There are quite a lot of caveats, some of which Scott gets into in the post. One of these is that the broader technocracy-vs.-populism question subsumes a number of other heuristics, which, in real life, we can apply independently of that single-axis variable. (His specific example might be controversial, but I can think of others that are harder to argue with, such as (on the technocratic side) “policies have to be incentive-compatible”, or (on the populist side) “don’t ignore large groups of people when they tell you you’ve missed something”.) Once we do that, the value of a general catch-all heuristic in one direction or the other will presumably be much diminished.
Also, there are really quite a lot of researcher degrees-of-freedom in a project like this, which makes it very hard to have any confidence that the conclusions were caused by the underlying ground truth and not by the authors’ biases. And just on a statistical level, sample sizes are always going to be tiny compared to the size of highly multi-dimensional policyspace.
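To make the sample-size point concrete, here is a toy sketch (all numbers are hypothetical, chosen purely for illustration): even if technocracy had a genuinely positive average effect, a handful of noisy natural experiments would struggle to detect it.

```python
# Toy sketch (hypothetical numbers) of the "reality is underpowered" worry:
# suppose history offers only a handful of clean natural experiments, each a
# noisy measurement of the true average effect of technocratic policy.

import math

true_effect = 1.0    # assumed small positive average effect (arbitrary units)
noise_sd = 5.0       # case-to-case variation swamps the effect
n_experiments = 5    # roughly how many clean natural experiments exist

# Standard error of the mean effect estimated from the available experiments:
standard_error = noise_sd / math.sqrt(n_experiments)
print(round(standard_error, 2))  # ~2.24, more than twice the assumed true effect
```

With uncertainty this large relative to the effect being estimated, any conclusion drawn from such a sample is dominated by noise (and by the researchers’ modelling choices) rather than by the underlying ground truth.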
So that’s why I’m pessimistic about this research program, and think we should just try to figure stuff out on a case-by-case basis instead, without waiting for generally-applicable results to come in.
Since you mentioned it, I should clarify that I have no strong opinion on whether EA should be more technocratic or more populist on the current margin. (Though it’s probably fair to say that I’m basically in favor of the status quo, because arguments against it mostly consist of claims that EA has missed something important and obvious, and I tend to find these unpersuasive. I suppose one could argue this makes me pro-technocracy, if one thought the status quo was highly technocratic.) In any case, my contention is that it’s not a crucial consideration.
I think we are disagreeing in a general sense about the usefulness of imprecise and unreliable, but systematically obtained answers to big questions, when trying to answer smaller sub-questions. If we think these answers are less useful, we are less likely to decide that ‘technocracy vs populism in general’ is a crucial consideration. If we think these answers are more useful, we are more likely to decide that ‘technocracy vs populism in general’ is a crucial consideration.
I do agree the conclusion of Acemoglu’s paper (admittedly, it is too long for me to read) is only weak evidence in favour of more technocracy, but if other papers were able to identify more natural experiments and came to similar conclusions, in theory I think that could generate enough evidence for ‘more technocracy’ (or ‘more populism’) to be a sufficiently strong prior / heuristic to be useful when looking at individual cases, which is why I still think ‘technocracy vs populism’ is a crucial consideration.
Update: Having read another comment, it seems likely that, in the context of policymaking, expert opinion mostly replaces other expert opinion. That changes my mind on whether technocracy vs populism is a crucial consideration, since it is only relevant to ‘promoting evidence-based policy’, a very minor EA cause area.
“I am not convinced that there is much thinking amongst EAs about experts misusing technocracy by focusing on their own interests.”
In at least one particular case (AI safety), a somewhat deliberate decision was made to deemphasize this concern, because of a belief not only that it’s not the most important concern, but that focus on it is actively harmful to concerns that are more important.
For example, Eliezer (who pioneered the argument for worrying about accident risk from advanced AI) contends that the founding of OpenAI was an instance of this. In his telling, DeepMind had previously had a quasi-monopoly on capacity to make progress towards transformative AI, because no other well-resourced actors were working seriously on the problem. This allowed them to have a careful culture about safety and to serve as a coordination point, so that all safety-conscious AI researchers around the world could work towards the common goal of not deploying something dangerous. Elon Musk was dissatisfied with the amount of moral hazard that this exposed DeepMind CEO Demis Hassabis to, so he founded a competing organization with the explicit goal of eliminating moral hazard from advanced AI by giving control of it to everyone (as is reflected in their name, though they later pivoted away from this around the time Musk stopped being involved). This forced both organizations to put more emphasis on development speed, lest the other one build transformative AI first and do something bad with it, and encouraged other actors to do likewise by destroying the coordination point. The result is a race to the precipice [PDF], where everyone has to compromise on safety and therefore accident risk is dramatically more likely.
More generally, politics is fun to argue about and people like to look for villains, so there’s a risk that emphasis on person-vs.-person conflicts sucks up all the oxygen and accident risk doesn’t get addressed. This is applicable more broadly than just AI safety, and is at least an argument for being careful about certain flavors of discourse.
One prominent dissenter from this consensus is Andrew Critch from CHAI; you can read the comments on his post for some thoughtful argument among EAs working on AI safety about this question.
I’m not sure what to think about other kinds of policies that EA cares about; I can’t think of very many off the top of my head that have large amounts of the kind of moral hazard that advanced AI has. This seems to me like another kind of question that has to be answered on a case-by-case basis.
“I don’t think there has been much thinking about whether equally distributed political power should or should not be an end in itself.”
On the current margin, that’s not really the question; the question is whether it’s an end-in-itself whose weight in the consequentialist calculus should be high enough to overcome other considerations. I don’t feel any qualms about adopting “no” as a working answer to that question. I do think I value this to some extent, and I think it’s right and good for that to affect my views on rich-country policies where the stakes are relatively low, but in the presence of (actual or expected future) mass death or torture, as is the case in the cause areas EA prioritizes, I think these considerations have to give way. It’s not impossible that something could change my mind about this, but I don’t think it’s likely enough that I want to wait for further evidence before going out and doing things.
Of course, there are a bunch of ways that unequally distributed political power could cause problems big enough that EAs ought to worry about them, but now you’re no longer talking about it as an end-in-itself, but rather as a means to some other outcome.
“It seems fairly clear to me that more populism is preferable under higher uncertainty, and more technocracy is preferable when plausible policy options have a greater range of expected values.”
I’m sorry, I don’t understand what the difference is between those things.
I think examples and better wording might help:
With overseas aid budgets, the set of plausible policy options, such as decreasing or increasing the budget by different amounts, has a large range of expected values, and the uncertainty surrounding the expected value of each policy option is low. For this, I think more technocratic approaches are preferable.
With income tax rates, the set of plausible policy options, such as decreasing or increasing income tax rates by different amounts, has a smaller range of expected values, and the uncertainty surrounding the expected value of each policy option is high. For this, I think more populist approaches are preferable.
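The distinction between the two cases can be made concrete with a toy calculation (all numbers are hypothetical, purely for illustration): compare the spread of expected values across the options with the typical uncertainty around any single option.

```python
# Toy illustration (hypothetical numbers) of the distinction drawn above:
# each policy option is a pair (expected value, uncertainty as a std. dev.),
# in arbitrary welfare units.

def spread_vs_noise(options):
    """Compare the range of expected values across options
    with the typical uncertainty around each single option."""
    means = [m for m, _ in options]
    stds = [s for _, s in options]
    ev_range = max(means) - min(means)
    typical_noise = sum(stds) / len(stds)
    return ev_range, typical_noise

# Aid budget: options differ a lot in expected value; each estimate is tight.
aid_options = [(-10, 1), (0, 1), (25, 1), (40, 1)]

# Income tax: options differ little in expected value; each estimate is noisy.
tax_options = [(-2, 15), (0, 15), (1, 15), (3, 15)]

for name, opts in [("aid", aid_options), ("tax", tax_options)]:
    ev_range, noise = spread_vs_noise(opts)
    print(name, "EV range:", ev_range, "typical uncertainty:", noise)
    # When the EV range dwarfs the uncertainty, expert rankings of the
    # options carry real information (favouring technocracy); when the
    # uncertainty dwarfs the EV range, expert rankings are mostly noise
    # (weakening the case for overriding public opinion).
```

In the aid case the expected values span far more than the per-option uncertainty, so choosing by expert estimate matters; in the tax case the ordering of options is essentially lost in the noise.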