I think this leaves an important open question: what should the norm be if someone thinks someone else is not merely being less-than-maximally effective, but is actually doing harm?
Taymon’s Quick takes
The basic premise of this post: It’s better to solve 0.00001% of a $7 billion problem than to solve 100% of a $500 problem. (One could quibble with various oversimplifications that this formulation makes for the sake of pithiness, but the basic point is uncontroversial within EA.)
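To spell out the arithmetic behind that comparison (a quick sanity check of my own, not part of the original post; the dollar figures are just the ones quoted above):

```python
big_problem = 7_000_000_000      # the $7 billion problem
small_problem = 500              # the $500 problem

partial_fix = big_problem * (0.00001 / 100)   # solving 0.00001% of the big problem
full_fix = small_problem * 1.00               # solving 100% of the small problem

print(partial_fix, full_fix)   # 700.0 500.0 -- the tiny slice of the big problem is still worth more
```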
The key question: If this point is both true and obvious, why do so many people outside EA not buy it, and why do so many people within EA harbor inner doubts or feelings of failure when acting in accordance with it?
We should ask ourselves this not only to better inspire and motivate each other, or to better persuade outsiders, but also because it’s possible that this phenomenon is a hint that we’ve missed some important consideration.
I think the point about Aristides de Sousa Mendes is a bit of a red herring.
It seems like more-or-less a historical accident that Sousa Mendes is more obscure than, e.g., Oskar Schindler. Even so, he’s fairly well-known, and has pretty definitively gone down in history as a great hero. I don’t think “but he only solved 0.1% of a six-million-life problem” is an objection that anyone actually has. Saving 10,000 lives impresses people, and it doesn’t seem to impress them less just because another six million people were still going on dying in the background.
(The main counterargument that I can think of is the findings of the heuristics-and-biases literature on scope neglect, e.g., Daniel Kahneman’s experiment asking people to donate to save oil-soaked birds. I think that this kind of situation is a little different; here, you’re not appealing to something that people already care about, you’re producing a new problem out of thin air and asking people to quickly figure out how to fit it into their existing priorities. I think it makes sense that this setup doesn’t elicit careful thought about prioritization, since that’s hard, and instead people fall back on scope-insensitive heuristics. But this is a very rough argument and possibly there’s more literature here that I should be reading.)
When people are skeptical, either vocally or internally in the backs of their minds, of the efficacy of donating $500 to the Rapid Response Fund, I don’t think it’s because they think the effects will be analogous to what Sousa Mendes did but just not good enough. I think it’s because they suspect that the effects won’t be analogous to what Sousa Mendes did.
In a post about a different topic (behavioral economics), Scott Alexander writes:
1% of a small number isn’t worth it! 1% of a big number is very worth it, especially if that big number is a number of lives!
A few caveats. First, a small number only matters if it’s real. It’s very easy to get spurious small effects, so much so that any time you see a small effect you should wonder if it’s real.
I think people are worried about something like this, and I think it’s not unreasonable for them to worry.
I once observed an argument on a work mailing list with someone who was skeptical of EA. The core of this person’s disagreement with us is that they think we’ve underestimated the insidiousness of the winner’s curse. From this perspective, GiveWell’s top charity selection process doesn’t identify the best interventions—it identifies the interventions whose proponents are most willing to engage in p-hacking. Therefore, you should instead support local charities that you have personally volunteered for and whose beneficiaries you have personally met—not because of some moral-philosophical idea of greater obligations to people near you, but because this is the only kind of charity that you can know is doing any good at all.
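As a toy illustration of the mechanism this person was pointing at (my own sketch, with made-up numbers; it is not their argument, and it says nothing about GiveWell's actual process): if many interventions are equally effective but each is evaluated with noise, the one with the best measured result will systematically look better than it really is.

```python
import random

random.seed(0)

n_interventions = 100    # candidate interventions, all equally good in reality
true_effect = 1.0        # the true effect size of every intervention
noise_sd = 0.5           # noise in each intervention's measured effect

trials = 10_000
overestimate = 0.0
for _ in range(trials):
    measured = [true_effect + random.gauss(0, noise_sd) for _ in range(n_interventions)]
    overestimate += max(measured) - true_effect   # how much the "winner" overstates its true effect

print(overestimate / trials)   # roughly 1.25: the apparent top intervention looks ~2.25x
                               # as effective as it actually is, purely from selecting on noise
```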
GiveWell in particular is careful enough that I don’t worry too much that they’ve fallen into this trap. But ACE, in its earlier years, infamously recommended interventions whose efficacy turned out to have been massively overestimated. I suspect that this is also true of some interventions that are still getting significant attention and resources from within EA, even if I can’t confidently say which ones.
And then of course there’s the fact that big problems are complicated, and the argument for why any particular intervention is effective typically has a lot of steps and/or requires you to trust various institutions whose inner workings you don’t understand that well. All this adds up to a sense that small donations toward a big problem just wind up getting thrown into a black hole, with no one really helped.
This, I think, is the real challenge that EA needs to overcome: not the small size of our best efforts relative to the scope of the problem, but skepticism, implicit or explicit, loud or quiet, justified or unjustified, that our best efforts produce real results at all.
Cross-posting my comment from Substack:
When an industry is fragmented, it will struggle to coordinate lobbying efforts against legislative welfare reforms.
How sure are we that this is the case? Matt Yglesias argues:
[Reader question]: “I just bought a used car from a dealership and it was a pretty miserable experience. Why do you think there hasn’t been more of a push to allow car manufacturers to sell directly to consumers? It doesn’t seem like the car dealership lobby could be thaaat strong (especially compared to home owners associations or other areas of regulatory capture)”
No, it really is that strong.
There’s a very widespread misperception that the biggest companies have the most clout in politics, when actually highly fragmented industries like auto dealers have more clout as a collective. Just a small example is that when congress was putting the Dodd-Frank financial regulation overhaul together, Elizabeth Warren rolled the entire financial services industry and got her Consumer Financial Protection Bureau created. But to round up the votes in congress, she had to swallow an exemption from CFPB oversight for auto loans because the car dealerships had the clout to demand that.
The key to dealership strength is that there’s a dealership owner (or several) in every district, and they are rooted in the local community — often involved in sponsoring sports teams, visible on local television news, and generally playing a major role as a local influencer. People feel sentimental about local businesses. Republicans like free markets but they love businessmen, so if businessmen want to back an anti-market policy, Republicans are inclined to agree. Democrats are more skeptical of businessmen but less enthusiastic about markets, so it lands in the same place.
Maybe the political economy around mid-size factory farms is different from that around car dealerships, such that these dynamics don’t apply or apply differently. But I would want to better understand the differences. (Is it just that factory farms don’t sell directly to consumers? But my hometown had a membrane filter manufacturing plant when I was growing up, and I think it was similarly locally influential.)
A number of people invited me to 1:1s to ask me for career advice in my field, which is software engineering. Mostly of the “how do I get hired” kind rather than the “how do I pick a career path that’s most in line with EA strategic priorities” kind that 80,000 Hours specializes in. Unfortunately I’m not very good at this kind of advice (I haven’t looked for a new job in more than eight years) and haven’t been able to find anywhere else I could send people to that would be more helpful. I think there used to be an affinity group or something for EA software engineers, but I don’t think it’s active anymore.
Anyone know of anything like this? If not, and if you’re the kind of person who’s well-positioned to start a group like this, consider this a request for one.
Honestly it seems kind of weird that on the EA Forum there isn’t just a checkbox for this.
I’ve often thought that there should be separate “phatic” and “substantive” comment sections.
What does “IMPCO” mean? Search engines are failing me.
The Fun Theory Sequence (which is on a similar topic) had some things to say about the Culture.
Obligatory link to Scott Alexander’s “Ambijectivity” regarding the contentiousness of defining great art.
In the last paragraph, did you mean to write “the uncertainty surrounding the expected value of each policy option is high”?
While true, I think most proposed EA policy projects are much too small in scope to be able to move the needle on trust, and so need to take the currently-existing level of trust as a given.
I agree that the word ‘populism’ is very prone to misunderstandings, but I think the term ‘technocracy’ is acceptably precise. While precision is important, I think we should balance this against the benefits of using more common words, which make it easier for the reader to make connections with other arguments in favour of or against a concept.
I should clarify: I think the misunderstandings are symptoms of a deeper problem, which is that the concept of “technocracy” rolls too many different things into one word. This isn’t about jargon vs. non-jargon; substituting a more jargon-y word doesn’t help. (I think this is part of why it’s taken on such negative connotations: people can easily roll anything they don’t like into it. That’s not itself a strong reason not to use the word, but it’s illustrative.)
“Technocracy” works okay-ish in contexts like this thread where we’re all mostly speaking in vague generalities to begin with, but when discussing specific policies or even principles for thinking about policy, “I think this is too technocratic” just isn’t helpful. More specific things like “I think this policy exposes the people executing it to too much moral hazard”, or “I think this policy is too likely to have unknown-unknowns that some other group of people could have warned us about”, are better. Indeed, those are very different concerns and I see no reason to believe that EA-in-general errs the same amount, or even in the same direction, for each of them. (If words like “moral hazard” are too jargon-y then you can just replace them with their plain-English definitions.)
I also think that EAs haven’t sufficiently considered populism as a tool to deal with moral uncertainty.
I agree that there hasn’t been much systematic study of this question (at least not that I’m aware of), and maybe there should be. That being said, I’m deeply skeptical that it’s a good idea, and I think most other EAs who’ve considered it are too, which is why you don’t hear it proposed very often.
Some reasons for this include:
The public routinely endorses policies or principles that are nonsensical or would obviously result in terrible outcomes. Examples include Philip Tetlock’s research on taboo tradeoffs [PDF], and this poll from Reuters (h/t Matt Yglesias): “Nearly 70 percent of Americans, including a majority of Republicans, want the United States to take ‘aggressive’ action to combat climate change—but only a third would support an extra tax of $100 a year to help.”
You kind of can’t ask the public what they think about complicated questions; they’re very diverse and there’s a lot of inferential distance. You can do things like polls, but they’re often only proxies for what you really want to know, and pollster degrees-of-freedom can cause the results to be biased.
When EAs look back on history, and ask ourselves what we would/should have done if we’d been around then—particularly on questions (like whether slavery is good or bad) whose morally correct answers are no longer disputed—it looks like we would/should have sided with technocrats over populists, much more often than the reverse. A commonly-cited example is William Wilberforce, who was largely responsible for the abolition of slavery in the British Empire. Admittedly, I’d like to see some attempt to check how representative this is (though I don’t expect that question to be answerable comprehensively).
I am not convinced that there is much thinking amongst EAs about experts misusing technocracy by focusing on their own interests
In at least one particular case (AI safety), a somewhat deliberate decision was made to deemphasize this concern, because of a belief not only that it’s not the most important concern, but that focus on it is actively harmful to concerns that are more important.
For example, Eliezer (who pioneered the argument for worrying about accident risk from advanced AI) contends that the founding of OpenAI was an instance of this. In his telling, DeepMind had previously had a quasi-monopoly on capacity to make progress towards transformative AI, because no other well-resourced actors were working seriously on the problem. This allowed them to have a careful culture about safety and to serve as a coordination point, so that all safety-conscious AI researchers around the world could work towards the common goal of not deploying something dangerous. Elon Musk was dissatisfied with the amount of moral hazard that this exposed DeepMind CEO Demis Hassabis to, so he founded a competing organization with the explicit goal of eliminating moral hazard from advanced AI by giving control of it to everyone (as is reflected in their name, though they later pivoted away from this around the time Musk stopped being involved). This forced both organizations to put more emphasis on development speed, lest the other one build transformative AI first and do something bad with it, and encouraged other actors to do likewise by destroying the coordination point. The result is a race to the precipice [PDF], where everyone has to compromise on safety and therefore accident risk is dramatically more likely.
More generally, politics is fun to argue about and people like to look for villains, so there’s a risk that emphasis on person-vs.-person conflicts sucks up all the oxygen and accident risk doesn’t get addressed. This is applicable more broadly than just AI safety, and is at least an argument for being careful about certain flavors of discourse.
One prominent dissenter from this consensus is Andrew Critch from CHAI; you can read the comments on his post for some thoughtful argument among EAs working on AI safety about this question.
I’m not sure what to think about other kinds of policies that EA cares about; I can’t think of very many off the top of my head that have large amounts of the kind of moral hazard that advanced AI has. This seems to me like another kind of question that has to be answered on a case-by-case basis.
I don’t think there has been much thinking about whether equally distributed political power should or should not be an end in itself.
On the current margin, that’s not really the question; the question is whether it’s an end-in-itself whose weight in the consequentialist calculus should be high enough to overcome other considerations. I don’t feel any qualms about adopting “no” as a working answer to that question. I do think I value this to some extent, and I think it’s right and good for that to affect my views on rich-country policies where the stakes are relatively low, but in the presence of (actual or expected future) mass death or torture, as is the case in the cause areas EA prioritizes, I think these considerations have to give way. It’s not impossible that something could change my mind about this, but I don’t think it’s likely enough that I want to wait for further evidence before going out and doing things.
Of course, there are a bunch of ways that unequally distributed political power could cause problems big enough that EAs ought to worry about them, but now you’re no longer talking about it as an end-in-itself, but rather as a means to some other outcome.
it seems fairly clear to me that more populism is preferable under higher uncertainty, and more technocracy is preferable when plausible policy options have a greater range of expected values.
I’m sorry, I don’t understand what the difference is between those things.
I think someone should research policy changes in democratic countries which counterfactually led to the world getting a lot better or worse (under a range of different moral theories, and under public opinion), and the extent to which these changes were technocratic or populist. This would be useful to establish the track records of technocracy and populism, giving us a better reason to generally lean one way or the other.
This is exactly the kind of thing that I think won’t work, because reality is underpowered.
I forgot to link this earlier, but it turns out that some such research already exists (minus the stipulation that it has to be in democratic countries, but I don’t think this is necessarily a fatal problem; there are key similarities with politics in non-democratic countries). In 2009, Daron Acemoglu (a highly-respected-including-by-EAs academic who studies governance) and some other people wrote a paper [PDF] arguing that the First French Empire created a natural experiment, and examining the results. Scott reviewed it in a follow-up post to his earlier exchange with Weyl. The authors’ conclusion (spoilered because Scott’s post encourages readers to try to predict the results in advance) is that
technocratic-ish policies got better results.
I consider this moderately strong evidence against heuristics in the opposite direction, but very weak evidence in favor of heuristics in the same direction. There are quite a lot of caveats, some of which Scott gets into in the post. One of these is that the broader technocracy-vs.-populism question subsumes a number of other heuristics, which, in real life, we can apply independently of that single-axis variable. (His specific example might be controversial, but I can think of others that are harder to argue with, such as (on the technocratic side) “policies have to be incentive-compatible”, or (on the populist side) “don’t ignore large groups of people when they tell you you’ve missed something”.) Once we do that, the value of a general catch-all heuristic in one direction or the other will presumably be much diminished.
Also, there are really quite a lot of researcher degrees-of-freedom in a project like this, which makes it very hard to have any confidence that the conclusions were caused by the underlying ground truth and not by the authors’ biases. And just on a statistical level, sample sizes are always going to be tiny compared to the size of highly multi-dimensional policyspace.
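As a toy illustration of that last statistical point (my own sketch with made-up numbers, not anything from the Acemoglu paper): with only a handful of historical cases and many plausible ways to code them, some coding will correlate strongly with outcomes by pure chance.

```python
import random

random.seed(0)

n_cases = 20        # historical policy episodes we can actually observe
n_dimensions = 200  # candidate ways a researcher might code each episode

outcomes = [random.gauss(0, 1) for _ in range(n_cases)]  # pure noise: no real relationships exist

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

best = max(
    abs(corr([random.gauss(0, 1) for _ in range(n_cases)], outcomes))
    for _ in range(n_dimensions)
)
print(best)  # typically around 0.6-0.7: a "strong" correlation, despite the data being pure noise
```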
So that’s why I’m pessimistic about this research program, and think we should just try to figure stuff out on a case-by-case basis instead, without waiting for generally-applicable results to come in.
Since you mentioned it, I should clarify that I have no strong opinion on whether EA should be more technocratic or more populist on the current margin. (Though it’s probably fair to say that I’m basically in favor of the status quo, because arguments against it mostly consist of claims that EA has missed something important and obvious, and I tend to find these unpersuasive. I suppose one could argue this makes me pro-technocracy, if one thought the status quo was highly technocratic.) In any case, my contention is that it’s not a crucial consideration.
First of all, thanks for this post. The previous post on this topic (full disclosure: I haven’t yet managed to read the paper in detail) poisoned the discourse pretty badly by being largely concerned with meta-debate and by throwing out associations between the authors’ dispreferred policy views and various unsavory-sounding concepts. I was worried that this meant nobody would try to address these questions in a constructive manner, and I’m glad someone has.
I also agree that there’s been a bit of unreflectiveness in the adoption of a technocratic-by-default baseline assumption in EA. I was mostly a populist pre-EA, gradually became a technocrat because the people around me who shared my values were technocrats, and I don’t think this was attributable to anyone convincing me that my previous viewpoint was wrong, for the most part. (By contrast, while social effects/frog-boiling were probably important in eroding my resistance to adopting EA views on AI safety, the reason I was thinking about adopting such views in the first place was because I read arguments for them that I couldn’t refute.) I’m guessing this has happened to other people too. This is probably worrying and I don’t think it’s necessarily applicable to just this issue.
That said, I didn’t know what to actually do about any of this, and after reading this post, I still don’t. I think my biggest disagreement is that I don’t think the concept of “technocracy” is actually very helpful, even if it’s pointing at a real cluster of things.
I’m reading you as advocating that your four key questions be treated as crucial considerations for EA. I don’t think this is going to work, because these questions do not actually have general answers. Reality is underpowered. Social science is nowhere near being capable of providing fully-general answers to questions this huge. I don’t think it’s even capable of providing good heuristics, because this kind of question is what’s left after all known-good heuristics have already been taken into account; that’s why it keeps coming up again and again. There is just no avoiding addressing these questions on a case-by-case basis for each individual policy that comes up.
One might argue that the concept of “technocracy” is nevertheless useful for reminding people that they need to actually consider this vague cluster of potential risks and downsides when formulating or making the case for a policy, instead of just forgetting about them. My objection here is that, as far as I can tell, EAs already do this. (To give just one example, Eliezer Yudkowsky has explicitly written about moral hazard in AGI development.) If this doesn’t change our minds, it’s because we think all the alternatives are worse even after accounting for these risks. You can make an argument that we got the assessment wrong, but again, I think it has to be grounded in specifics.
If we don’t routinely use the word “technocracy”, then maybe that’s just because the word tends to mean a lot of different things to a lot of different people; you’ve adopted a particular convention in this post, but it’s far from universal. Even if the meanings are related, they’re not precise, and EAs value precision in writing. Routinely describing proposed policies as “populist” or “technocratic” seems likely to result in frequent misunderstandings.
Finally, since it sounds like there are concerns about lack of existing writing in the EAsphere about these questions, I’d like to link some good ones:
Scott Alexander’s back-and-forth with Glen Weyl (part 1, part 2; don’t miss Scott’s response in the comments, and I think Weyl said further things on Twitter although I don’t have links). Uses the word “technocracy”, and is probably the most widely-read explicit discussion of technocracy-vs.-populism in the EAsphere. I think that Scott, at least, cannot reasonably be accused of never having thought about this.
Scott’s review of Rob Reich’s book Just Giving. Doesn’t use the word “technocracy”, but gets into similar issues, and presumably Reich’s perspective in the book comes from many of the same concerns that drove this piece, which I think is what Peter Singer was responding to in the EA Handbook post that you linked. Builds on the earlier post “Against Against Billionaire Philanthropy” (see also highlights from the comments).
“Against Multilateralism”, by Sarah Constantin. Maybe the EAsphere post that most explicitly lays out the case for something-like-populism (though ultimately not siding with it). Argues with Weyl again, though it actually predates his engagement with Scott and EA. Ends with some promising directions that, if further explored, could maybe be our best hope currently available of making general progress on this class of questions (though I still don’t think they rise to the level of crucial considerations).
I’m starting to think that the EA Global meetup format might not be optimal. At the very least, I didn’t get as much out of it this year as I was hoping to, and the same thing happened last year, and I suspect others might have been in the same position. (At one meetup, others I talked to expressed frustrations similar to my own.) Here are some thoughts on why, and how it might be improved.
For context: Meetups are the most frequent type of event on the EA Global schedule other than talks. There are meetups for people working in a particular cause area (e.g., nuclear risk, digital sentience, or community building) or with a particular skillset (e.g., earning to give, operations, or communications). There are also affinity-group meetups (e.g., for particular demographic minority groups), but I didn’t attend any of those and the theory of what they’re for seems different, so here I’m primarily talking about the cause area and skillset ones.
The way the meetups I attended worked was this: There were a bunch of tables in a room, and people sat around them and were encouraged to have conversations. Every ten to fifteen minutes, the organizer would ring a bell and tell people to stop their current conversations and switch to a different table with different people. At some meetups, the tables had cards indicating a particular conversation topic; at others, the organizer would offer a discussion prompt for the whole room at the start of each round; at others, it was entirely freeform.
(There were also meetups that had a different, “speed meeting” format; I didn’t attend any of these, but my understanding is that you’d have a series of time-limited one-on-one conversations with a random person, instead of group conversations. Interestingly, the meetups that used this format were the ones for the largest cause areas (e.g., global poverty, animal welfare, or AI safety); I’m not sure if there was a specific reason for that or if it was just a coincidence.)
I’m of course not in a position to know what the intended outcomes of this format are, but I have a vague guess. CEA’s events team has indicated fairly consistently that they think that most of the value of EA Global comes from one-on-one meetings; attendees are strongly encouraged to schedule a lot of these. A ten-minute conversation is just long enough to establish some amount of common interest and exchange contact information (or just names, given the availability of Swapcard), and so the meetup format could serve as a way of generating 1:1 connections. The “speed meeting” format probably does this more effectively in any given case, but with fewer people, so maybe the group-discussion format maximizes surface area for 1:1 connections.
I personally didn’t find this, or more generally a 1:1-centric approach to EA Global, especially useful, for two reasons:
I think it works better if everyone involved is a social butterfly who likes doing open-ended networking. This does not describe me; although I don’t get a lot of 1:1 invitations anymore, I did in earlier years when I was more actively involved in EA community building, and while I had a good experience when the other person had a specific agenda, I found the ones without one unpleasant, stressful, and draining. I don’t know exactly how common this kind of experience is, but my suspicion is that it’s probably not all that rare. I’ll also observe that the advice around 1:1s has softened in recent years; although it’s still encouraged to schedule a lot of them, it’s also now more encouraged to have a specific purpose for each one, and it’s more explicitly stated that it’s okay to turn down 1:1s.
There exist situations where a group conversation is more directly useful than a 1:1. In my case, for example, I was hoping to get a better general sense of the state of EA community building, what people’s theories of change are, and what kind of work they think should be getting done but isn’t; this is the kind of thing that I’d hope could help me get back into the space after a several-year hiatus. This is a poor fit for 1:1s because I didn’t know who to talk to (just looking up the “EA community building” category on Swapcard didn’t really provide enough specifics to answer this) and also didn’t have a specific ask or agenda. Talking to a bunch of people at once, where any of them can contribute whatever relevant perspective they have, seems more valuable for this use case.
The following ideas could possibly make meetups more conducive to substantive conversations on decision-relevant topics:
Don’t interrupt conversations every ten minutes. If a substantive conversation is happening, it will have just started to gain momentum by then. People should instead be encouraged to circulate tables if they haven’t found a conversation worth sticking with.
Prefer the format where each table is for a different topic or question, so that people can self-sort by what conversations they want to have.
Use discussion questions that are substantive and decision-relevant. Some meetups I attended (like the earning-to-give one) were already doing this, but others were not.
Going further, it might be worth finding out in advance from attendees what kinds of discussion topics are decision-relevant to them. (E.g., in a previous year, there was a community-building meetup where strategy/theory-of-change was not one of the topics; I suspect I wasn’t the only one there who would have found this relevant.) Maybe there’s a way to do this in Swapcard with the question-submission feature or something similar.
One other possibility that occurs to me is that there might be capacity constraints in play; the current format isn’t very demanding of volunteer time, and that might be important since volunteer time is presumably at a premium during the event. I’m not sure what to do about this, but I do think there’s probably someone at the event who’s sufficiently invested in any given meetup that they’d be willing to put in the time to make it go well. Possibly it might be worth recruiting people specifically to run meetups, separately from the regular volunteer pool. (In particular, you’d recruit someone who’s an expert in the relevant cause area or skillset, and is sufficiently interested in doing community-building around it to have some idea of what conversations are useful for people to have.)