Four practices where EAs ought to course-correct

Here are some areas where I’ve felt for a long time that fellow EA community members are making systematic mistakes.

Summary:

1. Don’t worry much about diet change.
2. Be generally cynical or skeptical about AI ethics and safety initiatives that are not closely connected to the core long-run issues of AGI alignment and international cooperation.
3. Worry more about object-level cause prioritization and charity evaluation, and worry less about meta-level methodology.
4. Be more ruthless in promoting Effective Altruism.

Over-emphasis on diet change

EAs seem to place consistently high emphasis on adopting vegan, vegetarian, and reducetarian diets.

However, the benefits of going vegan are equivalent to less than a nickel per day donated to effective charities. Other EAs have raised this point before; the only decent response given at the time was that the estimates for the effectiveness of animal charities were likely over-optimistic. But in the linked post I took the numbers published by ACE in 2019 and scaled them back several times to be conservative, so it would be tough to argue that they are over-optimistic. I also used conservative estimates for climate change charities to offset the climate impacts, and toyed with using climate change charities to offset the animal suffering itself via fungible welfare estimates (I didn't post that part, but it's easy to replicate). In both cases the vegan diet is still only as good as donations of pennies per day, which suggests that there is nothing particularly optimistic about the animal charity ratings; it is simply in the nature of individual consumption decisions to have a tiny impact. And then we have to contend with other effective causes, such as x-risk reduction and global poverty alleviation, possibly being better than animal and climate change charities. So this response is now very difficult to substantiate.
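To make the kind of calculation above concrete, here is a minimal back-of-envelope sketch. The inputs are placeholder values of my own, not the figures from the linked post or from ACE, so substitute your preferred conservative estimates:

```python
# Back-of-envelope comparison of a vegan diet vs. donations to a top animal charity.
# All numbers below are illustrative placeholders, NOT the figures from the linked
# post or from ACE; the point is the structure of the comparison, not the inputs.
animals_spared_per_year_by_one_vegan = 100   # hypothetical: farmed animals not raised due to one person's diet
charity_cost_per_animal_spared = 0.20        # hypothetical: dollars per animal spared via a top animal charity

dollars_per_year_equivalent = animals_spared_per_year_by_one_vegan * charity_cost_per_animal_spared
cents_per_day_equivalent = dollars_per_year_equivalent / 365 * 100

print(f"Going vegan is roughly like donating {cents_per_day_equivalent:.1f} cents/day")
# With these placeholders: 100 * $0.20 = $20/year, about 5.5 cents/day.
# The qualitative conclusion survives large changes to the inputs: individual
# consumption decisions translate into only pennies per day of donations.
```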

The basic absolute merit of veganism is of course not being debated here—it saves a significant number of animals, which is sufficient to prefer that a generic member of society be vegan (given current farming practices at least).

However, the relative impact of other efforts seems to be much, much higher, and this has several implications. First, putting public emphasis on being vegan/vegetarian is a bad choice compared to placing that emphasis on donations (or career changes, etc.). This study suggests that nudges to "turn off the lights" and the like can reduce people's support for a carbon tax, because they feel there is an easier alternative solution for the environment besides legislation. What if a similar effect applies to animal welfare legislation or donations? The effect goes away when people learn just how little impact they are actually having, but such messages are rarely included in veg*n activism, even when EAs are doing it. In addition to a possibly detrimental impact on the political attitudes and donation habits of our audience (committed EAs themselves are almost certainly not so vulnerable to these nudges), there is a risk that it reduces the popular appeal of the EA movement. While veg*nism seems to be significantly more accepted in public discourse now than it was ~10 years ago, it is still quite controversial.

Second, actually being vegan/vegetarian may be a bad choice for someone who is doing productive things with their career and donations. If a veg*n diet is even slightly more expensive, more time-consuming, or less healthy, then for such a person adopting it is a poor trade. Of course, many people have adequately pointed out that veg*n diets need not be more expensive, more time-consuming, or less healthy than omnivorous diets. However, it is substantially more difficult to make them satisfy all three criteria at the same time. As for expense and time consumption, that is really something for people to decide for themselves based on their local food options and habits. As for health:

Small tangent on the healthiness of vegan/​vegetarian diets

I am not a nutritionist, but my admittedly brief look at the opinions of expert and enthusiast nutritionists, and at the studies they cite, suggests to me that the healthiest diet is probably not vegetarian.

First, not all animal products are equal, and the oft-touted pro-veg*n studies overlook these differences. Many of the supposed benefits of veg*n diets seem to come from the exclusion of processed meat, i.e. meat treated with modern preservatives, flavorings, and so on; this point is genuinely backed up by studies, not just anti-artificial sentiment. Good studies looking at the health impacts of unprocessed meat (which, I believe, generally includes ground beef) are rare. I have only found one, a cohort study, and it did find that unprocessed red meat increased mortality, though not as much as processed red meat. Whether unprocessed white meat and fish have detrimental impacts seems like a very open question. And even for red meat, I believe nutritional findings backed by evidence of similar strength have been overturned in the past. Then there are a select few animal foods which seem particularly healthy, like sardines, liver, and marrow, and there is still less reason to believe that they are harmful. Moving on to dairy, it seems that fermented dairy products are significantly superior to nonfermented ones.

Second, vegan diets miss out on creatine, omega-3 fats in their EPA/DHA forms, vitamin D, taurine, and carnosine. Dietary intake of these is not strictly necessary for a basically decent life as far as I know, but being fully healthy (longest working life, highest chance of living to a longevity horizon, best cognitive function) is a different story, and these compounds are variously known or hypothesized to be beneficial. You can of course supplement, but at the cost of extra time and money, and that assumes you remember to supplement. For some people who are simply bad at keeping habits (me, at least), supplementing an important nutrient just isn't a reliable option; I can set my mind to do it, but I predictably fail to keep up with it.

Third, vegan/vegetarian diets reduce your flexibility to make other healthy changes. As an omnivore, it's pretty easy for me to minimize or avoid less healthy foods such as store-bought bread (with its many preservatives, flavorings, etc.) and fortified cereal. As a vegetarian or vegan, this would be significantly more difficult. When I was vegan, and when I was vegetarian, I made it work both times by eating some less-than-healthy foods; otherwise I would have had to spend more time and/or money putting my diet together.

Finally, nutritional science is frankly a terrible mess, not necessarily because of ill motives and practices on the part of researchers (though there is some of that), but because of just how difficult it is to tease apart correlation and causation in this business. There is a lot we don't understand, including chemicals that may play a valuable health role but haven't been properly identified as such. In the absence of clear guidance, then, it is wise to default to eating (a) a wide variety of foods, which is easier when animal products are included, and (b) foods that we evolved to eat, a category that has usually included at least a small amount of meat.

For these reasons, I weakly believe that the healthiest diet includes some meat and/or fish, and believe it more strongly when the person in question is spending only a limited amount of time and money on their diet. Of course, that doesn't mean a typical Western omnivorous diet is superior to a typical Western veg*n diet (it probably isn't).

Too much enthusiasm for AI ethics

The thesis of misaligned AGI risk, developed by researchers like Yudkowsky and Bostrom, has motivated a rather wide range of efforts to establish near-term safety and ethics measures in AI. The idea is that by starting conversations, institutions, and regulatory frameworks now, we will be in a better position to build safe AGI in the future.

There is some value in that idea, but people have taken it too far and willingly signed onto AI ethics issues without a clear benefit for long-run AI safety, or even for near-term AI use in its own right. (I've been guilty of this.) The problem is a lack of good reason to believe that better outcomes are achieved when people put a greater emphasis on AI ethics; most people outside of EA do not engage in robust consequentialist analysis of ethics.

One example is the fact that Google's ethics board was dissolved because of outrage against the inclusion of the conservative Kay Coles James, largely on the basis of her views on gender politics; an EA writing for Vox, Kelsey Piper, mildly fanned the flames by describing (but, commendably, not endorsing) the general outrage while simultaneously taking Google to task for not assigning substantial power to the ethics board. Yet it is not really clear whether a powerful ethics board, especially one composed only of people approved by Google's constituency, is desirable, as I shall argue.

An example of AI ethics boards in action is the report produced by the ethics board at the policing technology company Axon, which recommended against using facial recognition technology on body cams. While it purports to perform a "cost-benefit analysis", and included the participation of Miles Brundage, who is affiliated with the EA community, the recommendation was developed on a wholly rhetorical and intuitive basis, without any quantification or explicit qualitative comparison of costs and benefits. It had a dubious and partisan emphasis on improving the relative power and social status of racial minorities, as opposed to a cleaner emphasis on improving aggregate welfare, and an utterly bizarre omission of the benefit that facial recognition could make it easier to identify suspects and combat crime. My attempts to question two of the authors about some of these problems led nowhere.

EAs have piled onto the worries over "killer robots" without adequate supporting argument. I have seen EAs circulate half-baked fears that suicide drones will make it easy to murder people (just carry a tennis racket, or allow municipalities to track or ban drone flights if they so choose), or to assassinate political leaders (they already sometimes speak behind bullet-resistant plexiglass; this is not a new problem), or to overwhelm defenses (just use turrets with lasers or guns; every measure has a countermeasure). As I argued here, introducing AI into international warfare does not seem bad overall. This point was generally accepted; the remaining quarrel was that AI could facilitate more totalitarian rule, since a government could take domestic actions without the consent of human police or militaries. I think this argument is potentially valid but unresolved; maybe stronger policing is better for countries on balance, and it needs more investigation. These robots will be subject to democratic oversight and approval, not totalitarian command: when unethical police behavior is restrained, it is almost always through public outrage and oversight, not through freethinking police officers disobeying their orders.

For a more extreme hypothesis, Ariel Conn at FLI has voiced the omnipresent Western fear of resurgent ethnic cleansing, citing how easily facial recognition could identify people's ethnicity; but has that ever been the main obstacle to genocide? Moreover, the idea of thoughtless machines dutifully carrying out a campaign of mass murder takes a rather lopsided view of the history of ethnic cleansing and genocide, in which human passions, grievances, limitations, and incompetence have caused or exacerbated death and suffering at least as often as humans in the loop have mitigated it. To be clear, I don't think weaponized AI would make the risks of genocide or ethnic cleansing smaller; there just seems to be no good reason to expect it to make the risks bigger.

On top of all this, few seem to have seriously grappled with the fact that we only have real influence in the West, so producing fewer AI weapons mainly just means fewer AI weapons in the West. You can wish for a potent international treaty, but even if that pans out (history suggests it probably won't), it doesn't change the fact that EAs and other activists are wrong to call for stopping AI weapon development now. And better weapons for the West do mean better global outcomes, especially now that the primary question for Western strategic thinkers is probably not how to expand or even maintain a semblance of Western global hegemony, but how much Western regional security and influence can be kept from falling victim to rising Russian, Chinese, and other challenges. But even when the West was engaging in very dubious wars of global policing (Vietnam, Iraq), it still seems that winning a bad war would have been much better than losing one. Even Trump's recently speculated military adventures in Venezuela and Iran, had they occurred, would have been less bad if they had ended in American victory rather than American defeat. True, there is moral hazard in giving politicians better tools with which to commit to bad policies, but my intuition is that this is unlikely to outright outweigh the benefits of success; it would only partially counterbalance them. (Piper, writing for Vox, did mention improved military capability as a benefit of AI weapons.)

So generally speaking, giving more power to philosophers and activists and regulators to restrict the development and applications of AI doesn’t seem to lead anywhere good in the short or medium run. EA-dominated institutions would be mostly trustworthy to do it well (I hesitate slightly because of FLI’s persistent campaigning against AI weaponry), but an outside institution/​network with a small amount of EA participation (or even worse, no EA participation) is a different story.

The real argument for near-term AI oversight is that it will lead to better systems in the long run. But I am rather skeptical that, in the long run, we will suffer from a dearth of public scrutiny of AI ethics and safety. AI ethics and safety for current systems is not neglected; arguably it is over-emphasized at the expense of liberty and progress. Why think it will be neglected in the future? As AI advances and proliferates, it will likely gain more public attention, and by the time AGI comes around, we may well find ourselves restrained by too much caution and interference from activists and philosophers. Bostrom and Yudkowsky's thesis on AGI misalignment will surely not be so neglected once people see AI on the verge of surpassing humans. Yes, AI progress can be unexpectedly rapid, so there may be some neglect, but there will still be less neglect than there is now. And faster AGI rollout could be preferable, either because AI might reduce global risk, or because Bostrom's 'astronomical waste' argument for great caution at the expense of growth is flawed. I think it likely is flawed, because it relies on the debatable assumptions of (a) existential risks being concentrated in the near/medium-term future and (b) logistic (as opposed to exponential) growth in the value of humanity over time. Tyler Cowen has argued that growth should be treated as comparably important to risk management, and Nick Beckstead raises further doubts about the astronomical waste argument. So even AGI/ASI rollout should arguably follow the status quo or be accelerated, in which case more ethics/safety oversight and regulation on the margin could well be harmful.
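To make assumption (b) concrete, here is a toy formalization of my own (a sketch, not Bostrom's or Beckstead's formalism) showing how much work the growth-shape assumption does:

```latex
% Toy model: let V(t) be the total value humanity has realized by time t,
% and suppose a delay of \Delta t merely shifts the trajectory.
% Over a horizon T, the fraction of value lost to the delay is
\[
  L(\Delta t) \;=\; 1 - \frac{V(T - \Delta t)}{V(T)}.
\]
% Logistic growth (value saturates at V_max long before T):
\[
  V(T-\Delta t) \approx V(T) \approx V_{\max}
  \quad\Longrightarrow\quad L(\Delta t) \approx 0,
\]
% so even a tiny reduction in extinction risk beats any plausible speed-up.
% Unbounded exponential growth at rate g:
\[
  V(t) = V_0\, e^{g t}
  \quad\Longrightarrow\quad
  L(\Delta t) = 1 - e^{-g\,\Delta t} \approx g\,\Delta t,
\]
% so, for example, a 10-year delay at g = 1%/yr forgoes roughly 10% of total
% value, comparable to plausible existential-risk reductions, and the case
% for caution over growth is no longer automatic.
```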

To be sure, international institutions for cooperation on AI and actual alignment research, ahead of time, are both robustly good things where we can reliably expect society to err on the side of doing too little. But the other stuff has minimal or possibly negative value.

Top-heavy emphasis on methodology at the expense of object level progress (edit: OK, few people are actually inhibited by this, not a big deal)

It pains me to see so much effort going into writeups and arguments along the lines of "EA needs more of [my favorite type of research]" or "EA needs to rely less on quantitative expected value estimates" and so on. This is often cheap criticism which leads nowhere except intractable arguments, and it can weaken the reputation of the EA movement. It is reminiscent of the perennial wars between working scientists and philosophers of science, except that where most scientific fields seem to have fifty scientists for every philosopher of science, we seem to have two or three EA researchers for every methodology-of-EA philosopher. That's probably an exaggeration, but you get the point.

EA has made exactly one major methodological step forward since its beginnings: identifying the optimizer's curse about eight years ago, a result which had the benefit of a mathematical proof (see the sketch at the end of this section). I can't think of any meta-level argument since then that has substantially contributed to EA cause prioritization and charity evaluation; I, at least, have not benefited from other such arguments. To be clear, such inquiry is better than nothing. But what's much better is for people to engage in real, object-level arguments about causes and charities. If you think that EA can benefit by paying more attention to, say, psychoanalytic theory, then great! Don't tell us or berate us about it; instead, lead by example, use psychoanalytic theory, and show us what it says about a charity or cause area. If you're right about the value of the methodology, this should be easy for you to do, and then we will see your point and know how to look into the research for more ideas. This is very similar to Noah Smith's argument for the two-paper rule, and it's a much more epistemically and socially healthy way of doing things. Along the way, we also get directly useful information about cause areas that we may be missing. Until then, don't write me off as an ideologue just because I'm not inclined to spend my limited free time struggling through Deleuze and Guattari.
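For readers unfamiliar with the optimizer's curse, here is a minimal simulation sketch of my own; the numbers are illustrative placeholders, not anything from the original paper:

```python
# Optimizer's curse in miniature: when we pick the charity with the highest
# *estimated* cost-effectiveness, the winner's estimate is biased upward,
# even though each individual estimate is unbiased. Numbers are placeholders.
import random

random.seed(0)
n_charities = 20
true_value = [random.gauss(10, 2) for _ in range(n_charities)]  # hypothetical true impacts
noise_sd = 5                                                     # hypothetical estimation error

gaps = []
for _ in range(10_000):
    estimates = [v + random.gauss(0, noise_sd) for v in true_value]
    best = max(range(n_charities), key=lambda i: estimates[i])   # pick the apparent winner
    gaps.append(estimates[best] - true_value[best])              # how much we overestimate it

print(f"Average overestimate of the selected charity: {sum(gaps)/len(gaps):.2f}")
# This prints a clearly positive number: the act of selecting on noisy
# estimates inflates the winner's estimate, which is why naive expected-value
# maximization over uncertain charity ratings needs a correction.
```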

Not ruthless enough

This post suggested the rather alarming idea that EA's growth is petering out along a sort of logistic curve. It needs to be taken very seriously. In my biased opinion, it validates some of my longtime suspicions that EAs are not doing enough to actively promote EA as something to be allied with. We have been excessively nice and humble in the face of criticism, and have allowed outsiders' ideas to dominate public conversations about EA. We have over-estimated the popular appeal that comes from being unusually nice and deferential, neglected the popular appeal that comes from strength and condemnation, imagined everything in terms of 'mistake theory' instead of developing a capacity to wield 'conflict theory', and assumed that the popular human conception of "ethics" and "niceness" is as neurotic, rigid, and impartial as the upper-class urban white Bay Area/Oxford academic conception of "ethics" and "niceness". In today's world, people don't care how "ethical" or "nice" you are if you are on the wrong team, and people who don't have a team won't be motivated to action unless you give them one.

I can't spell out more precisely what I think EAs should do differently, not because I'm trying to be coy about some unspeakable subversive plot, but because each person needs to look at their own life and environment and decide for themselves what they should do to build a more powerful EA movement, and this will vary from person to person. Generally speaking, I just think EAs should change their mindset and take a few leaves out of the books of more powerful social movements. We should absolutely be very nice and fair to each other, and avoid some of the excesses of hostility displayed by other social movements, but there is more to the issue than that.