Vidur Kapur
Forecasting (at Good Judgment, Swift Centre, Samotsvety, Sentinel, RAND and a hedge fund), biosecurity research, animal welfare, AI risk, utilitarianism. I studied medicine and public health.
Even within the dairy and red meat categories, there are ways to reduce your greenhouse gas emissions. Milk is better than cheese, and lamb is better than beef. Also, mussels and oysters do well on climate and (probably) welfare grounds.
The Fish Welfare Initiative also works on improving shrimp welfare.
Hi Stephen. I’m also lacto-vegetarian. I take Vitamin D supplements (mainly for the reasons that they’re recommended for everyone) and an occasional Vitamin B complex or B3 supplement. I’ve considered taking algae-based Omega-3 supplements (in the form of DHA and EPA) but I don’t think the evidence is strong enough to justify the expense. My iron levels have consistently been fine without supplementation. I’ve found VeganHealth.org to be useful (I’d vouch for the quality of their evidence reviews). Ginny Messina is also worth reading (https://www.theveganrd.com/vegan-nutrition-101/vegan-nutrition-primers/recommended-supplements-a-vegan-nutrition-primer/).
In addition to Fin’s considerations and the excellent post by Jacy Anthis, I find Michael Dickens’ analysis to be useful and instructive. What We Owe The Future also contains a discussion of these issues.
Agree that not all EAs are utilitarians (though a majority of EAs who answer community surveys do appear to be utilitarian). I was just describing why it is that people who (as you said in many of your comments) think some capacities (like the capacity to suffer) are morally relevant still, despite this, also describe themselves as philosophically committed to some form of impartiality. I think Amber’s comment also covers this nicely.
That’s a good question, and is part of what Rethink Priorities are working on in their moral weight project! A hedonistic utilitarian would say that if fulfilment of the fish’s desire brings them greater pleasure (even after correcting for the intensity of pleasure perhaps generally being lower in fish) than the fulfilment of the human’s desire, then satisfying the fish’s desire should be prioritised. The key thing is that one unit of pleasure matters equally, regardless of the species of the being experiencing it.
Bentham’s view was that the ability to suffer means that we ought to give at least some moral weight to a being (their capacity to suffer determining how much weight they are given). Singer’s view, when he was a preference utilitarian, was that we should equally consider the comparable interests of all sentient beings. Every classical utilitarian will give equal weight to one unit of pleasure or one unit of suffering (taken on their own), regardless of the species, gender or race of the being experiencing the pleasure or suffering. This is a pretty mainstream view within EA. If it means (as MacAskill suggests it might, in his latest book) that the total well-being of fish outweighs the total well-being of humanity, then this is not an objectionable conclusion (and to think otherwise would be speciesist, on this view).
I’m sorry to hear that you’ve been feeling this way, Linch. I’ve also been facing some of the difficulties that you describe. I’ll try to do the best I can but would welcome the input of people who are more knowledgeable than me!
In the professional work of the English Utilitarians, what stands out to me is perseverance. When Bentham’s Panopticon project (which was meant to be an improvement on the often cruel treatment of prisoners) failed to get off the ground, he moved on to other things such as education reform (advocating for an end to corporal punishment, for example). Similarly, when the ‘Philosophical Radicals’ (a loosely knit group of parliamentarians and writers associated with utilitarianism) split in the early 1840s, Mill took the opportunity to do some “deep work” and publish A System of Logic, which had been on the back burner for over a decade.
Friendship and companionship were also important. Mill, over the same period, deepened his companionship and collaboration with Harriet Taylor, which was to be a source of great happiness to him for the rest of his life. After her death in Avignon, he would spend six months a year working close to the cemetery where she was buried. Meanwhile, Henry Sidgwick’s efforts to improve the higher education of women — which he sometimes felt did not progress rapidly enough — were supported by his wife Eleanor, and whenever he had a crisis he would always seek the company of his friends in the Cambridge Apostles (a discussion group in which he felt he could freely express his views), particularly John Addington Symonds. Symonds happened to be gay, and so Sidgwick (who often advised Symonds about what to publish) regularly had to confront dilemmas about how quickly the established moral order should be challenged. (His personal experiences here may have influenced his noticeably cautious approach to the utilitarian reform of public morality in The Methods of Ethics.) Again, friendship and open discussion were indispensable to him here.
I’d be interested to learn more about the Benthamite Edwin Chadwick’s life after he was forced to retire from the Civil Service after his stint as Commissioner of the General Board of Health (following the passage of the Public Health Act of 1848, inspired by his report on sanitation). He seems to have attracted a great deal of backlash from various interest groups. One thing he did do was correspond with Florence Nightingale, who wanted to resurrect his efforts, so he did not entirely give up (despite the direct effect of the 1848 Public Health Act, partly due to lax enforcement, being modest at best).
There has historically been some overlap between the charities that Open Phil and the Animal Welfare Fund have supported, and ACE’s recommendations, which suggests that there is a degree of consensus. See also the discussion here, in which some endorse the changes that ACE has made to its methodology.
(Crossposted from FB)
Some initial thoughts: hedonistic utilitarians ultimately wish to maximise pleasure; in doing so, suffering would be eliminated. In the real world, things are a lot fuzzier, and we do have to consider pleasure/suffering tradeoffs. Because it’s difficult to measure pleasure and suffering directly, preferences are used as a proxy.
But I aver that we’re not very good at considering these tradeoffs. Most are framed as thought-experiments, in which we are asked to imagine two ‘real-world’ situations. Some people may be willing to take five minutes of having a dust-speck in the eye for ten minutes of eating delicious food, whereas others may only be willing to take 30 seconds of the dust-speck. It’s likely that, when we are asked to do this, we aren’t considering the pleasure and suffering on their own, but taking other things into consideration too (perhaps thinking about our memories of similar situations in the past). The variance may also arise because a speck of dust in the eye *will* cause some people to suffer more than others.
Ideally, we’d be able to just consider the pleasure and the suffering on their own. That’s very difficult to do, though. I think there are right answers to these tradeoff questions, but that our brains aren’t able to answer the questions precisely enough. But in extreme cases, the hedonistic utilitarian could argue that anyone who would rather not have a blissful life at all, if it comes at the cost of being pricked by a pin, is simply wrong. It is the pleasure and the suffering that matter, no matter what people *say* they prefer. (See the ‘Future Tuesday Indifference’ argument promulgated by Parfit and Singer).
Sidgwick’s definition of pleasure is after all “a feeling which the sentient individual at the time of feeling it implicitly or explicitly apprehends to be desirable – desirable, that is, when considered merely as feeling.” The feeling, as it were, cannot be unfelt, even if an individual makes certain claims about the desirability (or lack thereof) of the feeling later on.
On that note, have you read Derek Parfit’s ‘On What Matters’ (particularly Parts 1 and 6, in Volumes One and Two respectively)? In my view, he makes some convincing arguments against preference-based theories. Singer and de Lazari-Radek, in ‘The Point of View of the Universe’, build on his arguments to mount a defence of hedonistic utilitarianism against other normative theories, including preference utilitarianism.
Moral realists who endorse hedonistic utilitarianism, such as Singer, posit that the very nature of what Sidgwick describes as pleasure gives us reason to increase it, and that nothing else in the universe gives us similar reasons.
The experience machine is another example of where hedonistic utilitarians would postulate that people’s preferences are plagued by bias. Joshua Greene and Peter Singer have both argued that people’s objections to entering the experience machine are the result of status quo bias, for instance.
See: https://www.tandfonline.com/doi/abs/10.1080/09515089.2012.757889?journalCode=cphp20 and https://en.wikipedia.org/wiki/Experience_machine#Counterarguments
Thank you for this piece. I enjoyed reading it and I’m glad that we’re seeing more people being explicit about their cause-prioritization decisions and opening up discussion on this crucially important issue.
I know that it’s a weak consideration, but I hadn’t, before I read this, considered the argument for the scale of values spreading being larger than the scale of AI alignment (perhaps because, as you pointed out, the numbers involved in both are huge) so thanks for bringing that up.
I’m in agreement with Michael_S that hedonium and dolorium should be the most important considerations when we’re estimating the value of the far-future, and from my perspective the higher probability of hedonium likely does make the far-future robustly positive, despite the valid points you bring up. This doesn’t necessarily mean that we should focus on AIA over MCE (I don’t), but it does make it more likely that we should.
Another useful contribution, though others may disagree, was the biases section: the biases that could potentially favour AIA did resonate with me, and they are useful to keep in mind.
Thank you for the interesting post; you provide some strong arguments for moral inclusivity.
I’m less confident that the marketing gap, if it exists, is a problem, but there may be ways to sell the more ‘weird’ cause areas, as you suggest. However, even when they are mentioned, people may still get the impression that EA is mostly about poverty. The other causes would have to be explained in the same depth as poverty (looking at specific charities in these cause areas as well as cost-effectiveness estimates where they exist, for instance) for the impression to fade, it seems to me.
While I do agree that it’s likely that a marketing gap is perceived by a good number of newcomers (based solely on my intuition), do we have any solid evidence that such a marketing gap is perceived by newcomers in particular?
Or is it mainly perceived by more ‘experienced’ EAs (many of whom may prioritise causes other than global poverty) who feel as if sufficient weight isn’t being given to other causes, or who feel guilty for giving a misleading impression relative to their own impressions (which are formed from being around others who think like them)? If the latter, then the marketing gap may be less problematic, and will be less likely to blow up in our faces.
And, as Michael says, even the perception that EA is misrepresenting itself could potentially be harmful.
I agree with the characterization of EA here: it is, in my view, about doing the most good that you can do, and EA has generally defined “good” in terms of the well-being of sentient beings. It is cause-neutral.
People can disagree on whether potential beings (who would not exist if extinction occurred) have well-being (total vs. prior-existence), they can disagree on whether non-human animals have well-being, and can disagree on how much well-being a particular intervention will result in, but they don’t arbitrarily discount the well-being of sentient beings in a speciesist manner or in a manner which discriminates against potential future beings. At least, that’s the strong form of EA. This doesn’t require one to be a moral realist, though it is very close to utilitarianism.
If I’m understanding this post correctly, the “weak form” of EA—donating more and donating more effectively to causes you already care about, or even just donating more effectively given the resources you’re willing to commit—is not unique enough for Lila to stay. I suspect, though, that many EAs (particularly those who are only familiar with the global poverty aspect of EA) only endorse this weak form, but the more vocal EAs are the ones who endorse the strong form.
I don’t think this gets us very far. You’re making a utilitarian argument (or certainly an argument consistent with utilitarianism) in favour of not trying to be a perfect utilitarian. Paradoxically, this is what a perfect utilitarian would do given the information that they have about their own limits—they’re human, as you put it. As someone who believes that utilitarianism is likely to be objectively true, therefore, I already know not to be a perfectionist.
Ultimately, Singer put it best: do the most good that you can do.
The main problem with this post, in my view, is that it’s still in some places trying to re-run the election debate. The relevant question is no longer about who is a bigger risk or who will cause more net suffering out of Trump or Clinton, but about how bad Trump is on his own and what we can do to reduce the risks that arise from his Presidency.
I agree that Trump’s views on Russia reduce global catastrophic risk (although his recent appointments seem to be fairly hawkish towards Russia). However, he’ll likely increase tensions in Asia, and his views on climate change seem to me to be a major risk.
In terms of values and opinion polls, immigrants to Western nations have better attitudes than people from their native countries. Furthermore, immigrants when they return to their native countries often take back the values and norms of their host countries. I’m not saying this to make a judgement on whether immigration on this scale is good or bad, just to make the point that our aim is to make the world a better place, not to decrease crime rates in Europe.
That said, far-right extremists are on the rise in both the United States and Europe, thanks in part to irrational overreactions and hyperbolic claims that law and order is breaking down (which, as others have said, is patently false), and in part to a number of false beliefs about immigration and immigrants themselves, Muslim or not. I think that one way to stop the far right from taking power in elections and from attacking immigrants, refugees and others is to give people the sense that they have control over ‘their’ borders; in other words, tactically retreating on the issue of immigration may well be a good thing. But did we need to elect Trump, with all of the risks that come with his Presidency, in order to do that?
I don’t know, but I do know that Trump has been elected now, and that many of his stated policies are terrible, and if individual EAs think that trying to change the policies of the Trump administration from the inside would be an effective thing to do (as Peter Singer has suggested) then I’d say that’s plausibly true for a small number of EAs.
I think, in general, it’s true that a small number of EAs going into party politics would be an effective thing to do, over and above the policy-change focus which already exists in the EA community and some of its organisations, but that this should be done on an individual basis: EA-affiliated groups and organisations should not get involved in party-politics.
Just a few thoughts.
Firstly, Trump’s agricultural advisors seem to be very hostile to animal welfare. This may mean that we need more people working on farmed animal welfare, not fewer.
In terms of going into politics, the prospect of having a group of EAs, and perhaps even an EA-associated organization, doing regular, everyday politics may turn some people off from the movement (depending on your view of whether EA is net-positive or net-negative overall, this may be bad or good).
While Sentience Politics, the Open Philanthropy Project and some others I may have missed do take part in political activities, they focus on specific policies, and I suspect that what some people are talking about would involve a systematic attempt to engage in party-politics.
I think that even without Trump, the idea of having a very small number of individual EAs (maybe 1 in 1,000) going into politics and trying to influence administrations, or even becoming politicians themselves, was a good one.
But a systematic attempt to engage in party politics would not be a good idea. This is partly because, even within the EA community, focusing on party politics, or even on controversial policies, seems to lead to less willingness to consider other points of view.
And it is partly because influencing administrations, or becoming a politician oneself, is more likely to make a difference than regular party-political campaigning, even though it is harder to do.
Finally, I think that politics is very important, because you could potentially reduce existential risks as well as spread good values and ensure that humanity is on the right course in the future, and therefore there’s not a tension between reducing existential risks and values-spreading.
However, in order for any politicians or political advisors to be able to steer humanity in a positive direction, you need public and corporate support for it, which is why I believe that spreading anti-speciesism, working on farmed animal suffering, and so on, remains highly important too.
Overall, Trump’s election has not influenced my beliefs significantly.
Yeah, I’d say Parfit is probably the leading figure when it comes to trying to find convergence. If I understand his work correctly, he initially tried to find convergence when it came to normative ethical theories, and opted for a more zero-sum approach when it came to meta-ethics, but in the upcoming Volume Three I think he’s trying to find convergence when it comes to meta-ethics too.
In terms of normative theories, I’ve heard that he’s trying to resolve the differences between his Triple Theory (which is essentially Rule Utilitarianism) and the other theory he finds most plausible, the Act Utilitarianism of Singer and de Lazari-Radek.
Anyone trying to work on convergence should probably follow the fruitful debate surrounding ‘On What Matters’.
It’s also possible that people don’t even want to consider the notion that preventing human extinction is bad, or they may conflate that view with negative utilitarianism, when it could also follow from classical utilitarianism.
For the record, I’ve thought about writing something about it, but I basically came to the same conclusions that you did in your blog post (I also subscribe to total, hedonistic utilitarianism and its implications, i.e. anti-speciesism, concern for wild animals, etc.).
If everyone has similar perspectives, it could be a sign that we’re on the right track, but it could be that we’re missing some important considerations as you say, which is why I also think more discussion of this would be useful.
In England, secular ethics isn’t really taught until Year 9 (age 13-14) or Year 10, as part of Religious Studies classes. Even then, it might be dependent on the local council, the type of school or even the exam boards/modules that are selected by the school. And by Year 10, students in some schools can opt out of taking religious studies for their GCSEs.
Anecdotally, I got into EA (at least earlier than I would have) because my high school religious studies teacher (c. 2014) could see that I had utilitarian intuitions (e.g. in discussions about animal experimentation and assisted dying) and gave me a copy of Practical Ethics to read. I then read The Life You Can Save.