Hello- first of all, I think you verbalised a bunch of very interesting and useful ideas about EA, its role, and its strategy. However, as someone who currently donates 10% of my salary to farmed animal welfare, I have some criticisms of your conclusions. I know that you’re not ruling out donating to animal charities, but requiring people to donate >10% of their salaries to charity sets the bar insanely high for the vast majority of people. So in effect, your proposal means ceasing support for farmed animal welfare in favour of global-poverty-focused charities.
One of the issues with this argument, to my mind, is that the same basic form can be made compatible with nationalistic rhetoric: ‘Before we donate a single dollar/pound of aid, we need to make sure no child is hungry in our own country’, etc. If we accept an argument for partiality towards some strangers over other strangers (beyond questions of effectiveness), why draw the line to contain all humans rather than humans of a specific nationality, ethnicity, eye colour, etc.?
I completely get the ‘optics’ rationale for not prioritising nematode welfare, but I think saying that we need to solve all major causes of human suffering before addressing factory farming is too conservative. Quite a lot of people are against factory farming in a way which is not true of wild animal suffering (or farmed invertebrate suffering, for that matter). After all, it is fear of public opinion which makes farmed animal welfare campaigns so unreasonably cost-effective (particularly caged-hen corporate campaigns). This is why factory farming and wild invertebrate suffering are in different leagues as far as optics are concerned. In essence, I agree that emphasising certain ‘far out’ aspects of EA can be off-putting, but I don’t think that factory farming is so far beyond the Overton window.
Also- the Overton window is malleable. Many ideas (abolitionism, women’s suffrage, AI safety) sounded completely nutty when they were first floated, not to mention ‘immoral’. One of the historic missions that EA is currently fulfilling is pushing this circle outward: not by solving all issues for people within the circle first, but by challenging where most people draw the boundary in the first place. It can’t be done all at once (we’re not going to convince most people about shrimp anytime soon), but we can move the line inch by inch over decades- which is pretty much how all moral progress has worked up to this point. I’m fairly confident it will continue working this way (barring future existential catastrophes).
Impatient_Longtermist 🔸🌱
Survey reveals chasm between public and expert views of AI bio-risk
I think one simple and effective idea is tying EA to decreasing marginal utility. Decreasing marginal utility is often an Econ-101 topic, as it explains the downward-sloping nature of demand curves. It is also a fundamental part of why donating money overseas rather than domestically is more impactful (a foundational EA insight).
People living in the West are most likely in the top 10% of global incomes, and because of that a single $/£/Euro will purchase significantly less wellbeing for them than for someone in a low-income country. This is basically the ‘drowning-child’ argument in a nutshell, tied to a 101 Econ principle, and a good starting point before exploring more contentious/less intuitive EA ideas.
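To make the intuition concrete, here is a minimal sketch of the argument using logarithmic utility, a standard textbook assumption (the income figures are purely illustrative, not real statistics):

```python
def marginal_utility_of_dollar(income: float) -> float:
    """Extra wellbeing from one more dollar under log utility:
    u(c) = ln(c), so the marginal utility u'(c) = 1/c."""
    return 1.0 / income

# Hypothetical annual incomes for illustration only
western_income = 40_000   # a typical high-income-country earner
low_income = 800          # someone living in extreme poverty

# How many times more wellbeing does one dollar buy at the lower income?
ratio = marginal_utility_of_dollar(low_income) / marginal_utility_of_dollar(western_income)
print(ratio)  # 50.0 under these assumed incomes
```

Under log utility the ratio is just the ratio of incomes, so the same dollar goes roughly 50x further here; the exact multiplier depends entirely on the incomes and utility function assumed.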
I think another topic that can springboard into EA-type ideas is discount rates, as these bring up the subject of how much we should care about the future. The question of discounting is central to longtermism, and an ongoing discussion within economics, with plenty of different perspectives to consider.
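A tiny sketch of why the choice of discount rate matters so much over long horizons (the harm size and rates are arbitrary illustrative values):

```python
def present_value(future_value: float, rate: float, years: int) -> float:
    """Standard exponential discounting: PV = FV / (1 + r)^years."""
    return future_value / (1 + rate) ** years

# How much is averting $1m of harm in 100 years worth today,
# under different (hypothetical) annual discount rates?
for rate in (0.00, 0.01, 0.03, 0.05):
    print(f"r={rate:.0%}: ${present_value(1_000_000, rate, 100):,.0f}")
```

At a 0% rate the future harm counts at full value; at 5% it shrinks to a few thousand dollars, which is why seemingly dry parameter choices drive such different conclusions about the far future.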
I went through a phase of researching masks for several weeks a few years ago, and I fully agree with your choice of the 3M half-face respirator as one of the best options available.
I think that social movements are most effective when they have face-to-face interactions, which build solidarity, facilitate discussion, and prevent value drift (as well as reducing burnout and increasing subjective well-being). However, outside of EAGs I don’t see many opportunities to socialise with other EAs in London. This is despite London being the second largest EA city after SF (or so I’ve been told). Am I missing something?
I completely agree with your comment. However my interpretation of what Professor Jones is trying to do is slightly different from straightforward cause prioritisation in the EA sense.
I think he is trying to frame AI risk reduction in a way that is compelling to policymakers, by focusing on standard benchmark values (Value of a Statistical Life), and limiting his analysis in space (only ‘valuing’ lives of American citizens) and time (only the next 20 years). This puts the report in line with standard government Cost Benefit Analyses, which may make it more convincing for those who have access to policy levers.
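For readers unfamiliar with VSL-based CBA, the core arithmetic is simple. This is a generic sketch with made-up numbers, not figures from Professor Jones’s report:

```python
# All values below are hypothetical, for illustration only.
vsl = 10_000_000               # Value of a Statistical Life (~$10m is a common US figure)
population = 330_000_000       # population covered by the analysis
annual_risk_reduction = 1e-6   # policy cuts each person's annual death risk by 1-in-a-million

# Expected statistical lives saved per year, monetised via the VSL
annual_benefit = vsl * population * annual_risk_reduction
print(f"${annual_benefit:,.0f} per year")  # roughly $3.3 billion per year
```

The policy then ‘passes’ a standard CBA if its annual cost falls below that benefit figure, which is the kind of framing policymakers already use for regulation.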
AI safety remains underfunded by more than 3 OOMs
Open Letter to stop the EU’s ban on fake meat labels
Very interesting article. I agree that nutrition as a vegan is tricky- there can be limits to supplementation (although relatively cheap B vitamins and vegan omega-3 supplements are available online, in my experience). I’d mildly disagree that gaining muscle as a vegan is ‘much harder’: pea-isolate protein powder and tofu (if you know where to get it) can be a nutritionally complete protein source, price-competitive even with chicken.
I do have a few issues with your list of (potentially) ethical animal products:
Bivalves: I agree that these are likely to be unconscious. However, a lack of certainty could make this a problem, given how many animals are necessary to make a meal and given that preparation often involves boiling said animals alive. Additionally, these are small and expensive foods which probably couldn’t meet the nutritional needs of a large number of people cheaply.
Wild-caught fish: The issue here, again, is the number of animals involved. Some fish may have small brains, but you need a large number of individuals to make a meal, and given the extent of uncertainty around consciousness in the animal kingdom, it feels morally risky. I see the argument that these animals could die worse deaths from hunger or predation in nature, but I think there is a useful commission/omission distinction in morality which holds up when talking about wild animal suffering. There is also uncertainty about whether animals’ lives in nature are net negative; if they are not, then catching wild fish on an industrial scale is pretty bad.
Cattle: I think your argument is stronger here (cows are large indeed!), particularly in relation to dairy. An omnivore could probably eat one cow a year in expectation, but it might take them 2-3 times as long to consume enough dairy to be responsible for one calf-cow separation. Personally I consume dairy for this reason, without going so far as to eat beef, though I realise the two industries are connected.
Eggs: I think it’s hard to know whether an egg is ethically produced, given how vague and poorly enforced a lot of ‘free range’ standards are in reality. Also, without in-ovo sexing, egg consumption necessarily involves the killing of a lot of male chicks, which doesn’t sit right with me.
Additionally, I’m not entirely convinced by the argument that vegans have worse mental health because of their nutrition. I think it’s just as likely that vegans tend to be more neurotic, self-critical, and politically liberal, all of which are highly correlated with anxiety and depression.
Here are some reasons why having children may be altruistic (with the caveat that I haven’t engaged deeply on this subject):
1) It is better for that child to have existed than not. Contra the VHEM, I think there is pretty good empirical (and subjective) evidence that it is better to have lived than not to have lived, and that in developed economies most lives are significantly net positive. Therefore having a child will create more happiness than sadness (from the perspective of that child, and probably also for you and their future loved ones).
2) From a longtermist perspective, as long as principle 1) holds and as long as your children have children (which is statistically likely), you will help continue the chain of civilisation with future net-positive lives, which will stand in relation to you as you do to your ancestors.
3) If you are someone with deeply held ethical beliefs and a wider-than-average moral circle (which feels very likely given the context), then having kids will likely be more moral for you than for the average person. This is because your kids will probably inherit your ethical worldview (to some extent) and may choose to have a positive impact through their actions (e.g. donations/career) or by promoting those values to others (through conversation, political activism, etc.). One way to think of this is: what would happen if all good people didn’t reproduce, and only people who didn’t care about morality had kids? I would guess that the short-term benefits of good people having more resources would be swamped over a few generations by the negative ethics of a corrupted culture.
There are significant counterarguments to consider as well (e.g. the meat-eater problem, or the opportunity cost of very expensive children reducing capacity to donate), but I think the above reasoning shows why having children isn’t firmly on the ‘buying a sports car’ side of things in my mind.
Far-future effects are the most important determinant of what we ought to do
Time, like distance, has no relevance for moral judgement.
Completely agree with this analysis. For readers interested in a high impact career in the UK civil service I recommend checking out Impactful Government Careers. We offer 1:1 discussions and a weekly job mailing list of high impact roles in government.
I’d be doing less good with my life if I hadn’t heard of effective altruism
my decision to give 10% of my salary to effective causes and my decision to work in AI were both strongly influenced by EA
Morality is Objective
Like this slider- objectivity is a spectrum. The most subjective statement possible is a pure taste statement: ‘I like ice-cream’. A purely objective statement is ‘1+1=2’.
In between lies the world of inter-subjectivity, with statements like ‘Democracy is superior to dictatorship’. These have elements of both objectivity and subjectivity.
I think morality is an intersubjective agreement (hence the influence of culture) supported by biological roots (we possess a biological distaste for suffering and injustice, and a biological capacity for abstract reasoning). These intersubjective agreements, combined with objective biological dispositions, result in something which is not as objective as mathematics or the natural sciences, but which possesses a degree of objectivity.
I buy into MacAskill’s argument that the 20th-21st centuries appear to be an era of heightened existential risk, and that if we can survive the development of nuclear weapons, AI, and engineered biology, there will be plenty of time in the futures where we survive to increase their value.
Farmed animal welfare should be addressed first. I think this is an important step in our moral circle expansion (e.g. caring enough about animals to stop actively harming them). I’m not a deep environmentalist, but there’s also more moral uncertainty about messing with nature (what if wild animals have good lives? What if nature has inherent value?)