Director of Research at CEARCH: https://exploratory-altruism.org/
I construct cost-effectiveness analyses of various cause areas, identifying the most promising opportunities for impactful work.
Previously a teacher in London, UK.
Thanks for your thoughts. You make a good point—EA can be pretty alienating. There’s a trade-off: within the EA community there is a ready-made audience, probably lots of potential guests, and less of a need to explain foundational concepts. But there is also less potential impact, perhaps, as the podcast might only marginally help insiders to increase their impact.
Definitely open to a change in title.
I’ve sent you a message.
I think you are right that we often forget the marginal nature of the contributions made in a highly-sought-after job. “Do I offer more than the next best candidate?” is a question we forget to ask.
I suspect the effectiveness of “nurses, child care workers, truck drivers, and home health aides”, while higher than a typical job, might pale in comparison to more targeted work like independent projects or effective giving. Someone donating 10% of the median US salary to effective causes can expect to save approximately one life per year—a high threshold indeed.
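As a rough sanity check of the “one life per year” figure, the arithmetic can be sketched with round numbers (the salary and cost-per-life values below are my own ballpark assumptions, not figures from the comment):

```python
# Rough BOTEC: can donating 10% of the US median salary save ~1 life/year?
# All numbers below are assumed round figures, not from the original comment.
median_us_salary = 55_000           # approx. US median annual earnings, USD
donation = 0.10 * median_us_salary  # a 10% pledge -> $5,500/year

# Ballpark GiveWell-style cost to save one life via a top charity (assumed)
cost_per_life_saved = 5_000         # USD

lives_saved_per_year = donation / cost_per_life_saved
print(round(lives_saved_per_year, 2))  # -> 1.1, i.e. roughly one life per year
```

Under these assumptions the claim checks out to within rounding, which is why it sets such a high bar for a career’s direct impact.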
Thanks, Ian. You make an excellent point: I don’t want to unnecessarily narrow my focus here.
Perhaps I should focus on 1) because it also allows a broader scope of episode ideas. “How can ordinary people maximise the good they do in the world?” allows lots of different responses. Independent projects could be one of them.
On the other hand 2) seems more neglected. There’s probably lots out there about startups or founding charities, but I can’t find anything on running altruistic projects (except a few one-off posts).
Thanks for pointing this out. I agree, and I think we can trace the elitism in the movement to well-informed efforts to get the most from the human resources available.
While EA remains on the fringe we can keep thinking in terms of maximising marginal gains (i.e. only targeting an elite with the greatest potential for doing good). But as EA grows it is worth considering the consequences of maintaining such an approach:
1) Focusing on EA jobs & earning-to-give will limit the size of the movement, as newcomers increasingly see no place for themselves in it
2) With limited size comes limited scope for impact: e.g. you can’t change things that require a democratic majority
3) Even if 2) proves false, we probably don’t want a future society run by a vaunted, narrow elite (at least based on past experience)
Hi James, thanks for sharing this. As others have said, it is a difficult thing to do. I’m actually weirdly looking forward to the EA criticisms that will come out of this FTX business. You often hear of the abstract need for criticism and “red-teaming” but not much about the actual criticisms.
I think your story chimes with a bigger difficulty in the EA movement: how small-scale effectiveness measures (i.e. not talking to junior EAs) end up stymying the movement on a larger scale (by being unfriendly and putting people off).
I’m also worried about whether a utilitarian movement really can value integrity, friendliness etc. I can see how it might see the value in appearing to have integrity or appearing to value diversity. But if those things get in the way of effectiveness, won’t they be covertly canned?
I’m a 30-year-old in London and consider myself fairly friendly. If you want to talk about stuff, get in touch.
Perhaps this was unfair of me. I mean as a casual user of EA social media spaces before last week, I came across non-strawman criticisms, or even expressions of personal doubt, quite rarely. Like any movement, I think there’s a hidden pull to virtue-signal (even when this is explicitly recognised as a danger), and it certainly seems like the FTX thing has given more people confidence to air reservations they had been keeping to themselves (and I don’t mean the people saying “I saw this coming and didn’t tell anyone”).
Thanks for pointing me to the red-teaming contest. I read the summaries of the three top winners, and I guess I was using the wrong definition of red-teaming in my comment here. I’m interested in fundamental criticisms of EA as a philosophy and as a movement. Not necessarily because I’m looking to disavow EA, but because a) I want to know how best to communicate it to a sceptical audience and b) I think such criticisms can be useful in deciding what to prioritise in meta-EA.
This is also a good argument for positive lifestyle changes like eating vegan.
The sheer scale of animal suffering, plus the fact there are definitely more impactful options than going vegan, can make it seem less appealing. But knowing that each year I have (and use) the power to prevent dozens of animals from life in factory farms is empowering.
Longtermist work is suspiciously comfortable
Much Longtermist work is clean, abstract and suspiciously well-suited to your typical EA. Could this be clouding our judgement?
One surprising thing about bednet-era EA was the disconnect between EAs and the kind of work they were championing. Oxbridge-grad MacAskill implored us to donate to malaria charities, or even to help fix global poverty directly. I actually found this reassuring—nerdy philosophy types probably don’t inherently love thinking about Sub-Saharan supply chains, so the fact they do it anyway was a sign that perhaps their reasoning really was impartial.
Contrast this with Longtermism. Longtermist work is generally more theoretical and less messy. It can be conducted on a laptop with a flat white and a Huel on hand. Longtermists don’t need to make networks in developing countries. In many cases, they don’t even need to prove that their work is making a difference.
All of the above differences make Longtermist work more appealing to a typical Western, university-educated person.
“So what?” I hear you say. “We’re rational and are pursuing Longtermism because it is so impactful.”
Perhaps. But we should be wary. We know how prone we are to post-rationalising our decisions. We should be careful to separate the worthiness of Longtermist work from the appeal of Longtermist roles.
I am not questioning the validity of Longtermism. I merely think that we should be aware of the likely bias we have towards it.
We are allowed to be swayed by good working conditions or better wages. The danger is that the comforts of the job stop us asking difficult questions about Longtermism.
Tonight, on the 80,000 Hours job page, as my cursor glides past the $1,000/month manager roles in Nairobi and hovers over the $100,000/year AI job in Silicon Valley, I will try to remember this.
Great article, thanks! I will be sharing this with my reading group.
A couple of minor points:
The text mentions a 6% growth spurt, but your graph says 8%
It would be great to see the graph of Malawi’s GDP go back to 1961 (rather than 1990), since you mentioned the annual growth rate has averaged just 1.4% since then (I suspect this would better portray the disappointing lack of progress in the late 20th century, too)
Have you got anything to share yet? I’m writing a post on social media strategy and would love to see anything you have made.
I agree that vegans probably won’t get an “I told you so” moment or an “end of meat” day to celebrate. But I think there is a good chance of a future where eating/milking animals is unthinkable and in which people have respect and compassion for all animal life.
It must have rankled in the 1800s that slaveholders were actually compensated when slavery was banned in the British Empire—there was no “I told you so” moment, and much of the historic racial inequality endured in a different form for generations, even up to the present day. But now slavery is a massive taboo; all people are recognised as human and entitled to rights; labour exploitation still exists but is condemned; the old days’ institutional degradation of an entire race is unthinkable now.
I’d argue that most of this moral progress hasn’t come from living people changing their minds. It’s from children reared in a society without slavery thinking “wow, slavery sounds really inhuman, let’s not do that again”. The ban enabled people to see how wrong owning humans was.
We can hope for something similar when it comes to factory farming. If fake meat makes factory farming obsolete, and kids have never tasted the real thing and have only encountered animals in nature and in zoos—they will find the idea of eating animals and their milk/eggs really gross. They will genuinely believe that factory farming is wrong.
It won’t be a day of reckoning, but it will be a giant shift worth celebrating.
I have sent some feedback via email. Thanks!
Yes please! I have sent an email.
I would love to do that. Maybe I’ll learn more data skills, because I want to be able to scrape data so I can get more of it, faster. Any idea if that sort of data is even public?
I agree with you on engagement. It would be good to get better data on that, because number of followers is probably quite a poor proxy for engagement. Numbers of views, likes and comments are probably far better.
Surprised to see that suffering levels for non-cage eggs are so similar to those from cage eggs. Where do you get the data on that? I skimmed a Brian Tomasik piece on suffering per kg of food that you cited, but it only examines caged hens.
This is an amazing tool by the way. Thanks for making it.
Thanks for the project and the write-up!
Does anyone know any good sources to help someone going vegan to figure out:
which deficiencies to worry about
how to counter them (not just supplements, which I would like to avoid relying on when possible)
I skimmed the piece on axiological asymmetries that you linked and am quite puzzled that you seem to start with the assumption of symmetry and look for evidence against it. I would expect asymmetry to be the more intuitive, and therefore default, position. As the piece says:
At just the first-order level, people tend to assume that (the worst) pain is worse than (the best) pleasure is pleasurable. The agonizing ends for non-human animals in factory farms and in the wild seem far worse than the best sort of life they could realize would be good. [...] it’s hard to find any organisms that risk the worst pains for the greatest pleasures and vice versa.
I would expect that a difference in magnitude between the best possible pleasure and the worst possible pain is the most obvious explanation, but the piece concludes that these judgments are “far more plausibly explained by various cognitive biases”.
As far as I can tell this would suggest that either:
Someone who has recently experienced or is currently experiencing intense suffering (and therefore has a better understanding of the stakes) would be more willing to take the kind of roulette gamble described in the piece. This seems unlikely.
People’s assessments of hedonic states are deeply unreliable even if they have recent experience of the states in question. I don’t like this much because it means we have to fall back on physiological evidence for human pleasure/suffering, which, as shown by the mayonnaise example, can’t give us the full picture.
On a slightly separate note, I played around with the BOTEC to check the claim that assuming symmetry doesn’t change the numbers much, and I wasn’t convinced: the extreme suffering-focused assumption (where perfect health is merely neutral) resulted in double the welfare gain of the symmetric assumption (when the increase in welfare as a percentage of the animals’ negative welfare range is held constant).
My main question on this last point is: why use “percentage of the animals’ negative welfare range” when “percentage of the animals’ total welfare range” seems more relevant and would not vary at all across different (a)symmetry assumptions?
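To make the point concrete, here is a toy version of the calculation with welfare ranges I have made up for illustration (the original BOTEC’s actual numbers are not reproduced here):

```python
# Toy illustration of why the choice of reference range matters.
# Welfare ranges below are assumed for illustration, not from the BOTEC.

p = 0.10  # intervention raises welfare by 10% of the chosen reference range

# Symmetric assumption: welfare runs from -1 (worst) to +1 (best).
sym_negative_range = 1.0   # distance from worst (-1) to neutral (0)
sym_total_range = 2.0      # distance from worst (-1) to best (+1)

# Extreme suffering-focused assumption: perfect health is merely neutral,
# so welfare runs from -2 (worst) to 0 (best).
suf_negative_range = 2.0
suf_total_range = 2.0

# Holding "p% of the NEGATIVE welfare range" constant, the suffering-focused
# assumption yields double the welfare gain of the symmetric one...
gain_sym = p * sym_negative_range  # 0.1
gain_suf = p * suf_negative_range  # 0.2
assert gain_suf == 2 * gain_sym

# ...whereas "p% of the TOTAL welfare range" is the same under both
# assumptions, so the result would not depend on (a)symmetry at all.
assert p * sym_total_range == p * suf_total_range
```

Under these made-up ranges, normalising by the total welfare range makes the welfare gain invariant to the symmetry assumption, which is why the choice of denominator seems to be doing all the work.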
Thanks for pointing that out. I also get the sense that individuals are getting more traction than brands on EA Twitter. Before my initial social media study I followed very few EA orgs on Twitter and I wasn’t getting exposed to new ones, whereas prominent individuals kept popping up.
Perhaps orgs should be trying that out. I expect some friction—people who already use social media in a personal capacity may want to keep it separate from their job, while others are consciously off the platforms and actively against spending time on them. Maybe people could just get secondary, job-aligned accounts.
It’s embarrassing because my methods are so low-tech. I literally went to the Twitter feed of each org and copied the 10 most recent original tweets into a spreadsheet. Hence the call for someone with data-scraping skills at the end.
My background: I’m a teacher with a degree in Mathematics, I run an education podcast, and I’m looking to transition to a more impactful career. Any research skills I have are self-taught.
I don’t have a specific vision for a project at the moment but would very much be interested in doing a larger study on EA and Twitter. I’d be interested in projects that i) allow EAs and orgs to better form social media strategies and ii) build my research skills & credibility.
Thanks, Amber. Great article.