Robi Rahman

Data scientist working on AI forecasting through Epoch and the Stanford AI Index. GWWC pledge member since 2017. Formerly social chair at Harvard Effective Altruism, facilitator for Arete Fellowship, and founder of the DC Slate Star Codex meetup.
If you don’t aim to persuade anyone else to agree with your moral framework and take action along with you, you’re not doing the most good within your framework.
(Unless your framework says that any good/harm done by anyone other than yourself is morally valueless and therefore you don’t care about SBF, serial killers, the number of people taking the GWWC pledge, etc.)
> embrace of the “Meat-Eater Problem” inbuilt into both the EA Community and its core ideas
Embrace of the meat-eater problem is not built into the EA community. I’m guessing a large majority of EAs, especially the less engaged ones who don’t comment on the Forum, would not take the meat-eater problem seriously as a reason we ought to save fewer human lives.
> I personally am on the side that thinks that current conclusions are probably overconfident and lacking in some very important considerations.
Can you give specifics? Any crucial considerations that EA is not considering or under-weighting?
> I actually found it more persuasive that buying broilers from a reformed scenario seems to get you both a reduction in pain and a more climate-positive outcome
How did you conclude that? How are the broilers reformed to not be painful?
Wow, incredible that this has 0 agree votes and 43 disagree votes. EAs have had our brains thoroughly fried by politics. I was not expecting to agree with this but was pleasantly surprised at some good points.
Now that the election is over, I’d love to see a follow-up post on what will probably happen during the next administration, and what will be good and bad from an EA perspective.
I agree with your numbered points, especially that if your discount rate is very high, then a catastrophe that kills almost everyone is similar in badness to a catastrophe that kills everyone.
But one of the key differences between EA/LT and these fields is that we’re almost the only ones who think future people are (almost) as important as present people, and that the discount rate shouldn’t be very high. Under that assumption, the work done is indeed very different in what it accomplishes.
I don’t know what you mean by fields only looking into regional disasters. How are you differentiating those investigations from the fields you mention, which the general public has heard of in large part because a ton of academic and governmental effort has gone into them?
I’m skeptical that the insurance industry isn’t bothering to protect against asteroids and nuclear winter just because they think the government is already handling those scenarios. For one, any event that kills all humans is uninsurable, so a profit-motivated mitigation plan will be underincentivized and ineffective. Furthermore, I don’t agree that the government has any good plan to deal with x-risks. (Perhaps they have a secret, very effective, classified plan that I’m not aware of, but I doubt it.)
I saw a lot of criticism of the EA approach to x-risks on the grounds that we’re just reinventing the wheel, and that these already exist in government disaster preparedness and the insurance industry. I looked into the fields that we’re supposedly reinventing, and they weren’t the same at all, in that the scale of catastrophes previously investigated was far smaller, only up to regional things like natural disasters. No one in any position of authority had prepared a serious plan for what to do in any situation where human extinction was a possibility, even the ones the general public has heard of (nuclear winter, asteroids, climate change).
I went to a large event, and the organizers counted the number of attendees present and then ordered chicken for everyone’s meal. Unfortunately I didn’t have a chance to request a vegetarian alternative. What’s the most efficient way to offset my portion of the animal welfare harm, and how much will it cost? I’m looking for information such as “XYZ is the current best charity for reducing animal suffering, and saves chickens for $xx each”, but I’m open to donating to something that helps other animals—doesn’t necessarily have to be chickens, if I can offset the harm more effectively or do more good per dollar elsewhere.
Humans are just lexically worth more than animals? You would torture a million puppies for a century to protect me from stubbing my toe?
Reducing animal agriculture for the benefits to humans by reducing habitat destruction is a really roundabout and ineffective way to help humans.
If you want to help humans, you should do whatever most helps humans.
If you want to protect someone from climate change, you should do whatever most effectively mitigates the effects of climate change.
If you want to help animals for the sake of helping animals, you should do that.
But you shouldn’t decide that helping animals is better than helping humans on the grounds that helping animals also indirectly helps humans.
Animal welfare is extremely neglected compared to human philanthropy. (However, effective interventions receive only a small fraction of altruistic funding intended to help humans.)
I’m highly uncertain about counterfactuals and higher-order effects, such as changes in long-term human population and eating patterns due to accelerated global economic development.
Risk aversion doesn’t make splitting your donation better than giving everything to a single charity, once you account for the fact that many other people are already donating to both charities.
Given that both orgs already have many other donors, the best action for you to take is to give all of your donations to just one of the options (unless you are a very large donor).
> a portfolio approach does more good given uncertainty about the moral weight on animals
No, this is totally wrong. Whatever your distribution of credences of different possible moral weights of animals, either the global health charity or the animal welfare charity will do more good than the other, and splitting your donations will do less good than donating all to the single better charity.
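The underlying point is that expected value is linear in the split fraction, so it is always maximized at one of the endpoints. Here is a minimal sketch; every number (moral weights, credences, cost-effectiveness figures) is a purely illustrative assumption, not a real estimate:

```python
import numpy as np

# Hypothetical credences over possible moral weights of a chicken
# relative to a human (illustrative numbers only).
weights = np.array([0.001, 0.01, 0.1])
credences = np.array([0.5, 0.3, 0.2])

# Good done per dollar under each moral-weight scenario (hypothetical):
# the global health charity's impact doesn't depend on animal moral
# weight; the animal charity's impact scales with it.
good_human = np.array([1.0, 1.0, 1.0])
good_animal = 50 * weights

# Expected good per dollar of each option.
ev_human = credences @ good_human
ev_animal = credences @ good_animal

# For any split fraction f, the expected value is a linear mix of the
# two endpoints, so it can never exceed the better single option.
f = 0.5
ev_split = f * ev_human + (1 - f) * ev_animal
assert ev_split <= max(ev_human, ev_animal)
```

Whatever distribution you put over moral weights, this linearity holds, so the split is only ever as good as the better single charity, never better.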
I believe this is not a valid analogy. If you uninvite someone from events for making rude comments about other attendees’ appearances, that only applies to that one rude person, or to people who behave rudely. If you disinvite someone for holding political views you’re uncomfortable with, that has a chilling effect on all uncommon political views, and is harmful to everyone’s epistemics.
> there have been moments highlighting a discomforting lack of diversity and inclusion. Some of these include being the only girl in a room full of (mostly white) men discussing white political problems
Sorry for this dumb question but what are white political problems? Is this referring to stuff like rural drug overdoses?
> It seems plausible to me that $1bn in a foundation independent from OP could be worth several times that amount added to OP.
How can $1B to a foundation other than OP be worth more than $2B to OP, unless OP is allocating grants very inefficiently? You would have to believe they are misjudging the EV of all their grants, or poorly diversifying against other possible futures, for this to be true.
Beef cattle are not that carbon-intensive. If you’re concerned about the climate, the main problem with cattle is their methane emissions.
> If you eat them, your emissions, combined with other people’s emissions, are going to cause a huge amount of both human and non-human suffering.
If I eat beef, my emissions, combined with other people’s emissions, do some amount of harm. If I don’t eat beef, other people’s emissions do approximately the same amount of harm as if I had eaten it. The marginal harm from my food-based carbon emissions is really small compared to the marginal harm from my food-based contribution to animal suffering.
I agree. Mods, is there a reason why I can’t downvote the community tag on this post?
You make some great points. If you think humanity is so immoral that a lifeless universe is better than one populated by humans, then yes, it would indeed be bad to colonize Mars, from that perspective.
I would be pretty horrified at humans taking fish aquaculture with us to Mars, in a manner as inhumane as current fish farming. However, I opened the Deep Space Food Challenge link, and it’s more like what I expected: the winning entries are all plants or cellular manufacturing. (The Impact Canada page you linked to is broken.)
If we don’t invent any morally relevant digital beings prior to colonizing space, then I think wild animal suffering is substantially likely to be the crux of whether it is morally good or bad to populate the cosmos.
No, I think you would expect EAs to have the mental strength to handle diverse political views, but in practice most of them don’t. For example, see this heavily downvoted post about demographic collapse by Malcolm and Simone Collins. Everyone is egregiously misreading it as being racist or maybe just downvoting it because of some vague right-wing connotations they have of the authors.