Robi Rahman
Data scientist working on AI forecasting through Epoch and the Stanford AI Index. GWWC pledge member since 2017. Formerly social chair at Harvard Effective Altruism, facilitator for Arete Fellowship, and founder of the DC Slate Star Codex meetup.
I agree with your numbered points, especially that if your discount rate is very high, then a catastrophe that kills almost everyone is similar in badness to a catastrophe that kills everyone.
But one of the key differences between EA/LT and these fields is that we’re almost the only ones who think future people are (almost) as important as present people, and that the discount rate shouldn’t be very high. Under that assumption, the work accomplishes something very different.
I don’t know what you mean by fields only looking into regional disasters. How are you distinguishing those investigations from the fields you mention, which the general public has heard of in large part because a ton of academic and governmental effort has gone into them?
I’m skeptical that the insurance industry isn’t bothering to protect against asteroids and nuclear winter just because they think the government is already handling those scenarios. For one, any event that kills all humans is uninsurable (no insurer survives to pay out, and no policyholder survives to collect), so a profit-motivated mitigation plan will be underincentivized and ineffective. Furthermore, I don’t agree that the government has any good plan to deal with x-risks. (Perhaps they have a secret, very effective, classified plan that I’m not aware of, but I doubt it.)
I saw a lot of criticism of the EA approach to x-risks on the grounds that we’re just reinventing the wheel, and that these fields already exist in government disaster preparedness and the insurance industry. I looked into the fields we’re supposedly reinventing, and they weren’t the same at all: the scale of catastrophes previously investigated was far smaller, topping out at regional events like natural disasters. No one in any position of authority had prepared a serious plan for what to do in any situation where human extinction was a possibility, not even the ones the general public has heard of (nuclear winter, asteroids, climate change).
I went to a large event, and the organizers counted the number of attendees present and then ordered chicken for everyone’s meal. Unfortunately I didn’t have a chance to request a vegetarian alternative. What’s the most efficient way to offset my portion of the animal welfare harm, and how much will it cost? I’m looking for information such as “XYZ is the current best charity for reducing animal suffering, and saves chickens for $xx each”, but I’m open to donating to something that helps other animals—doesn’t necessarily have to be chickens, if I can offset the harm more effectively or do more good per dollar elsewhere.
Humans are just lexically worth more than animals? You would torture a million puppies for a century to protect me from stubbing my toe?
Reducing animal agriculture because it curbs habitat destruction, which in turn benefits humans, is a really roundabout and ineffective way to help humans.
If you want to help humans, you should do whatever most helps humans.
If you want to protect someone from climate change, you should do whatever most effectively mitigates the effects of climate change.
If you want to help animals for the sake of helping animals, you should do that.
But you shouldn’t decide that helping animals is better than helping humans on the grounds that helping animals also indirectly helps humans.
Animal welfare is extremely neglected compared to human philanthropy. (Though note that, of the altruistic funding intended to help humans, only a small fraction goes to effective interventions.)
I’m highly uncertain about counterfactuals and higher-order effects, such as changes in long-term human population and eating patterns due to accelerated global economic development.
Once you account for the fact that many other people are already donating to both charities, risk aversion doesn’t change the best choice from donating everything to a single charity to splitting your donation.
Given that both orgs already have many other donors, the best action for you to take is to give all of your donations to just one of the options (unless you are a very large donor).
a portfolio approach does more good given uncertainty about the moral weight on animals
No, this is totally wrong. Whatever your distribution of credences over the possible moral weights of animals, either the global health charity or the animal welfare charity will do more good than the other, and splitting your donations will do less good than giving everything to the single better charity.
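To see why, here’s a minimal sketch, with made-up credences and per-dollar values rather than anyone’s actual estimates: for a donor too small to move either charity’s marginal returns, expected good is linear in the split fraction, so the maximum always sits at a corner, whatever your credences.

```python
import numpy as np

# Hypothetical credences over three scenarios for the moral weight of
# animals, and made-up good-per-dollar figures for each charity.
credences = np.array([0.5, 0.3, 0.2])      # P(scenario)
good_animal = np.array([0.1, 1.0, 10.0])   # good per $ to the animal charity
good_health = np.array([1.0, 1.0, 1.0])    # good per $ to the global health charity

donation = 1000.0  # total budget in dollars

# Marginal good per dollar is constant for a small donor, so expected
# good is linear in the split fraction f and is maximized at f=0 or f=1.
for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    expected_good = credences @ (f * donation * good_animal
                                 + (1 - f) * donation * good_health)
    print(f"fraction to animal charity = {f:.2f} -> expected good = {expected_good:.1f}")
```

The numbers are arbitrary; the linearity is the point. Risk aversion over outcomes can favor splitting only when your donation is large relative to the charities’ budgets, which is the caveat noted above.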
I believe this is not a valid analogy. If you uninvite someone from events for making rude comments about other attendees’ appearances, that only applies to that one rude person, or to people who behave rudely. If you disinvite someone for holding political views you’re uncomfortable with, that has a chilling effect on all uncommon political views, and is harmful to everyone’s epistemics.
there have been moments highlighting a discomforting lack of diversity and inclusion. Some of these include being the only girl in a room full of (mostly white) men discussing white political problems
Sorry for this dumb question but what are white political problems? Is this referring to stuff like rural drug overdoses?
It seems plausible to me that $1bn in a foundation independent from OP could be worth several times that amount added to OP.
How can $1B to a foundation other than OP be worth more than $2B to OP, unless OP is allocating grants very inefficiently? You would have to believe they are misjudging the EV of all their grants, or poorly diversifying against other possible futures, for this to be true.
Beef cattle are not that carbon-intensive. If you’re concerned about the climate, the main problem with cattle is their methane emissions.
If you eat them, your emissions, combined with other people’s emissions, are going to cause a huge amount of both human and non-human suffering.
If I eat beef, my emissions, combined with other people’s emissions, do some amount of harm. If I don’t eat beef, other people’s emissions do approximately the same amount of harm as would have occurred if I had eaten it. The marginal harm from my food-based carbon emissions is really small compared to the marginal harm from my food-based contribution to animal suffering.
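Here’s the structure of that comparison as a back-of-the-envelope sketch; every figure below is a hypothetical placeholder, not a real estimate.

```python
# All numbers are made up solely to show the shape of the marginal
# comparison; substitute your own estimates.
my_beef_kg_per_year = 20.0    # hypothetical annual consumption
co2e_tonnes_per_kg = 0.06     # hypothetical emissions intensity of beef
harm_per_tonne_co2e = 1.0     # climate harm units per tonne CO2e (hypothetical)
welfare_harm_per_kg = 5.0     # animal-suffering harm units per kg (hypothetical)

# Other people's emissions happen either way, so only my own emissions
# and my own demand for beef change with my choice.
marginal_climate_harm = my_beef_kg_per_year * co2e_tonnes_per_kg * harm_per_tonne_co2e
marginal_welfare_harm = my_beef_kg_per_year * welfare_harm_per_kg

print(f"marginal climate harm:  {marginal_climate_harm:.1f} harm units/year")
print(f"marginal welfare harm: {marginal_welfare_harm:.1f} harm units/year")
```

The conclusion obviously depends on the inputs; the sketch only shows that the right comparison is between my marginal contributions, not the collective totals.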
I agree. Mods, is there a reason why I can’t downvote the community tag on this post?
You make some great points. If you think humanity is so immoral that a lifeless universe is better than one populated by humans, then yes, it would indeed be bad to colonize Mars, from that perspective.
I would be pretty horrified at us taking fish aquaculture with us to Mars in a manner as inhumane as current fish farming. However, I opened the Deep Space Food Challenge link, and it’s more like what I expected: the winning entries are all plants or cellular manufacturing. (The Impact Canada page you linked to is broken.)
If we don’t invent any morally relevant digital beings prior to colonizing space, then I think wild animal suffering is substantially likely to be the crux of whether it is morally good or bad to populate the cosmos.
Interesting argument. However, I don’t think this point about poverty is right.
The problem is that [optimistic longtermism is] based on the assumption that life is an inherently good thing, and looking at the state of our world, I don’t think that’s something we can count on. Right now, it’s estimated that nearly a billion people live in extreme poverty, subsisting on less than $2.15 per day.
Poverty is arguably a relic of preindustrial society in a state of nature, and is being eliminated as technological progress raises standards of living. If we were to colonize Mars, it would probably be done by wealthy societies that have large amounts of capital per person. You might argue that conditions are so harsh on Mars that life will be unpleasant even for the wealthy, or that population growth will eventually turn Mars society into a zero-sum Malthusian hellhole, but I don’t think those are your claims.
As for animal cruelty, it’s pretty straightforward to propose things like a ban on animal cruelty in a Mars charter or constitution. Maybe this is politically difficult and we don’t have leverage over the Mars colonists, but then it would be even harder to ban Mars colonization altogether. Finally, this issue might be moot: it’ll be really expensive to take pets and farm animals to Mars. Everyone will probably be eating hydroponic lettuce for the first fifty years anyway, not foie gras.
Shrimpify Mentoring? Shrimping What We Can? Future of Shrimp Institute?
Oh, and we can’t forget about 1FTS: One for the Shrimp.
I’m very disappointed that Rethink Priorities has chosen to rebrand as Rethink Shrimp. I really think we should have gone with Reshrimp Priorities. That said, I will accept the outcome, whatever is deemed to be most effective, and in any case redouble my efforts to forecast timelines to the shrimp singularity.
I don’t see Shapley values mentioned anywhere in your post. I think you’ve made a mistake in attributing the value of things multiple people have worked on, and Shapley values would help you fix that mistake.
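In case it’s useful, here’s a minimal sketch of how Shapley values attribute credit for joint work; the two-player game and its payoff below are hypothetical.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Shapley value of each player: their marginal contribution to the
    coalition, averaged over all orders in which it could have formed."""
    n = len(players)
    result = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(s | {i}) - value(s))
        result[i] = total
    return result

# Hypothetical example: a funder and a charity produce 100 units of good
# together and nothing alone. Naive counterfactual attribution credits
# each with 100 (double-counting); Shapley attribution splits it 50/50.
v = lambda s: 100.0 if s == frozenset({"funder", "charity"}) else 0.0
print(shapley_values(["funder", "charity"], v))  # {'funder': 50.0, 'charity': 50.0}
```

The weight on each coalition is the probability that exactly those players precede player i in a uniformly random ordering, which is what makes the credit assigned across contributors sum to the total value produced.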
I don’t really see anything in the article to support the headline claim, and the anonymous sources don’t actually work at NIST, do they?
Wow, incredible that this has 0 agree votes and 43 disagree votes. We EAs have had our brains thoroughly fried by politics. I was not expecting to agree with this, but was pleasantly surprised by some good points.
Now that the election is over, I’d love to see a follow-up post on what will probably happen during the next administration, and what will be good and bad from an EA perspective.