I think DC’s point is that the donation one can make from the proceeds of selling the kidney outweighs the counterfactual direct impact of donating the kidney.
Perhaps “moral obligation” has a specific legalistic/Christian etymology.
This is one of the positions G.E.M. Anscombe defends in her influential essay “Modern Moral Philosophy”. She argues in part that the moral “ought” is a vestige of religious ethics, which doesn’t make much sense without a (divine) lawgiver. Indeed, one of the starting points of many modern virtue theorists is arguing that the specific moral sense of “ought” and moral sense of “good” are spurious and unfounded. One such view is in Philippa Foot’s Natural Goodness, which argues instead that the goodness ethics cares about is natural goodness and defect (e.g., “the wolf who fails to contribute to the hunt is defective” is supposed to be a statement about a natural, rather than moral, defect of the wolf).
I found this list very helpful, thank you!
On exotic tofu: I am not yet convinced that Stiffman doesn’t have the requisite charisma. Is your concern that he’s vegan (hence less relatable to non-vegans), his messaging in Broken Cuisine specifically, or something else? I am sympathetic to the first concern, but not as convinced by the second. In particular, from what little else I’ve read from Stiffman, his messaging is more like his original post on this Forum: positive and minimally doom-y. See, for example, his article in Asterisk, this podcast episode (on what appears to be a decently popular podcast?), and his newsletter.
Have you reached out to him directly about your concerns about his messaging? Your comments seem very plausible to me and reaching out seems to have a high upside.
Although what you said might be part of the explanation for why many EAs focus on alignment or governance research rather than pause advocacy, I think the bigger part is that many EAs think that pause advocacy isn’t as good as research. See, e.g., some of these posts.
I haven’t yet looked at the papers cited, but aren’t they probably hopelessly confounded? This seems to be one of the areas where it’s hardest to measure causal effects.
Answers to this question could be relevant: What are some artworks relevant to EA?
Undoubtedly these are interesting questions, though I don’t have much to contribute right now. Your thought experiment reminds me of Timmerman’s Drowning Children case from “Sometimes there is nothing wrong with letting a child drown”. Timmerman uses this case to argue that we should reject the strong conclusion of “Famine, Affluence, and Morality”.
I agree that the simple story of a producer reacting directly to changing demand is oversimplified. Where I think we differ: absent specific information, I think we should assume that any commonly consumed animal product’s supply response to changing demand is similar to the responses reported in Compassion, by the Pound. In other words, our prior on impact should be centered around some of the numbers from there, and we should update from that starting point. I can explain why I think this in more detail if we disagree on this.
Leather example: Sure, I chose this example to show how one’s impact can be diluted, but I also think that decreasing leather consumption is unusually low-impact. I don’t think the stories for other animal products are as convincing. To take your examples:
Eggs for human consumption are unfertilized, so I’m not sure how they are useful for hatching. Perhaps you are thinking that producers could fertilize the eggs, but that seems expensive and wouldn’t make sense if demand for eggs is decreasing.
Perhaps I am uncreative, but I’m not sure how one would redirect unused animal products in a way that would replace the demand from human consumption. Raising an animal seems pretty expensive, so I’m not sure in what scenario this would be so profitable.
If we take into account the sort of “meta” effects of consuming fewer animal products (such as your example of causing people to innovate new ways of using animal products), then I agree that these increase the variance of impact, but I suspect they skew the distribution strongly toward greater rather than lesser impact. Some specific and straightforward examples: companies research more alternatives to meat; society has to accommodate more vegans, so vegan food ends up more widespread and appealing, making more people interested in the transition; people are influenced by their reducetarian friends to eat less meat.
Voting:
I’ll need to think about it more, but as with two-candidate votes, I think that petitions can often have better than 1:1 impact.
Not an expert, but I think your impression is correct. See this post, for example (I recommend the whole sequence).
Late to the party here but I’d check out Räuker et al. (2023), which provides one taxonomy of AI interpretability work.
Thanks, this makes things much clearer to me.
I agree that this style of reasoning depends heavily on the context studied (in particular, the mechanism at play), and that we can’t automatically use numbers from one situation for another. I also agree with what I take to be your main point: In many situations, the impact is less than 1:1 due to feedback loops and so on.
I’m still not sure I understand the specific examples you provide:
Animal products used as food: For commonly consumed food animal products, I would be surprised if the numbers were much lower than those in the table from Compassion, by the Pound (assuming that those numbers are roughly correct). This is because the mechanism used to change levels of production is similar across these cases. (The previous sentence is probably naive, so I’m open to corrections.) However, your point about substitution across goods (e.g., from beef to chicken) is well taken.
Other animal products: Not one of the examples you gave, but one material that’s interested me is cow leather. I’m guessing that (1) much of leather is a byproduct* of beef production and (2) demand for leather is relatively elastic. Both of these suggest that abstaining from buying leather goods has a fairly small impact on farmed animal suffering.**
Voting: I am unsure what you mean here by “1:1”. Let me give a concrete example, which I take to be the situation you’re talking about. We have an election with n voters and 2 candidates, where the better candidate winning has net benefit U. If all voters were to vote for the better candidate, then each person’s average impact is U / n. I assume this is what you mean by the “1” in “1:1”: if someone has expected counterfactual impact U / n, then their impact is 1:1. If so, then one’s impact can easily be greater than U / n, contrary to your claim. For example, if your credence in the better candidate winning is exactly 50%, then U / n is a lower bound; see Ord (2023), some of whose references show that in real-world situations, the probability of swaying the election can be much greater than 1 / n. (I sketch the arithmetic just below the footnotes.)
* Not exactly a byproduct, since sales of leather increase the revenue from raising a cow.
** This is not accounting for less direct impacts on demand, like influencing others around oneself.
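To make the voting arithmetic explicit (a minimal sketch in my own notation, not taken from Ord): if p is the probability that your single vote sways the outcome, then the expected impact of voting is

$$\mathbb{E}[\text{impact}] = p \cdot U.$$

The “1:1” benchmark corresponds to p = 1/n, which recovers the expected impact of U / n. The point from Ord’s references is that in close real-world elections p can be well above 1 / n, so the expected impact can exceed U / n.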
This position is commonly defended in consequentialist arguments for vegetarianism and veganism; see, e.g., Section 2 here, Section 2 here, and especially Day 2 here. The argument usually goes something like this: if you stop buying one person’s worth of eggs, then in expectation the industry produces roughly that many fewer eggs than it otherwise would have. Even if your purchase isn’t the tipping point that causes producers to cut production, under uncertainty you still have positive expected impact. (I’m being a bit vague here, but I recommend reading at least one of the above readings, especially the third one, because they make the argument better than I can.)
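As a toy illustration of the threshold version of this argument (the numbers are mine and purely illustrative, not from the readings): suppose a store adjusts its egg order only in increments of 25 cartons, cutting the order by 25 once cumulative demand has fallen by 25 cartons. Forgoing one carton then has about a 1/25 chance of being the purchase that triggers the cut, so the expected reduction in production is

$$\frac{1}{25} \times 25 = 1 \text{ carton},$$

even though most individual purchases change nothing.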
In the case of animal product consumption, I’m confused what you mean by “the expected impact still remains negligible in most scenarios”—are you referring to different situations? I agree in principle that if the expected impact is tiny, then we don’t have much reason on consequentialist grounds to avoid the behavior, but do you have a particular situation in mind? Can you give concrete examples of where your shift in views applies/where you think the reasoning doesn’t apply well?
Thanks for the posts so far, I’ve briefly thought about trying some of these ideas but haven’t had the courage to really go for them.
One thing I’m wondering: what “sample size” are the takeaways in your posts on intro fellowships based on? That is, how many semesters, and how many people participated?
Why is this post being downvoted? I seriously doubt that EAs working to prevent school shootings would be cost-effective, but I don’t get why there are downvotes here—it’s a fair question.
It seems that your original comment no longer holds under this version of “1% better”, no? In what way does being 1% better at all these skills translate to being 30x better over a year? How do we even aggregate these 1% improvements under the new definition?
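For what it’s worth, if the 30x figure comes from daily compounding (my guess at its source, which may be wrong), the arithmetic only works for a single quantity: improving the same skill by 1% per day for a year multiplies it by

$$1.01^{365} \approx 37.8,$$

whereas getting 1% better at 365 different skills leaves you just 1% better at each of them; nothing compounds across distinct skills.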
Anyway, even under this definition it seems hard to keep finding skills that one can easily get 1% better at within a day. At some point you would probably run into diminishing returns across skills: that is, the “low-hanging fruit” of skills you can improve at easily will have been picked.
I have not read much of Tetlock’s research, so I could be mistaken, but isn’t the evidence for Tetlock-style forecasting only for (at best) short- to medium-term forecasts? Over this timescale, I would’ve expected forecasting to be very useful for non-EA actors, so the central puzzle remains. Indeed, if there is no evidence for long-term forecasting, then wouldn’t one expect non-EA actors (who place less importance on the long term) to be at least as likely as EAs to use this style of forecasting?
Of course, it would be hard to gather evidence for forecasting working well over longer (say, 10+ year) forecasts, so perhaps I’m expecting too much evidence. But it’s not clear to me that we should have strong theoretical reasons to think that this style of forecasting would work particularly well, given how “cloud-like” predicting events over long time horizons is and how with further extrapolation there might be more room for bias.
For more on this line of argument, I recommend one of my favorite articles on ethical vegetarianism: Alastair Norcross’s “Puppies, Pigs, and People”.
I’m not sure how reputable it is, but I picked up a used copy of Becoming Vegan: Comprehensive Edition and have consulted it from time to time for vegan nutrition.
I enjoyed the new intro article, especially the focus on solutions. Some nitpicks:
I’m not sure that it’s good to use 1DaySooner as the second example of positive EA interventions. I agree that challenge trials are good, but in my experience (admittedly a convenience sample), a lot of people I talk to are very wary of challenge trials. I worry that including it in an intro article could create needless controversy/turn people away.
I also think that some of the solutions in the biodefense section are too vague. For example, what exactly did the Johns Hopkins Center for Health Security do to qualify as important? It’s great that the Apollo Programme for Biodefense has billions in funding, but what are they doing with that money?
I don’t think it makes sense to include longtermism without explanation in the AI section. Right now it’s unexplained jargon. If I were to edit this, I’d replace that sentence with a quick reason why this huge effect on future generations matters or delete the sentence entirely.
A very common counterargument to improving the lot of various animals is that the animals aren’t conscious. A letter signed by experts seems useful in debunking, or at least casting doubt on, such counterarguments. You are right to point out that this doesn’t solve the difficult problem of comparing welfare or rights between species, but it’s still a very important step.
Why would this be a distraction, even if it’s not very significant? It seems one could sign the letter and argue for the position while doing all the things you like.