Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog
Richard Y Chappell
Autonomy Consequentialism
Discussing ethical altruism and consequentialism vs. deontology
It's mostly not anything specific to going vegan. Just the general truism that effort used for one purpose could be used for something else instead. (Plus I sometimes donate extra precisely for the purpose of "offsetting", which I wouldn't otherwise be motivated to do.)
Mostly just changing old habits, plus some anticipated missing of distinctive desired tastes. It's not an unreasonable ask or anything, but I'd much rather just donate more. (In general, I suspect there's insufficient social pressure on people to increase our donations to good causes, which also shouldn't be "so effortful", and we likely overestimate the personal value we get from marginal spending on ourselves.)
A Human Abundance Agenda
The Costs of Permission
I don't understand the relevance of the correlation claim. People who care nothing for animals won't do either. But that doesn't show that there aren't tradeoffs in how to use one's moral efforts on the margins. (Perhaps you're thinking of each choice as a binary: "donate some" Y/N + "go vegan" Y/N? But donating isn't binary. What matters is how much you donate, and my suggestion is that any significant effort spent towards adopting a vegan diet might be better spent on further increasing one's donations. It depends on the details, of course. If you find adopting veganism super easy, like near-zero effort required, then great! Not much opportunity cost, then. But others may find that it requires more effort, which could be better used elsewhere.)
My main confusion with your argument is that I don't understand why donations don't also count as "personal ethics" or as "visible ethical action" that could likewise "ripple outward" and be replicated by others to good effect. (I also think the section on "equity" fundamentally confuses what ethics should be about. I care about helping beneficiaries, not setting up an "equitable moral landscape" among agents, if the latter involves preventing the rich from pursuing easy moral wins because this would be "unfair" to those who can't afford to donate.)
One more specific point I want to highlight:
...where harm is permissible as long as it's "offset" by a greater good
fwiw, my argument does not have this feature. I instead argue that:
(1) Purchasing meat isn't justified: the moral interests of farmed animals straightforwardly outweigh our interest in eating them. So buying a cheeseburger constitutes a moral and practical mistake. And yet:
(2) It would be an even greater moral and practical mistake to invest your efforts into correcting this minor mistake if you could instead get far greater moral payoffs by directing your efforts elsewhere (e.g. donations).
Death by Metaphysics
Just to clarify: Spears & Geruso's argument is that average (and not just total) quality of life will be significantly worse under depopulation relative to stabilization. (See especially the "progress comes from people" section of my review.)
The authors discuss this a bit. They note that even "higher fertility" subcultures are trending down over time, so it's not sufficiently clear that anyone is going to remain "above replacement" in the long run. That said, this does seem the weakest point for thinking it an outright extinction risk. (Though especially if the only sufficiently high-fertility subcultures are relatively illiberal and anti-scientific ones (the Amish, etc.), the loss of all other cultures could still count as a significant loss of humanity's long-term potential! I hope it's OK to note this; I know the mods are wary that discussion in this vicinity can often get messy.)
I wrote "perhaps the simplest and most probable extinction risk". There's room for others to judge another more probable. But it's perfectly reasonable to take as most probable the only one that is currently on track to cause extinction. (It's hard to make confident predictions about any extinction risks.) I think it would be silly to dismiss this simply due to uncertainty about future trends.
What reason is there to think that demographic trends will suddenly reverse? If it isn't guaranteed to reverse, then it is an extinction risk.
I'd guess that (for many readers of the book) less air travel outweighs "buying more" furniture and kids' toys, at least. But the larger point isn't that the change is literally zero, but that it doesn't make a sufficiently noticeable change to near-term emissions to be an effective strategy. It would be crazy to recommend a DINK lifestyle specifically in order to reduce emissions in the next 25 years. Like boycotting plastic straws or ChatGPT.
Updated to add the figure from this paper, which shows no noticeable difference by 2050 (and little difference even after that).
As a general rule, it isn't necessary to agree on the ideal target in order to agree directionally about what to do on present margins. For example, we can agree that it would be good to encourage more effective giving in the population, without committing to the view (that many people would "personally disagree" with) that everyone ought to give to the point of marginal utility, where they are just as desperate for their marginal dollar as their potential beneficiaries are.
The key claim of After the Spike is that we should want to avoid massive depopulation. Whether you'd ideally prefer stabilization, gradual population growth, or growth as fast as we can sustainably maintain without creating worse problems, isn't something that needs to be adjudicated, and in fact seems a distraction from the more universally agreeable verdict that massive depopulation is bad and worth avoiding.
Debate: Depopulation Matters
Everyone has fundamental assumptions. You could imagine someone who disagrees with yours calling them "just vibes" or "presuppositions", but that doesn't yet establish that there's anything wrong with your assumptions. To show an error, the critic would need to put forward some (disputable) positive claims of their own.
The level of agreement just shows that plenty of others share my starting assumptions.
If you take arguments to be "circular" whenever a determined opponent could dispute them, I have news for you: there is no such thing as an argument that lacks this feature. (See my note on the limits of argumentation.)
I agree it's often helpful to make our implicit standards explicit. But I disagree that that's "what we're actually asking". At least in my own normative thought, I don't just wonder about what meets my standards. And I don't just disagree with others about what does or doesn't meet their standards or mine. I think the most important disagreement of all is over which standards are really warranted.
On your view, there may not be any normative disagreement, once we all agree about the logical and empirical facts. I think it's key to philosophy that there is more we can wonder about than just that. (There may not be any tractable disagreement once we get down to bedrock clashing standards, but I think there is still a further question over which we really disagree, even if we have no way to persuade the other of our position.)
It's interesting to consider the meta question of whether one of us is really right about our present metaethical dispute, or whether all you can say is that your position follows from your epistemic standards and mine follows from mine, and there is no further objective question about which we even disagree.
Actually have high integrity, which means not being 100% a utilitarian/consequentialist
Sorry for the necro-reply, but just saw this and wanted to register that I think a 100% utilitarian/consequentialist can still genuinely have high integrity. (I think people are generally quite confused about what a fitting consequentialist mindset looks like. It absolutely is not: "do whatever I naively estimate will maximize expected value, without regard for trustworthiness etc.") See, e.g., Naïve Instrumentalism vs Principled Proceduralism.
It also shows strong (and, in vastly more cases, positive) influence from (what you call) "utilitarian" ideas, though these really ought to be more universal: ideas like that it is better to do more good than less, and that quantification can help us to make such trade-offs on the basis of something other than mere vibes.
Unless there's some reason to think that the negative outweighs the positive, you haven't actually given us any reason to think that "utilitarian influence" is a bad thing.
Quick sanity check: when I look at any other major social movement, it strikes me as vastly worse than EA (per person or $ spent), in ways that are very plausibly attributable to their being insufficiently "utilitarian" (that is, insufficiently concerned with effectiveness, insufficiently wide moral circles, and insufficiently appreciative of how strong our moral reasons are to do more good).
If you're arguing "EA should be more like every other social movement", you should probably first check whether those alternatives are actually doing a better job!