Jim Buhler
if we’re clueless whether Emily will feel pain or not then the difference disappears. In this case I don’t have the pro-not-shooting bracketing intuition.
Should this difference matter if we’re not difference-making risk-averse or something? In both cases, C is better for Emily in expectation (the same way reducing potential termite suffering is better for termites, in expectation, even if it might make no difference because they might not be sentient).
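To make the “better in expectation” claim concrete, here’s a toy calculation (the symbols are illustrative and mine, not part of the thought experiment): with credence $p > 0$ that Emily would feel the shoulder pain, and $h > 0$ the disvalue of that pain, not shooting spares her
\[ \mathbb{E}[\text{pain}] = p \cdot h + (1 - p) \cdot 0 = p \cdot h > 0, \]
so it is better for her in expectation for any positive credence $p$, exactly parallel to the termite case with any positive credence in termite sentience.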
Now, new thought experiment. Consider whatever intervention you find robustly overall good in the near-term (without bracketing out any near-term effect) and replace A, B, and C with the following:
A’) Bracket in the bad long-term effects (-> don’t intervene)
B’) Bracket in the good long-term effects (-> intervene)
C’) Bracket in the near-term effects (-> intervene)
Do you have the pro-C’ intuition, then? If yes, what’s different from the sniper case?
Interesting! This resembles Michael St. Jules’ hedging proposal.
But the animal representatives, even if they aren’t thrilled by retaining another omnivore, have more pressing priorities than trying to help animals by eliminating meat-eaters one by one.
Is it that obvious? I find it hard to come up with interventions that reduce farmed animal suffering (and farmed animal suffering only, ignoring all the rest) more robustly than “minimize the number of entities that incidentally perpetuate this farmed animal suffering”. I feel like this should actually be pretty high up the list for someone who strictly spotlights farmed animals (like the farmed animal representatives are supposed to in your spotlighting parliament, if I understand correctly).
Why don’t the managers of the Animal Welfare Fund support preventing the existence of more potential meat-eaters, then? (via, e.g., increasing x-risks.) I presume because:
- most of their biggest donors don’t spotlight farmed animals, are concerned about effects on other things, and would stop donating to the fund if the managers were to do that;
- the fund managers themselves, like most farmed animal people, also don’t spotlight farmed animals and are concerned about effects on other things.
Curious if you agree.
For the moral trade between the representatives of human victims of malaria and farmed animal representatives to be fair, in your setup, the preferences of the latter would have to actually spotlight farmed animals the same way the former spotlight human victims of malaria. I.e., the preferences of farmed animal representatives in your spotlighting parliament should not be those of real farmed animal advocates who are not spotlighting farmed animals (otherwise, they would obviously be pro-x-risks and stuff despite the downsides for other beings, the same way the representatives of human malaria victims are anti-poverty despite the meat-eater problem).
I would still say there are actions which are robustly beneficial in expectation, such as donating to SWP. It is possible SWP is harmful, but I still think donating to it is robustly better than killing my family, friends, and myself, even in terms of increasing impartial welfare.
It’s kinda funny to reread this 6 months later. Since then, the sign of your precise best guess flipped twice, right? You argued somewhere (can’t find the post) that shrimp welfare actually was slightly net bad after estimating that it increases soil animal populations. Later, you started weakly believing animal farming actually decreases the number of soil nematodes (which morally dominate in your view), which makes shrimp welfare (weakly) good again.
(Just saying this to check if that’s accurate because that’s interesting. I’m not trying to lead you into a trap where you’d be forced to buy imprecise credences or retract the main opinion you defend in this comment thread. As I suggest in this comment, let’s maybe discuss stuff like this on a better occasion.)
I suspect Vasco is reasoning about the implications of epistemic principles (applied to our evidence) in a way I’d find uncompelling even if I endorsed precise Bayesianism.
Oh, so for the sake of argument, assume the implications he sees are compelling. You are unsure whether your good epistemic principles E imply (a) or (b).[1]
So then, the difference between (a) and (b) is purely empirical, and MNB does not allow me to compare (a) and (b), right? This is what I’d find a bit arbitrary, at first glance. The isolated fact that the difference between (a) and (b) is technically empirical and not normative doesn’t feel like a good reason to say that your “bracket in consequentialist bracketing” move is OK but not the “bracket in ex post neartermism” move (with my generous assumptions in favor of ex post neartermism).
- ^
I don’t mean to argue that this is a reasonable assumption. It’s just a useful one for me to understand what moves MNB does and does not allow. If you find this assumption hard to make, imagine that you learn that we likely are in a simulation that is gonna shut down in 100 years and that the simulators aren’t watching us (so we don’t impact them).
Since I find impartial consequentialism and indeterminate beliefs very well-motivated, and these combined with consequentialist bracketing seem to imply neartermism (as Kollin et al. (2025) argue), I think it’s plausible that metanormative bracketing implies neartermism.
Say I find ex post neartermism (Vasco’s view that our impact washes out, ex post, after say 100 years) more plausible than consequentialist bracketing being both correct and action-guiding.
My favorite normative view (impartial consequentialism + plausible epistemic principles + maximality) gives me two options. Either:
(a) long-term effects dominate, and I’m clueless.
(b) near-term effects dominate, and I know what to do (without having to use consequentialist bracketing, let’s assume).
Would you say that what dictates my view on (a) vs. (b) is my uncertainty between different epistemic principles, such that I can dichotomize my favorite normative view based on the epistemic drivers of (a) vs. (b)? (Such that, then, MNB allows me to bracket out the new normative view that implies (a) and bracket in the new normative view that implies (b), assuming no sensitivity to individuation.)
If not, I find it a bit arbitrary that MNB allows your “bracket in consequentialist bracketing” move and not this “bracket in ex post neartermism” move.
I was implicitly assuming the probability of hitting the kid or the terrorist is high enough that where the bullet ends strictly matters more than Emily’s pain. If I misunderstood you and this doesn’t address your point, we could also assume that Emily only might have shoulder pain if she takes the shot. Then the difference you point to disappears, right? (And this changes nothing about the thought experiment, assuming risk neutrality and stuff.)
This also makes this second difference disappear, right? On B and C, we’re actually clueful on the out-bracket (the terrorist dwarfs Emily, so it’s better to shoot in expectation). So it’s symmetric to cluefulness on the out-bracket on A.
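To spell out the stakes assumption above (a toy formalization; the symbols are illustrative and mine, not part of the thought experiment): let $p$ be the probability the bullet hits the kid or the terrorist, $S$ the magnitude of the stakes of where the bullet ends, and $e$ the disvalue of Emily’s shoulder pain. I was implicitly assuming
\[ p \cdot S \gg e. \]
Making Emily’s pain itself merely probable (say with probability $q$) just replaces $e$ with $q \cdot e \le e$, so the dominance, and the thought experiment, are unchanged under risk neutrality.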
Spent some more time thinking about this, and I think I mostly lost my intuition in favor of bracketing in Emily’s shoulder pain. I thought I’d share here.
The problem
In my contrived sniper setup, I’ve gotta do something, and my preferred normative view (impartial consequentialism + good epistemic principles + maximality) is silent. Options I feel like I have:
A) Bracket out the kid, but not the terrorist (-> shooting is better)
B) Bracket out the terrorist, but not the kid (-> no shooting is better)
C) Bracket out both the kid and the terrorist, but not Emily[1] (-> no shooting is better)
D) Flip a coin, whatever. This illustrates radical cluelessness.
All these options feel arbitrary, but I have to pick something.
Comparing poisons
Picking D demands accepting the arbitrariness of letting perfect randomness guide our actions. We can’t do worse than this.[2] It is the total-arbitrariness baseline we’re trying to beat.
Picking A or B demands accepting the arbitrariness of favoring one over the other, while my setup does not give me any good reason to do so (and A and B give opposite recommendations). I could pick A by sorta wagering on, e.g., an unlikely world where the kid dies of Reye’s syndrome (a disease that affects almost only children) before the potential bullet hits anything. But I could then also pick B by sorta wagering on the unlikely world where a comrade of the terrorist standing near him turns on him and kills him. And I don’t see either of these two wager moves as more warranted than the other.[3]

Picking C, similarly, demands accepting the arbitrariness of favoring it over A (which gives the opposite recommendation), while my setup does not give me any good reason to do so. I could pick C by wagering on, e.g., an unlikely world where time ends between the potential shot hurting Emily’s shoulder and the moment the potential bullet hits something. But I could then also pick A by wagering on the unlikely world where the kid dies of Reye’s syndrome anyway. And the same problem as above.[4] And this is what Anthony’s first objection to bracketing gestures at, I guess.
While I have a strong anti-D intuition with this sniper setup, it doesn’t favor C over A or B for me, at the very moment of writing.[5]
Should we think that our reasons for C are “more grounded” than our reasons for A, or something like that? I don’t see why. Is there a variant of this sniper story where it seems easier to argue that it is the case (while preserving the complex cluelessness assumption)? And is such a variant a relevant analogy to our real-world predicament?
- ^
Without necessarily assuming persons-based bracketing (for A, B, or C), but rather whatever form of bracketing results in ignoring the payoffs associated with one or two of the three relevant actors.
- ^
Our judgment calls can very well be worse than random due to systematic biases (and I remember reading somewhere in the forecasting literature that this happens). But if we believe that’s true of us, we can just do the exact opposite of what our judgment calls say, and this beats a coin flip.
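In toy terms: if our judgment calls pick the better option with probability $q < 1/2$, then systematically doing the opposite picks it with probability
\[ 1 - q > \tfrac{1}{2}, \]
which beats the coin flip’s 1/2.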
- ^
It feels like I’m just adding non-decisive mildly sweet considerations on top of the complex cluelessness pile I already had (after thinking about the different wind layers, the Earth’s rotation, etc). This will not allow me to single out one of these considerations as a tie-breaker.
- ^
This is despite some apparent kind of symmetry existing only between A and B (not between C and A) that @Nicolas Mace recently pointed to in some doc comment—symmetry which may feel normatively relevant although it feels superficial to me at the very moment of writing.
- ^
In fact, given the apparent stakes difference between Emily’s shoulder pain and where the bullet ends, I may be more tempted to act in accordance with A or B, deciding between the two based on what seems to be the least arbitrary tie-breaker. However, I’m not sure whether this temptation is, more precisely, one in favor of endorsing A or B, or in favor of rejecting cluelessness and the need for bracketing to begin with, or something else.
If most of the value we can influence is in the far future
To be clear, you don’t necessarily assume this in the paper, and you don’t need to, right? You need bracketing to escape cluelessness paralysis, even if you merely think it’s indeterminate whether most of the value we can influence is in the far future, afaiu.
One could try to argue that the second-order effects of near-term interventions are negligible in expectation (see “the washing out hypothesis”). But I don’t think this is plausible.
So even if this were plausible (as Vasco thinks, for instance), this wouldn’t be enough to think we don’t need bracketing. One would need to have determinate-ish beliefs that rule out the possibility of far future effects dominating.
Yup, something a variety of views can get behind. E.g., not “buying beef”.
For “consensual EAA interventions” above, I think I was thinking more “not something EAs see as ineffective, like welfare reforms for circus animals”. If this turned out to be the safest animal intervention, I suspect this wouldn’t convince many EAs to consider it. But if, say, developing alternatives to rodents as snake food turned out to be very safe, this could weigh a lot in its favor for them.
Hey, sorry for reopening, but I’m very curious to get your take on this:
Say you have been asked to evaluate the overall[1] utilitarian impact of the very first Christianity-spreaders during the first century AD (like Paul the Apostle) on the world until now (independently of their intentions, ofc). You have perfect information on what’s causally counterfactually related to their actions. How much of their impact (whether good or bad) is on beings between AD 0 and 200 vs. on beings between AD 200 and now? (Making the assumptions you specifically make about nematodes and stuff; don’t take anyone else’s perspective.)
If mostly the former, how do you explain that?
If mostly the latter, what’s the difference between their ex post impact and yours? Why is most of their ex post impact longtermist-ish while yours would be neartermist? Why would, e.g., most of the people helping nematodes thanks to you (including very indirectly, through your influence on others before them) be concentrated within the next hundred years?
- ^
I.e., factoring in nematodes and stuff.
It seems highly plausible that you could counterfactually affect many more acres of this land (and thus many more soil animals) through building houses or other structures than trying to maintain factory farms.
This would not necessarily undermine your overall argument but, interestingly, Tomasik’s (2016-2022) estimates seem somewhat in tension with this claim. According to him, it’s really hard to beat “buying beef” in terms of cost-effectiveness to reduce wild invertebrate populations.[1] (Not saying I agree or that I think we should reduce wild invertebrate populations.)
- ^
Although he omits the fact that agriculture might in fact increase soil nematode populations, as also pointed out by Vasco in another comment thread here.
Aha oops very sorry, fixed ;)
An informal research agenda on robust animal welfare interventions and adjacent cause prioritization questions
Context: As I started filling out this expression of interest form to be a mentor for Sentient Futures’ project incubator program, I came up with the following list of topics I might be interested in mentoring projects on. And I thought it was worth sharing here. :) (Feedback welcome!)
Animal-welfare-related research/work:
1. What are the safest (i.e., most backfire-proof)[1] consensual EAA interventions? (Overlaps with #3.c and may require #6.)
2. How should we compare their cost-effectiveness to that of interventions that require something like spotlighting or bracketing (or more thereof) to be considered positive?[2] (May require A.)
3. Robust ways to reduce wild animal suffering:
   a. New/underrated arguments regarding whether reducing some wild animal populations is good for wild animals (a brief overview of the academic debate so far here).
   b. Consensual ways of affecting the size of some wild animal populations (contingent planning that might become relevant depending on results from the above kind of research).
   c. How do these and the safest consensual EAA interventions (see #1) interact?
4. Preventing the off-Earth replication of wild ecosystems.
5. Uncertainty on moral weights (some relevant context in this comment thread):
   a. Red-teaming of different moral weights that have been explicitly proposed and defended (by Rethink Priorities, Vasco Grilo, …).
   b. How and how much do cluelessness arguments apply to moral weights and inter-species tradeoffs?
   c. What actions are robust to severe uncertainty about inter-species tradeoffs? (Overlaps with #1.)
6. Considerations regarding the impact of saving human lives (cf. top-GiveWell charities) on farmed and wild animals. (May require #3 and #5.)
7. The impact of agriculture on soil nematodes and other numerous soil animals, in terms of total population.
8. Evaluating the backfire risks of different welfare reforms for farmed insects, shrimp, fish, or chickens (see DiGiovanni 2025).
9. Other things related to deep uncertainty in animal welfare (see DiGiovanni 2025 and Graham 2025 for context).
10. Red-teaming the cost-effectiveness analyses made by key actors on different animal welfare interventions (especially those relevant to anything listed above).
More fundamental philosophical or psychological stuff relevant to cause prio:
A) Under cluelessness, what forms of bracketing (or different solutions) make most sense to guide our actions?
B) New/underrated arguments for being particularly worried about the suffering of sentient beings (rather than about pleasure or other things).
C) What explains the fact that some EA animal advocates buy suffering-focused ethics and others don’t? What are the cruxes? What persuaded them? Are there social backgrounds that determine someone’s degree of (non-)sympathy for suffering-focused ethics?
D) How to avoid reducing the credibility of any of the (fairly niche) kinds of work in these two lists?
   - How do we anticipate very understandable reactions like this one when talking about nematodes and/or indirect effects on wild animals? (E.g., how do we make clear what this work implies and does not imply?)
- ^
I.e., most ecologically inert, and most avoidant of substitution effects, funging, and other backfire risks.
- ^
See the last paragraph of this post section from Graham and this comment from Stevenson. This post section from DiGiovanni on an adjacent topic is also indirectly relevant.
Jim Buhler’s Quick takes
Thank you :)
I guess I recommend reading this overview (or this longer one?) and/or DiGiovanni’s Q&A, and then checking the references in these that discuss whatever they take their own crux to be.
This will likely point them to some parts of DiGiovanni’s (2025) sequence The challenge of unawareness for impartial altruist action guidance, which is the best and most comprehensive resource (in terms of arguments) on the topic imo.
I also thought your comment didn’t deserve to get downvoted :‘(, even though I disagreed and thought it partly missed my point (I ofc didn’t downvote it, tho). Even the number of upvotes on Mal’s comment responding to yours feels a bit violent, actually. I think people should maybe hold off on upvoting when it’s not necessary. They can just agree-vote.
I think Mal’s, James’, and Tristan’s potential explanations for why this happened are pretty plausible.
But, also, as I suggest in response to Mal, it’s probably just one single person, so :shrug:, I guess. :)
The other thing I’m not sure I understand is how much weight a single individual’s downvote can have—is there any chance that a few AW people have a ton of karma here, so that just a few people downvoting can take you negative in a way that wouldn’t happen as much in GHD?
It’s probably the strong downvote of one single user (especially given that there were only two disagree votes and one of them was mine). If I strongly downvote the same comment, it goes from 0 to −5 karma! (and I don’t think I’m a gigantic outlier karma monster).
A list of resources on Cluelessness
I partly share your pessimism. I hope we’ll have occasions to discuss specific proposals soon!
I would be surprised if such interventions were the ones increasing welfare the most cost-effectively.
If you define cost-effectiveness as something close to “what’s best in expectation according to my specific favorite among all the plausible ways of comparing welfare across individuals”, I agree. I would also be surprised. I’m just—as you probably have realized—very sympathetic to Anthony and Mal’s arguments (in the above-linked posts) that this is not what we should look for when we seek cost-effectiveness.
I saw you already discussed this and adjacent cruxes with them. I might write something relevant to this (precise vs imprecise beliefs, etc.) in the very context of moral weights at some point. I’ll get back to you then, and maybe we’ll be able to hit finer-grained cruxes and advance this discussion. :)
What about trophic cascades? Maybe the populations most directly affected and reduced by aquatic noise were essential for keeping overall wild animal populations down?
Do you think aquatic noise is like some specific forms of fishing that determinately reduce overall populations? Is it because you think it directly affects/reduces all populations (unlike some other specific forms of fishing) such that trophic cascades can hardly compensate?