What’s going on with the progress on breeds for the Better Chicken Commitment? I’ve heard it hasn’t been going well. But I also read that the BCC hadn’t actually settled on breeds until after many commitments were made, so we wouldn’t expect progress on breeds until after that, anyway. But I think the approved breeds have been settled for a while now.
MichaelStJules
FWIW, shrimp paste alternatives seem morally ambiguous and have a significant risk of backfiring.
Shrimp paste alternatives would probably increase paste shrimp populations. If you think paste shrimp have overall bad lives naturally, then increasing their populations this way would be bad. If you’re highly uncertain about this, then the effects on their population would be highly morally ambiguous.
Shrimp paste alternatives could increase paste shrimp catch (if they’re overfished; see my recent post).
I don’t know how common this is or will be, but I’ve also seen a few articles about paste shrimp (mostly from India) being used as farmed animal feed, like fishmeal. Shrimp paste alternatives could divert paste shrimp catch towards feed and therefore support and increase aquaculture, including shrimp farming, where fishmeal typically makes up >10% of shrimp diets.[1] However, it could also make it harder for insects to compete as a fishmeal substitute, and so decrease insect farming.
- ^
Whether or not paste shrimp are fed to farmed shrimp, shrimp paste alternatives could increase the supply and reduce the prices of fishmeal and fishmeal substitutes, and so reduce feed costs for shrimp farms.
FWIW, it seems reasonably likely that fishing has increased fish populations on the whole, by disproportionately reducing the populations of more predatory species and increasing the populations of their prey. See Christensen et al., 2014 (only considers fish, not invertebrates) and Bell et al., 2018 (very limited in regional representation).
In general, the effects of fishing on welfare seem quite morally ambiguous, when you consider the effects on population sizes across species, tradeoffs between species, the possibility that their lives are overall bad and the possibility that their lives are overall good: The moral ambiguity of fishing on wild aquatic animal populations.
I also suspect efforts to make fishing more sustainable actually just increase fishing, while outright bans seem politically infeasible; see my other recent post Sustainable fishing policy increases fishing, and demand reductions might, too.
It may be worth considering even interventions that seem less cost-effective than marginal cage-free campaigns, say because:
1. You can gather evidence on their cost-effectiveness and build capacity for the future, when cage-free campaigns are less cost-effective.
2. If the upside is high enough and feedback loops are good enough, you could scale it up if it seems successful or shut it down if not. For example, if it has a 5% chance of being 10x cage-free campaigns and is worthless otherwise, then the EV is only 50% that of cage-free. After a pilot, if you become confident that it will succeed and scale, then you now have a 10x intervention, which would be great. If instead you become confident that it isn’t cost-effective, hopefully you didn’t spend too much to find that out, and then you can stop funding it.
3. Diversifying across intervention types, regions, or species might be instrumentally useful (e.g. for capacity building) or useful if you’re somewhat difference-making risk averse or difference-making ambiguity averse, which I assume most who prioritize animal welfare are.
I suppose for most of these, careful cost-effectiveness modelling can actually capture all of these benefits. For 2, AIM/Charity Entrepreneurship often models the expected benefits and expected costs, taking into account different benefits and costs conditional on success/scale up and separately conditional on failure/shutdown (with good enough feedback loops). You can also think of this like value of information. For 1 and 3, you can also just include the value of indirect benefits like capacity building and value of information.
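The pilot-then-scale expected-value reasoning can be sketched with illustrative numbers (the scale budget and pilot cost below are hypothetical, and the pilot is assumed to fully reveal whether the intervention works):

```python
p = 0.05           # chance the intervention is 10x cage-free campaigns
x = 10.0           # cost-effectiveness multiple if it works (cage-free = 1)
scale_budget = 1_000_000   # hypothetical full-scale spend
pilot_cost = 50_000        # hypothetical cheap pilot

# Committing the full budget blindly: EV is only 0.5x cage-free.
ev_blind = p * x
print(ev_blind)  # 0.5

# Pilot first, scale only on success (pilot assumed to reveal the truth):
expected_value = p * x * scale_budget          # value accrues only on success
expected_cost = pilot_cost + p * scale_budget  # full spend only on success
print(expected_value / expected_cost)          # 5.0x cage-free
```

With a cheap enough pilot and reliable feedback, conditionally scaling dominates blind commitment, which is the value-of-information point here.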
Interesting!
Fleurbaey and Voorhoeve wrote a related paper: https://doi.org/10.1093/acprof:oso/9780199931392.003.0009
FWIW, GPT said the greenhouse effect is not stronger locally to the emissions. So, I would guess that if you can offset and emit the same kind of greenhouse gas molecules roughly simultaneously, it would be very unlikely we’d be able to predict which regions are made worse off by this than neither emitting nor offsetting.
Would precision farming decrease costs or increase outputs (reduce mortality, increase growth) much compared to standard conventional factory farming? It could reduce labour costs, increase energy and equipment costs, and have no effect on feed and juvenile costs. Feed often (e.g. for chicken, salmon) accounts for around 50% or more of the cost of production in factory farming. So precision farming could only reduce costs so much.
I think there can be multiple benefits for apparently redundant writing:
Bringing more attention and interest to a topic, creating more space for discussing it
Having alternative write ups that are more accessible/attractive to some people, because people have different preferences over writing structure, styles, lengths, etc.
Identifying areas of disagreement or things to refine, red teaming
For your own understanding, to learn more about the topic and get feedback from others
But I do expect diminishing marginal returns in the benefits from others reading your work the more “redundant” it is. If you’re aiming for impact through influencing others with your writing, you should keep in mind whose behaviour you want to influence, what you could accomplish by doing so, and how to best do that with your writing.
He’s the guy farthest left, next to the panel host. He just looks very different now.
He said he was on a panel at EA Global and mentions PlayPumps, a favourite EA example in this 2015 post. Here’s the YouTube video of the EA Global panel discussion. EDIT: He’s the guy farthest left, next to the panel host.
I don’t think it’s true that other things are equal on the intuition of neutrality, after saying there are more deaths in A than B. The lives and deaths of the contingent/future people in A wouldn’t count at all on symmetric person-affecting views (narrow or wide). On some asymmetric person-affecting views, they might count, but the bad lives count fully, while the additional good lives only offset (possibly fully offset) but never outweigh the additional bad lives, so the extra lives and deaths need not count on net.
On the intuition of neutrality, there are more deaths that count in B, basically except if you’re an antinatalist (about this case).
What person-affecting views satisfying neutrality do you imagine would recommend B/extinction/taking precautions against A here?
For an argument against neutrality that isn’t just against antinatalism, I think you want to define B so that it’s better than or as good as A for necessary people. For example, the virus in B makes everyone infertile without killing them (but the virus in A kills people). Or, fewer people are killed early on in B, and the rest decide not to have children. Or, the deaths in A (for the necessary people) are painful and extended, but painless in B.
Granted, but this example presents just a binary choice, with none of the added complexity of choosing between three options, so we can’t infer much from it.
I can add any number of other options, as long as they respect the premises of your argument and are “unfair” to the necessary number of contingent people. What specific added complexity matters here and why?
I think you’d want to adjust your argument, replacing “present” with something like “the minimum number of contingent people” (and decide how to match counterparts if there are different numbers of contingent people). But this is moving to a less strict interpretation of “ethics being about affecting persons”. And then I could make your original complaint here against Dasgupta’s approach against the less strict wide interpretation.
Well, there is a necessary number of “contingent people”, which seems similar to having necessary (identical) people.
But it’s not the same, and we can argue against it on a stricter interpretation. The difference seems significant, too: no specific contingent person is or would be made worse off. They’d have no grounds for complaint. If you can’t tell me for whom the outcome is worse, why should I care? (And then I can just deny each reason you give as not in line with my intuitions, e.g. ”… so what?”)
Stepping back, I’m not saying that wide views are wrong. I’m sympathetic to them. I also have some sympathy for (asymmetric) narrow views for roughly the reasons I just gave. My point is that your argument or the way you argued could prove too much if taken to be a very strong argument. You criticize Dasgupta’s view from a stricter interpretation, but we can also criticize wide views from a stricter interpretation.
I could also criticize presentism, necessitarianism and wide necessitarianism for being insensitive to the differences between A+ and Z for persons affected. The choice between A, A+ and Z is not just a choice between A and A+ or between A and Z. Between A+ and Z, the “extra” persons exist in both and are affected, even if A is available.
I think there is a quite straightforward argument why IIA is false. (...)
I think these are okay arguments, but IIA still has independent appeal, and here you need a specific argument for why Z vs A+ depends on the availability of A. If the argument is that we should do what’s best for necessary people (or necessary people + necessary number of contingents and resolving how to match counterparts), where the latter is defined relative to the set of available options, including “irrelevant options”, then you’re close to assuming IIA is false, rather than defending it. Why should we define that relative to the option set?
And there are also other resolutions compatible with IIA. We can revise our intuitions about some of the binary choices, possibly to incomparability, which is what Dasgupta’s view does in the first step.
Or we can just accept cycles.[1]
I don’t see why this would be better than doing other comparisons first.
It is constrained by “more objective” impartial facts. Going straight for necessitarianism first seems too partial, and unfair in other ways (by prioritarian, egalitarian and most other plausible impartial standards). If you totally ignore the differences in welfare for the extra people between A+ and Z (not just outweighed, but taken to be irrelevant) when A is available, it seems you’re being infinitely partial to the necessary people.[2] Impartiality is somewhat more important to me than my person-affecting intuitions here.
I’m not saying this is a decisive argument or that there is any, but it’s one that appeals to my intuitions. If your person-affecting intuitions are more important or you don’t find necessitarianism or whatever objectionably partial, then you could be more inclined to compare another way.
- ^
We’d still have to make choices in practice, though, and a systematic procedure would violate a choice-based version of IIA (whichever option we choose in the 3-option case of A, A+ and Z would not be chosen in a binary choice against one of the other available options).
- ^
Or rejecting full aggregation, or aggregating in different ways, but we can consider other thought experiments for those possibilities.
Still, I think your argument is in fact an argument for antinatalism, or can be turned into one, based on the features of the problem to which you’ve been sensitive here so far. If you rejected antinatalism, then your argument proves too much and you should discount it, or you should be more sympathetic to antinatalism (or both).
You say B prevents more deaths, because it will prevent deaths of future people from the virus. But it prevents those future deaths by also preventing those people from existing.
So, for B to be better than A, you’re saying it’s worse for extra people to exist than not exist, and the reason it’s worse is that they will die. Or that they will die early, but early relative to what? There’s no counterfactual in which they live longer, the way you’ve set up the problem. They die early relative to other people around them, or perhaps without achieving major life goals they would have achieved if they hadn’t died early, I guess.
Similarly, going extinct now prevents more deaths from all causes, including age-related ones, but also everything that causes people to die early, like car accidents, war, diseases in young people, etc. The effects are essentially the same.
What’s special about the virus in this hypothetical vs all other causes of (early) death in humans?
So, we should prevent (early) deaths by going extinct now, or by collectively refusing to have children, if the alternative is the status quo with many (early) deaths for a long time. That looks like an in-principle antinatalist position.
Thanks for providing these external benchmarks and making it easier to compare! Do you mind if I update the text to include a reference to your comments?
Feel free to!
Oh, I didn’t mean for you to define the period explicitly as a fixed time interval. I assume this can vary by catastrophe. Like maybe population declines over 5 years with massive crop failures. Or, an engineered pathogen causes massive population decline in a few months.
I just wasn’t sure what exactly you meant. Another interpretation would be that P_f is the total post-catastrophe population, summing over all future generations, and I just wanted to check that you meant the population at a given time, not aggregating over time.
Expected value density of the benefits and cost-effectiveness of saving a life
You’re modelling the cost-effectiveness of saving a life conditional on catastrophe here, right? I think it would be best to be more explicit about that, if so. Typically x-risk interventions aim at reducing the risk of catastrophe, not the benefits conditional on catastrophe. Also, it would make it easier to follow.
Denoting the pre- and post-catastrophe population by and , I assume
Also, to be clear, this is supposed to be ~immediately pre-catastrophe and ~immediately post-catastrophe, right? (Catastrophes can probably take time, but presumably we can still define pre- and post-catastrophe periods.)
Another benchmark is GiveWell-recommended charities, which save a life for around $5,000. Assuming that’s 70 years of life saved (mostly children), that would be 70 years of human life/$5000 = 0.014 years of human life/$. People spend about 1/3rd of their time sleeping, so it’s around 0.0093 years of waking human life/$.
Then, taking ratios of cost-effectiveness, that’s about 7 years of disabling chicken pain prevented per year of waking human life saved.
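The benchmark arithmetic can be checked directly (the 0.067 years of disabling pain averted per dollar is the figure from the estimates being discussed):

```python
cost_per_life = 5_000   # $/life saved, GiveWell-recommended charities
years_per_life = 70
human_years_per_dollar = years_per_life / cost_per_life
print(human_years_per_dollar)  # 0.014 years of human life/$

waking_years_per_dollar = human_years_per_dollar * 2 / 3  # ~1/3 of life asleep
print(round(waking_years_per_dollar, 4))  # 0.0093

chicken_pain_years_per_dollar = 0.067  # years of disabling pain averted/$
ratio = chicken_pain_years_per_dollar / waking_years_per_dollar
print(round(ratio, 1))  # ~7.2
```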
Then, we could consider:
How bad disabling pain is in a human vs a chicken
How bad human disabling pain is vs how valuable additional waking human life is
Indirect effects (of the additional years of human life, influences on attitudes towards nonhuman animals, etc.)
Measures aimed at addressing thermal stress and improving hen access to feed and water show promise in cost-effectively reducing significant amounts of hours spent in pain. Example initial estimates:

| Welfare issue | Total impact [hours of disabling pain averted/farm] | Cost efficacy [$/hen] | Cost efficacy [$ cents/hour of disabling pain] |
|---|---|---|---|
| Thermal stress | 87.5k (46.25k–150k) | 0.77 | 1.11 (0.65–2.09) |
| Limited access to water | 23.75k (12.5k–35k) | 0.17 | 0.9 (0.61–1.71) |
| Limited access to feed (feeders) | 162.5k (103.75k–212.5k) | 0.22 | 0.17 (0.13–0.27) |
| Limited access to feed (feeders + feed) | 362.5k (250k–475k) | 1.43 | 0.49 (0.38–0.72) |

For the most promising, limited access to feed (feeders), at 0.17 cents/hour of disabling pain, this is around 0.067 years of disabling pain averted/$. It’s worth benchmarking against corporate campaigns for comparison. From Duffy, 2023, using disabling pain-equivalent:
1.7 years of suffering avoided per dollar that was spent on cage-free campaigns, with a range between 0.23 and 5.0 years per dollar.
At first, this looks much less cost-effective: 1.7/0.067 ≈ 25x. However, Emily Oehlsen from Open Phil said
We think that the marginal FAW funding opportunity is ~1/5th as cost-effective as the average from Saulius’ analysis.
And Duffy’s estimate is based on the same analysis by Saulius. So, more like 5x less cost-effective. However, Duffy’s estimate also included milder pains:
Table 25: Cage-free corporate campaign cost-effectiveness by pain type

| Pain type | Lower Bound (yrs. pain avoided/$/yr.) | Average Estimate (yrs. pain avoided/$/yr.) | Upper Bound (yrs. pain avoided/$/yr.) | Weight |
|---|---|---|---|---|
| Excruciating | -0.000002 | -0.000002 | -0.000002 | 5 |
| Disabling | 0.019 | 0.052 | 0.107 | 1 |
| Hurtful | 0.10 | 0.39 | 0.88 | 0.15 |
| Annoying | 0.35 | 0.91 | 1.7 | 0.01 |
| Suffering-equivalent | 0.05 | 0.12 | 0.23 | 1 |

More than half of the equivalent hours of disabling pain is actually not from disabling pain at all, but from hurtful pain. So a fairer comparison would either omit the hurtful pain for corporate campaigns or also include hurtful pain for this other intervention. This could bring us closer to around 2.5x, as a first guess, which seems near enough to the funding bar.
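The comparison chain can be made explicit. Using the disabling-pain row’s share of the suffering-equivalent average as the like-for-like adjustment is my own assumption; it lands near the ~2.5x first guess:

```python
campaign = 1.7             # yrs disabling pain-equivalent/$, Duffy 2023 (Saulius-based average)
feed_intervention = 0.067  # yrs disabling pain/$, limited feed access (feeders)

print(round(campaign / feed_intervention))  # ~25x at first glance

# Open Phil: marginal FAW opportunities ~1/5 as cost-effective as the average
marginal_campaign = campaign / 5
print(round(marginal_campaign / feed_intervention, 1))  # ~5.1x

# Like-for-like: keep only the disabling-pain row's share of the
# suffering-equivalent total (0.052 of 0.12 in Duffy's Table 25)
disabling_share = 0.052 / 0.12
like_for_like = marginal_campaign * disabling_share / feed_intervention
print(round(like_for_like, 1))  # ~2.2x
```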
On the other hand, I picked the most promising of the interventions, and it’s less well-studied and tested than corporate campaigns, so we might expect some optimizer’s curse or regression towards being less cost-effective.
We should separate whether the view is well-motivated from whether it’s compatible with “ethics being about affecting persons”. It’s based only on comparisons between counterparts, never between existence and nonexistence. That seems compatible with “ethics being about affecting persons”.
We should also separate plausibility from whether it would follow on stricter interpretations of “ethics being about affecting persons”. An even stricter interpretation would also tell us to give less weight to or ignore nonidentity differences using essentially the same arguments you make for A+ over Z, so I think your arguments prove too much. For example,
Alice with welfare level 10 and 1 million people with welfare level 1 each
Alice with welfare level 4 and 1 million different people with welfare level 4 each
You said “Ruling out Z first seems more plausible, as Z negatively affects the present people, even quite strongly so compared to A and A+.” The same argument would support 1 over 2.
Then you said “Ruling out A+ is only motivated by an arbitrary-seeming decision to compare just A+ and Z first, merely because they have the same population size (...so what?).” Similarly, I could say “Picking 2 is only motivated by an arbitrary decision to compare contingent people, merely because there’s a minimum number of contingent people across outcomes (… so what?)”
So, similar arguments support narrow person-affecting views over wide ones.
The fact that non-existence is not involved here (a comparison to A) is just a result of that decision, not of there really existing just two options.
I think ignoring irrelevant alternatives has some independent appeal. Dasgupta’s view does that at step 1, but not at step 2. So, it doesn’t always ignore them, but it ignores them more than necessitarianism does.
I can further motivate Dasgupta’s view, or something similar:
There are some “more objective” facts about axiology or what we should do that don’t depend on who presently, actually or across all outcomes necessarily exists (or even wide versions of this). What we should do is first constrained by these “more objective” facts. Hence something like step 1. But these facts can leave a lot of options incomparable or undominated/permissible. I think all complete, transitive and independent of irrelevant alternatives (IIA) views are kind of implausible (e.g. the impossibility theorems of Arrhenius). Still, there are some things the most plausible of these views can agree on, including that Z>A+.
Z>A+ follows from Harsanyi’s theorem, extensions to variable population cases and other utilitarian theorems, e.g. McCarthy et al., 2020, Theorem 3.5; Thomas, 2022; sections 4.3 and 5; Gustafsson et al., 2023; Blackorby et al., 2002, Theorem 3.
Z>A+ follows from anonymous versions of total utilitarianism, average utilitarianism, prioritarianism, egalitarianism, rank-discounted utilitarianism, maximin/leximin, variable value theories and critical-level utilitarianism. Of anonymous, monotonic (Pareto-respecting), transitive, complete and IIA views, it’s only really (partially) ~anti-egalitarian views (e.g. increasing marginal returns to additional welfare, maximax/leximax, geometrism, views with positive lexical thresholds), which sometimes ~prioritize the better off more than ~proportionately, that reject Z>A+, as far as I know. That’s nearly a consensus in favour of Z>A+, and the dissidents have more plausible counterparts that support Z>A+.
On the other hand, there’s more disagreement on A vs A+, and on A vs Z.
Whether or not this step is person-affecting could depend on what kinds of views we use or the facts we’re constrained by, but I’m less worried about that than what I think are plausible (to me) requirements for axiology.
After being constrained by the “more objective” facts in step 1, we should (or are at least allowed to) pick between remaining permissible options in favour of necessary people (or minimizing harm or some other person-affecting principle). Other people wouldn’t have reasonable impartial grounds for complaint with our decisions, because we already addressed the “more objective” impartial facts in 1.
If you were going to defend utilitarian necessitarianism, i.e. maximize the total utility of necessary people, you’d need to justify the utilitarian bit. But the most plausible justifications for the utilitarian bit would end up being justifications for Z>A+, unless you restrict them apparently arbitrarily. So then, you ask: am I a necessitarian first, or a utilitarian first? If you’re utilitarian first, you end up with something like Dasgupta’s view. If you’re a necessitarian first, then you end up with utilitarian necessitarianism.
Similarly if you substitute a different wide, anonymous, monotonic, non-anti-egalitarian view for the utilitarian bit.
Do you think broiler breed ballot initiatives are worth trying or at least investigating further, given the potential upside and cost-effectiveness of cage-free ballot initiatives (Duffy, 2023)? EDIT: Also see Khimasia, 2023 on potential broiler ballot initiatives, from CE/AIM’s research program.
Have there been surveys/polls on potential broiler initiatives (target states, wording, etc.)?
To me, they seem quite promising, but the first step should be further investigation, e.g. finding the best wording for expected impact (impact if passed x probability of passing).