We wanted to focus on a specific and somewhat manageable question related to AI vs. non-AI cause prioritization. You’re right that it’s not the only important question to ask. If you think the following claim is true - ‘non-AI projects are never undercut but always outweighed’ - then it doesn’t seem like an important question at all. I doubt that claim holds generally, for reasons that were presented in the piece. When deciding what to prioritize, there are also broader strategic questions that matter - how money and effort are being allocated by other parties, what your comparative advantage is, etc. - that we don’t touch on here.
Hayley Clatterbuck
By calling out one kind of mistake, we don’t want to incline people toward making the opposite mistake. We are calling for more careful evaluations of projects, both within AI and outside of AI. But we acknowledge the risk of focusing on just one kind of mistake (and focusing on an extreme version of it, to boot). We didn’t pursue comprehensive analyses of which cause areas will remain important conditional on short timelines (and the analysis we did give was pretty speculative), but that would be a good future project. Very near future, of course, if short-ish timelines are correct!
You make a helpful point. We’ve focused on a pretty extreme claim, but there are more nuanced discussions in the area that we think are important. We do think that “AI might solve this” can take chunks out of the expected value of lots of projects (and we’ve started kicking around some ideas for analyzing this). We’ve also done some work on how the background probabilities of x-risk affect the expected value of x-risk projects.
I don’t think that we can swap one general heuristic (e.g. AI futures make other work useless) for a more moderate one (e.g. AI futures reduce EV by 50%). The possibilities that “AI might make this problem worse” or “AI might raise the stakes of decisions we make now” can also amplify the EV of our current projects. Figuring out how AI futures affect cost-effectiveness estimates today is complicated, tricky, and necessary!
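To make that concrete, here’s a minimal sketch (with made-up numbers, purely for illustration) of how conditioning a project’s value on different AI scenarios can either shrink or amplify its expected value relative to a naive estimate:

```python
# Hypothetical illustration: how AI scenarios can cut or amplify a project's EV.
# All probabilities and values are invented for the sake of the example.

scenarios = {
    # scenario: (probability, value of the project conditional on that scenario)
    "AI solves the problem anyway":               (0.30, 0.0),    # project is undercut
    "AI is irrelevant to the problem":            (0.50, 100.0),  # business as usual
    "AI makes the problem worse / raises stakes": (0.20, 300.0),  # project matters more
}

expected_value = sum(p * v for p, v in scenarios.values())
naive_value = 100.0  # value if we ignored AI scenarios entirely

print(f"EV accounting for AI scenarios: {expected_value:.0f}")  # 110
print(f"Naive EV ignoring AI: {naive_value:.0f}")               # 100
```

With these particular numbers, the amplification from the “AI raises the stakes” scenario more than offsets the undercutting; shift the probabilities or conditional values and the result flips, which is why a single across-the-board discount won’t do.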
Thanks for the helpful addition. I’m not an expert in the x-risk funding landscape, so I’ll defer to you. Sounds like your suggestion could be a sensible one on cross-cause prio grounds. It’s possible that this dynamic illustrates a different pitfall of only making prio judgments at the level of big cause areas. If we lump AI in with other x-risks and hold cause-level funding steady, funding between AI and non-AI x-risks becomes zero sum.
Hi Michael!
You’ve identified a really weak plank in the argument against AI solving factory farming. I agree that capacity-building is not a significant bottleneck, for a lot of the reasons you present.
I think the key issue is whether there will be social and legal barriers that prevent people from switching to farmed animal alternatives. These barriers might prevent the kinds of capacity build-up that would make alternative proteins economically competitive.
I think I might be more pessimistic than you about whether people want to switch to more humane alternatives (and would do so if they were wealthier). That’s probably the case for welfare-enhanced meat (as we see with many affluent customers today). I’m less confident about willingness to switch to lab-grown meat or other alternatives.
I’m quite curious about a scenario in which massive capacity for producing alt proteins is built without cultural buy-in, making alt proteins far cheaper than animal proteins. The economic incentives to switch could cause quite swift cultural changes. But I’m quite uncertain when trying to predict cultural change.
Depending on the allocation method you use, you can still have high credence in expected total hedonistic utilitarianism and get allocations that give some funding to GHD projects. For example, in this parliament, I assigned 50% to total utilitarianism, 37% to total welfarist consequentialism, and 12% to common sense (these were picked semi-randomly for illustration). I set diminishing returns to 0 to make things even less likely to diversify. Some allocation methods (e.g. maximin) give everything to GHD, some diversify (e.g. bargaining, approval), and some (e.g. MEC) give everything to animals.
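For a feel for why the methods diverge, here’s a toy sketch (not the Moral Parliament tool’s actual implementation; the choiceworthiness scores are invented) of how MEC, a simple maximin rule, and a credence-proportional split can produce the three patterns described above:

```python
# Toy illustration of how different aggregation methods allocate a budget across
# cause areas, given credences in worldviews. Scores are invented for illustration.

credences = {"total utilitarianism": 0.50,
             "total welfarist consequentialism": 0.37,
             "common sense": 0.12}

# Hypothetical choiceworthiness of a marginal dollar to each cause, by worldview
# (no diminishing returns, matching the example above).
scores = {
    "total utilitarianism":             {"GHD": 1.0, "Animals": 10.0, "X-risk": 5.0},
    "total welfarist consequentialism": {"GHD": 2.0, "Animals": 8.0,  "X-risk": 4.0},
    "common sense":                     {"GHD": 5.0, "Animals": 0.5,  "X-risk": 0.2},
}
causes = ["GHD", "Animals", "X-risk"]

# Maximize expected choiceworthiness (MEC): everything to the cause with the
# highest credence-weighted score.
mec_scores = {c: sum(credences[w] * scores[w][c] for w in credences) for c in causes}
mec_winner = max(mec_scores, key=mec_scores.get)

# One simple maximin rule: everything to the cause whose worst score across
# worldviews is highest.
worst_case = {c: min(scores[w][c] for w in credences) for c in causes}
maximin_winner = max(worst_case, key=worst_case.get)

# A credence-proportional split: each worldview controls a share of the budget
# equal to its credence and spends it on its own favorite cause.
shares = {c: 0.0 for c in causes}
for w, cred in credences.items():
    shares[max(scores[w], key=scores[w].get)] += cred

print("MEC gives everything to:", mec_winner)        # Animals with these numbers
print("Maximin gives everything to:", maximin_winner)  # GHD with these numbers
print("Credence split:", shares)                       # diversifies
```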
With respect to your second question, it wouldn’t follow that we should give money to causes that benefit the already well-off. Lots of worldviews that favor GHD will also favor projects to benefit the worst off (for various reasons). What’s your reason for thinking that they mustn’t? For what it’s worth, this comes out in our parliament tool as well. It’s really hard to get any parliament to favor projects that don’t target suffering (like Artists Without Borders).
Our estimate uses Saulius’s years/$ estimates. To convert to DALYs/$, we weighted by the amount of pain experienced by chickens per year. The details can be found in Laura Duffy’s report here; a rough sketch of the conversion follows the list below. The key bit:
I estimated the DALY equivalent of a year spent in each type of pain assessed by the Welfare Footprint Project by looking at the descriptions of and disability weights assigned to various conditions assessed by the Global Burden of Disease Study in 2019 and comparing these to the descriptions of each type of pain tracked by the Welfare Footprint Project.
These intensity-to-DALY conversion factors are:
1 year of annoying pain = 0.01 to 0.02 DALYs
1 year of hurtful pain = 0.1 to 0.25 DALYs
1 year of disabling pain = 2 to 10 DALYs
1 year of excruciating pain = 60 to 150 DALYs
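Here’s a rough sketch of that conversion. Only the conversion factors come from the list above; the pain-hour profile, the years/$ figure, and the midpoints used are placeholders rather than the actual Welfare Footprint or Saulius estimates:

```python
# Rough sketch of the years/$ -> DALYs/$ conversion described above.
# Only the conversion factors (midpoints of the ranges above) come from the comment;
# the pain-hour profile and years/$ figure are placeholders, NOT real estimates.

HOURS_PER_YEAR = 24 * 365

# DALY equivalent of one year spent in each pain intensity (midpoints of ranges above).
daly_per_year_of_pain = {
    "annoying": 0.015,
    "hurtful": 0.175,
    "disabling": 6.0,
    "excruciating": 105.0,
}

# Placeholder: hours of each pain type averted per chicken-year affected by a reform.
hours_averted_per_chicken_year = {
    "annoying": 200.0,
    "hurtful": 100.0,
    "disabling": 10.0,
    "excruciating": 0.1,
}

# Placeholder: chicken-years affected per dollar (stands in for Saulius's estimate).
chicken_years_per_dollar = 10.0

dalys_per_chicken_year = sum(
    (hours / HOURS_PER_YEAR) * daly_per_year_of_pain[pain]
    for pain, hours in hours_averted_per_chicken_year.items()
)
dalys_per_dollar = dalys_per_chicken_year * chicken_years_per_dollar
print(f"DALYs averted per dollar (placeholder inputs): {dalys_per_dollar:.3f}")
```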
Here’s one method that we’ve found helpful when presenting our work. To get a feel for how the tools work, we set challenges for the group: find a set of assumptions that gives all resources to animal welfare; figure out how risk averse you’d have to be to favor GHD over x-risk; identify which moral views best favor longtermist causes. Then, have the group discuss whether and why these assumptions would support those conclusions. Our accompanying reports are often designed to address these very questions, so that might be a way to find the posts that really matter to you.
I think that I’ve become more accepting of cause areas that I was not initially inclined toward (particularly various longtermist ones) and also more suspicious of dogmatism of all kinds. In developing and using the tools, it became clear that there were compelling moral reasons in favor of almost any course of action, and slight shifts in my beliefs about risk aversion, moral weights, aggregation methods etc. could lead me to very different conclusions. This inclines me more toward very significant diversification across cause areas.
A few things come to mind. First, I’ve been really struck by how robust animal welfare work is across lots of kinds of uncertainties. It has some of the virtues of both GHD (a high probability of actually making a difference) and x-risk work (huge scales). Second, when working with the Moral Parliament tool, it is really striking how much of a difference different aggregation methods make. If we use approval voting to navigate moral uncertainty, we get really different recommendations than if we give every worldview control over a share of the pie or if we maximize expected choiceworthiness. For me, figuring out which method we should use turns on what kind of community we want to be and which (or whether!) democratic ideals should govern our decision-making. This seems like an issue we can make headway on, even if there are empirical or moral uncertainties that prove less tractable.
All super interesting suggestions, Michael!
I agree that the plausibility of some DMRA decision theory will depend on how we actually formalize it (something I don’t do here but which Laura Duffy did some of here). Thanks for the suggestion.
Hi Richard,
That is indeed a very difficult objection for the “being an actual cause is always valuable” view. We could amend that principle in various ways. One amendment is agent-neutral: it is valuable that someone makes a difference (rather than the world just turning out well), but it’s not valuable that I make a difference. Another adds conditions to actual causation: perhaps you get credit only if you raise the probability of the outcome, or only if you don’t lower it (and if you did lower it, it’s unclear whether you’d be an actual cause at all).
Things get tricky here with the metaphysics of causation and how they interact with agency-based ethical principles. There’s stuff here I’m aware I haven’t quite grasped!
Thank you, Michael!
To your first point, that we have replaced arbitrariness over the threshold of probabilities with arbitrariness about how uncertain we must be before rounding down: I suppose I’m more inclined to accept that decisions about which metaprinciples to apply will be context-sensitive, vague, and unlikely to be capturable by any simple, idealized decision theory. A non-ideal agent deciding when to round down has to juggle lots of different factors: their epistemic limitations, asymmetries in evidence, costs of being right or wrong, past track records, etc. I doubt that there’s any decision theory that is both stateable and clear on this point. Even if there is a non-arbitrary threshold, I have trouble saying what that is. That is probably not a very satisfying response! I did enjoy Weatherson’s latest that touches on this point.
You suggest that the epistemic defenses of rounding down offered here would also bolster decision-theoretic defenses of it, such as ambiguity aversion. It’s worth thinking about what a defense of ambiguity aversion would look like. Indeed, it might turn out to be the same as the epistemic defense given here. I don’t have a favorite formal model of ambiguity aversion, so I’m all ears if you do!
Hi David,
Thanks for the comment. I agree that Wilkinson makes a lot of other (really persuasive) points against drawing some threshold of probability. As you point out, one reason is that the normative principle (Minimal Tradeoffs) seems to be independently justified, regardless of the probabilities involved. If you agree with that, then the arbitrariness point seems secondary. I’m suggesting that the uncertainty that accompanies very low probabilities might mean that applying Minimal Tradeoffs to very low probabilities is a bad idea, and there’s some non-arbitrary way to say when that will be. I should also note that one doesn’t need to reject Minimal Tradeoffs. You might think that if we did have precise knowledge of the low probabilities (say, in Pascal’s wager), then we should trade them off for greater payoffs.
It’s possible that invertebrate sentience is harder to investigate given that their behaviors and nervous systems differ from ours more than those of cows and pigs do. Fortunately, there’s been a lot more work on sentience in invertebrates and other less-studied animals over the past few years, and I do think that this work has moved a lot of people toward taking invertebrate sentience seriously. If I’m right about that, then the lack of basic research might be responsible for quite a bit of our uncertainty.
Hi weeatquince,
This is a great question. As I see it, there are at least 3 approaches to ambiguity that are out there (which are not mutually exclusive).
a. Ambiguity aversion reduces to risk aversion about outcomes.
You might think uncertainty is bad because it leaves open the possibility of bad outcomes. One approach is to consider the range of probabilities consistent with your uncertainty, and then assume the worst, or put more weight on the probabilities that would be worse for EV. For example, Pat thinks the probability of heads could be anywhere from 0 to 1. If it’s 0, then she’s guaranteed to lose $5 by taking the gamble. If it’s 1, then she’s guaranteed to win $10. If she’s risk averse, she should put more weight on the possibility that Pr(heads) = 0. In the extreme, she should assume that Pr(heads) = 0 and maximin. (A small numerical sketch of this appears after approach c below.)

b. Ambiguity aversion should lead you to adjust your probabilities
The Bayesian adjustment outlined above says that when your evidence leaves a lot of uncertainty, your posterior should revert to your prior. As you note, this is completely consistent with EV maximization. It’s about what you should believe given your evidence, not what you should do.

c. Ambiguity aversion means you should avoid bets with uncertain probabilities
You might think uncertainty is bad because it’s irrational to take bets when you don’t know the chances. It’s not that you’re afraid of the possible bad outcomes within the range of things you’re uncertain about. There’s something more intrinsically bad about these bets.
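Here’s the small numerical sketch of approach (a) promised above, using Pat’s gamble and an alpha-maxmin style rule; the pessimism weight alpha is an assumption for illustration, not something defended in the comment:

```python
# Minimal sketch of approach (a): Pat's gamble (+$10 on heads, -$5 on tails)
# when all she knows is that Pr(heads) lies somewhere in [0, 1].
import numpy as np

payoff_heads, payoff_tails = 10.0, -5.0
candidate_probs = np.linspace(0.0, 1.0, 101)  # probabilities consistent with her uncertainty

evs = candidate_probs * payoff_heads + (1 - candidate_probs) * payoff_tails

worst_ev = evs.min()  # Pr(heads) = 0: guaranteed to lose $5
best_ev = evs.max()   # Pr(heads) = 1: guaranteed to win $10

# alpha-maxmin: weight the worst case by alpha, the best case by (1 - alpha).
alpha = 0.8  # fairly ambiguity averse (assumed for illustration)
alpha_maxmin_value = alpha * worst_ev + (1 - alpha) * best_ev

print(f"Worst-case EV: {worst_ev:.2f}")    # -5.00
print(f"Best-case EV: {best_ev:.2f}")      # 10.00
print(f"alpha-maxmin value (alpha={alpha}): {alpha_maxmin_value:.2f}")  # -2.00
# A fully ambiguity-averse Pat (alpha = 1) evaluates the bet at its worst case and
# declines it; an ambiguity-neutral Pat would instead pick a single best-guess
# probability and maximize EV.
```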
Hi Edo,
There are indeed some problems that arise from adding risk weighting as a function of probabilities. Check out Bottomley and Williamson (2023) for an alternative model that introduces risk as a function of value, as you suggest. We discuss the contrast between REV and WLU a bit more here. I went with REV here in part because it’s better established, and we’re still figuring out how to work out some of the kinks when applying WLU.
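To illustrate the contrast (not as a statement of either model’s official formalism), here’s a toy sketch of a REV-style rule that applies a risk function to probabilities, alongside a simple outcome-value weighting that stands in, very loosely, for the WLU idea:

```python
# Toy contrast between risk weighting applied to probabilities (REV-style) and
# weighting applied to outcome values (a simplified stand-in, not Bottomley and
# Williamson's exact model).

def rev(outcomes, probs, risk=lambda p: p ** 2):
    """Risk-weighted expected value: order outcomes worst-to-best and apply a
    risk function to the probability of doing at least that well."""
    pairs = sorted(zip(outcomes, probs))
    value = pairs[0][0]  # you get at least the worst outcome for sure
    for i in range(1, len(pairs)):
        prob_at_least = sum(p for _, p in pairs[i:])
        value += risk(prob_at_least) * (pairs[i][0] - pairs[i - 1][0])
    return value

def value_weighted(outcomes, probs, weight=lambda x: 1 / (1 + abs(x))):
    """Toy outcome-weighted value: down-weight outcomes by a function of their
    magnitude rather than by their probability."""
    num = sum(p * weight(x) * x for x, p in zip(outcomes, probs))
    den = sum(p * weight(x) for x, p in zip(outcomes, probs))
    return num / den

# A long-shot gamble: tiny chance of a huge payoff, otherwise nothing.
outcomes, probs = [0.0, 1_000_000.0], [0.999, 0.001]
print("Expected value:", sum(p * x for x, p in zip(outcomes, probs)))  # 1000
print("REV (risk = p^2):", rev(outcomes, probs))                       # 1
print("Value-weighted:", value_weighted(outcomes, probs))              # ~0.001
```

Both toy rules penalize the long shot here, but they do so for different reasons: one because the good outcome is improbable, the other because of how the outcomes themselves are weighted.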
Thanks for your comment, Michael. Our team started working through your super helpful recent post last week! We discuss some of these issues (including the last point you mention) in a document where we summarize some of the philosophical background issues. However, we only mention bounded utility very briefly and don’t discuss infinite cases at all. We focus instead on rounding down low probabilities, for two reasons: first, we think that’s what people are probably actually doing in practice, and second, it avoids the seeming conflict between bounded utility and theories of value. I’m sure you have answers to that problem, so let us know!
I think there are probably cases of each. For the former, there might be some large interventions in things like factory farming or climate change (i) that could have huge impacts and (ii) for which we don’t think AI will be particularly efficacious or impactful.
For the latter, here are some cases off the top of my head. Suppose we think that if AI is used to make factory farming more efficient and pernicious, it will be via X (idk, some kind of precision farming technology). Efforts to make X illegal look a lot better after accounting for AI. Or, right now, making it harder for people to buy ingredients for biological weapons might be a good bet but not a great one. It reduces the chances of bioweapons somewhat, but knowledge about how to create weapons is the main bottleneck. If AI removes that bottleneck, then those projects look a lot better.