I read the stakes here differently to you. I don’t think folks thinking about cluelessness see it as substantially an exercise in developing a defeater to ‘everything which isn’t longtermism’. At least, that isn’t my interest, and I think the literature has focused on AMF etc. more as a salient example to explore the concepts than as an important subject to apply them to.
The AMF discussions around cluelessness in the OP are intended as a toy example—if you like, deliberating purely on “is it good or bad to give to AMF versus this particular alternative?” instead of “Out of all options, should it be AMF?” Parallel to you, although I do think (per the OP) AMF donations are net good, I also think (per the contours of your reply) AMF should be excluded as a promising candidate for the best thing to donate to: if what really matters is how the deep future goes, and the axes of influence on it accessible at present are things like x-risk, then interventions which are only tangentially related to these are so unlikely to be best that they can be ruled out ~immediately.
So if that isn’t a main motivation, what is? Perhaps something like this:
1) How to manage deep uncertainty over the long-run ramifications of one’s decisions is a challenge across EA-land—particularly acute for longtermists, but also relevant elsewhere: most would care about the risk that, in the medium term, a charitable intervention could prove counter-productive. In most cases, these mechanisms for something to ‘backfire’ are fairly trivial, but how seriously credible ones should be investigated is up for grabs.
Although “just be indifferent if it is hard to figure out” is a bad technique which finds little favour, I see a variety of mistakes in and around here. E.g.:
a) People not tracking when the ground of appeal for an intervention has changed. Although I don’t see this with AMF, I do see this in and around animal advocacy. One crucial consideration around here is WAS (wild animal suffering), particularly an ‘inverse logic of the larder’ (see), such as “per area, a factory farm has a lower intensity of animal suffering than the environment it replaced”.
Even if so, it wouldn’t follow that the best thing to do would be to be as carnivorous as possible. There are various lines of response. One is to say that the key objective of animal advocacy is to encourage greater concern about animal welfare, so that this can ramify through to benefits in the medium term. Yet if this is the rationale, metrics of ‘animal suffering averted per $’ remain prominent despite having minimal relevance. If the aim of the game is attitude change, things like shelters and companion animals, rather than changes in factory farmed welfare, start looking a lot more credible again in virtue of their greater salience.
b) Early (or motivated) stopping across crucial considerations. There are a host of ramifications of population growth which point in both directions (e.g. climate change, economic output, increased meat consumption, larger aggregate welfare, etc.) Although very few folks rely on these when considering interventions like AMF (but cf.), they are often relied upon by those suggesting interventions specifically targeted at fertility: enabling contraceptive access (e.g. more contraceptive access --> fewer births --> less of a poor meat eater problem), or reducing rates of abortion (e.g. less abortion --> more people with worthwhile lives --> greater total utility).
Discussions here are typically marred by proponents either completely ignoring considerations on the ‘other side’ of the population growth question, or giving very unequal time to them/sheltering behind uncertainty (e.g. “Considerations X, Y, and Z all tentatively support more population growth, admittedly there’s A, B, C, but we do not cover those in the interests of time—yet, if we had, they probably would tentatively oppose more population growth”).
2) Given my fairly deflationary OP, I don’t think these problems are best described as cluelessness (versus attending to resilient uncertainty and VoI in fairly orthodox evaluation procedures). But although I think I’m right, I don’t think I’m obviously right: if orthodox approaches struggle here, less orthodox ones with representors, incomparability or other features may be what should be used in decision-making (including when we should make decisions versus investigate further). If so, then this reasoning looks like a fairly distinct species which could warrant its own label.
How to manage deep uncertainty over the long-run ramifications of one’s decisions is a challenge across EA-land—particularly acute for longtermists, but also relevant elsewhere: most would care about the risk that, in the medium term, a charitable intervention could prove counter-productive.
This makes some sense to me, although if that’s all we’re talking about I’d prefer to use plain English since the concept is fairly common. I think this is not all other people are talking about though; see my discussion with MichaelStJules.
FWIW, I don’t think ‘risks’ is quite the right word: sure, if we discover a risk which was so powerful and so tractable that we end up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is the fact that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term would end up being worth >>1x the original target good (e.g. “GiveDirectly reduces extinction risk by reducing poverty, a known cause of conflict”); the surprisingly-high magnitude of an incidental impact is what is really catching my attention, because it suggests there are much better ways to do good.
Although I don’t see this with AMF, I do see this in and around animal advocacy. One crucial consideration around here is WAS (wild animal suffering), particularly an ‘inverse logic of the larder’ (see), such as “per area, a factory farm has a lower intensity of animal suffering than the environment it replaced”.
I think you were trying to draw a distinction, but FWIW this feels structurally similar to the ‘AMF impacts population growth/economic growth’ argument to me, and I would give structurally the same response: once you truly believe a factory farm net reduced animal suffering via the wild environment it incidentally destroyed, there are presumably much more efficient ways to destroy wild environments. As a result, it appears we can benefit wild animals much more than farmed animals, and ending factory farming should disappear from your priority list, at least as an end goal (it may come back as an itself-incidental consequence of e.g. ‘promoting as much concern for animal welfare as possible’). Is your point just that it does not in fact disappear from people’s priority lists in this case? That I’m not well-placed to observe or comment on either way.
b) Early (or motivated) stopping across crucial considerations.
This I agree is a problem. I’m not sure if thinking in terms of cluelessness makes it better or worse; I’ve had a few conversations now where my interlocutor tries to avoid the challenge of cluelessness by presenting an intervention that supposedly has no complex cluelessness attached. So far, I’ve been unconvinced of every case and think said interlocutor is ‘stopping early’ and missing aspects of impact about which they are complexly clueless (often economic/population growth impacts, since it’s actually quite hard to come up with an intervention that doesn’t credibly impact one of those).
I guess I think part of encouraging people to continue thinking rather than stop involves getting people comfortable with the fact that there’s a perfectly reasonable chance that what they end up doing backfires, everything has risk attached, and trying to entirely avoid such is both a fool’s errand and a quick path to analysis paralysis. Currently, my impression is that cluelessness-as-used is pushing towards avoidance rather than acceptance, but the sample size is small and so this opinion is very weakly held. I would be more positive on people thinking about this if it seemed to help push them towards acceptance though.
Given my fairly deflationary OP, I don’t think these problems are best described as cluelessness
Point taken. Given that, I hope this wasn’t too much of a hijack, or at least was an interesting one. I think I misunderstood how literally you intended the statements I quoted and disagreed with in my original comment.
FWIW, I don’t think ‘risks’ is quite the right word: sure, if we discover a risk which was so powerful and so tractable that we end up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is the fact that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term would end up being worth >>1x the original target good (e.g. “GiveDirectly reduces extinction risk by reducing poverty, a known cause of conflict”); the surprisingly-high magnitude of an incidental impact is what is really catching my attention, because it suggests there are much better ways to do good.
(Apologies in advance if I’m rehashing unhelpfully.)
The usual cluelessness scenarios are more about the possibility that there is a powerful lever for impacting the future, and that your intended intervention may be pulling it in the wrong direction (rather than a ‘confirmed discovery’). Say your expectation for the EV of GiveDirectly’s effect on conflict is a distribution with a mean of zero but an SD of 10x the magnitude of the benefits you had previously estimated. If it were (e.g.) +10, there’s a natural response of ‘shouldn’t we try something which targets this on purpose?’; if it were 0, we wouldn’t attend to it further; if it were −10, you wouldn’t give to (now net EV = −9) GiveDirectly.
The right response when all three scenarios are credible (plus all the intermediates) but you’re unsure which one you’re in isn’t intuitively obvious (at least to me). Even if (like me) you’re sympathetic to pretty doctrinaire standard EV accounts (i.e. you quantify this uncertainty plus all the others, just ‘run the numbers’, and take the best EV), this approach seems to ignore the wide variance, which seems worthy of further attention.
The OP tries to reconcile this with the standard approach by saying this indeed often should be attended to, but under the guise of value of information rather than something ‘extra’ to orthodoxy. Even though we should still go with our best guess if we have to decide (so expectation-neutral but high-variance terms ‘cancel out’), we might have the option to postpone our decision and improve our guesswork. Whether to take that option should be governed by how resilient our uncertainty is. If your central estimate of GiveDirectly and conflict would move on average by 2 units if you spent an hour thinking about it, that seems an hour well spent; if you thought you could spend a decade on it and remain where you are, going with the current best guess looks better.
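The arithmetic here can be made concrete with a toy sketch (every number is hypothetical, chosen only to mirror the example above):

```python
# Toy sketch of the reasoning above; all numbers are hypothetical.

# A mean-zero but high-variance side effect leaves the expected value
# unchanged, so it 'cancels out' if we must decide now.
direct_benefit = 1.0   # previously estimated good of the donation
conflict_mean = 0.0    # mean of the uncertain conflict effect
conflict_sd = 10.0     # its SD: 10x the direct benefit

ev_now = direct_benefit + conflict_mean  # best guess still favours giving
print(ev_now)  # 1.0

# Value of information: whether to postpone and investigate depends on how
# resilient the uncertainty is, i.e. how far an hour of thought is expected
# to move the central estimate. The threshold is an illustrative stand-in
# for the opportunity cost of that hour.
def worth_investigating(expected_shift_per_hour: float,
                        threshold: float = 0.5) -> bool:
    return expected_shift_per_hour > threshold

print(worth_investigating(2.0))  # True: non-resilient uncertainty, keep thinking
print(worth_investigating(0.0))  # False: resilient uncertainty, go with best guess
```

The point of the sketch is only that the variance term drops out of the decision made now, while the *resilience* of the estimate governs whether deciding now is the right move at all.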
This can be put in plain(er) English (although familiar-to-EA jargon like ‘EV’ may remain). Yet there are reasons to be hesitant about the orthodox approach (even though I think the case in favour is ultimately stronger): besides the usual bullets, we would be kidding ourselves if we ever really had in our head an uncertainty distribution to arbitrary precision, and maybe our uncertainty isn’t even remotely approximated by the objects we manipulate in standard models of it. Or (a point owed to Andreas): even if it is, similar to how rule-consequentialism may be better than act-consequentialism, some other epistemic policy would get better results than applying the orthodox approach in these cases of deep uncertainty.
Insofar as folks are more sympathetic to this, they would not want to be deflationary, and perhaps would urge investment in new techniques/vocab to grapple with the problem. They may also think we don’t have a good ‘answer’ yet for what to do in these situations, so may hesitate to give the ‘accept there’s uncertainty but don’t be paralysed by it’ advice that you and I would. Maybe these issues are an open problem we should try to figure out better before pressing on.
(I suppose I should mention I’m an intern at ACE now, although I’m not speaking for them in this comment.)
These are important points, although I’m not sure I agree with your object-level judgements about how EAs are acting.
Also, it seems like you intended to include some links in this comment, but they’re missing.
Although “just be indifferent if it is hard to figure out” is a bad technique which finds little favour
Little favour where? I think this is common in shorttermist EA. As you mention, effects on wild animals are rarely included. GiveWell’s analyses ignore most of the potentially important consequences of population growth, and the metrics they present to donors on their top charity page are very narrow, too. GiveWell did look at how (child) mortality rates affect population sizes, and I think there are some discussions of some effects scattered around (although not necessarily tied to population size), e.g., I think they believe economic growth is good. Have they written anything about the effects of population growth on climate change and nonhuman animals? What indirect plausibly negative effects of the interventions they support have they written about? I think it’s plausible they just don’t find these effects important or bad, although I wouldn’t be confident in such a judgement without looking further into it myself.
Even if you thought the population effects from AMF were plausibly bad and you had deep uncertainty about them, you could target those consequences better with different interventions (e.g. donating to a climate change charity or animal charity) or also support family planning to avoid affecting the population size much in expectation.
a) People not tracking when the ground of appeal for an intervention has changed. Although I don’t see this with AMF, I do see this in and around animal advocacy. One crucial consideration around here is WAS, particularly an ‘inverse logic of the larder’ (see), such as “per area, a factory farm has a lower intensity of animal suffering than the environment it replaced”.
Even if so, it wouldn’t follow that the best thing to do would be to be as carnivorous as possible. There are various lines of response. One is to say that the key objective of animal advocacy is to encourage greater concern about animal welfare, so that this can ramify through to benefits in the medium term. Yet if this is the rationale, metrics of ‘animal suffering averted per $’ remain prominent despite having minimal relevance.
I’d be willing to bet factory farming is worse if you include only mammals and birds (and probably fishes), and while including amphibians or invertebrates might lead to deep uncertainty and moral uncertainty for me (although potentially resolvable with not too much research), it doesn’t for some others, and in this case, ‘animal suffering averted per $’ may not be that far off. Furthermore, I think it’s reasonable to believe the welfare effects of corporate campaigns for chickens farmed for eggs and meat are far more important than the effects on land use. Stocking density restrictions can also increase the amount of space for the same amount of food, too, although I don’t know what the net effect is.
I worry more about diet change. More directly, some animal advocates treat wild fishes spared from consumption as a good outcome (although in this case, the metric is not animal suffering), but it’s really not clear either way. I have heard the marginal fish killed for food might be farmed now, though.
If the aim of the game is attitude change, things like shelters and companion animals, rather than changes in factory farmed welfare, start looking a lot more credible again in virtue of their greater salience.
Credible as cost-effective interventions for attitude change? I think there’s already plenty of concern for companion animals, and it doesn’t transfer that well on its own to farmed and wild animals as individuals without advocacy for them. I’d expect farmed animal advocacy to do a much better job for attitude change.
enabling contraceptive access (e.g. more contraceptive access --> fewer births --> less of a poor meat eater problem)
And even ignoring the question of whether factory farming is better or worse than the wildlife it replaces, extra people replace wildlife for reasons other than farming, too, introducing further uncertainty.
Discussions here are typically marred by proponents either completely ignoring considerations on the ‘other side’ of the population growth question, or giving very unequal time to them/sheltering behind uncertainty (e.g. “Considerations X, Y, and Z all tentatively support more population growth, admittedly there’s A, B, C, but we do not cover those in the interests of time—yet, if we had, they probably would tentatively oppose more population growth”).
Some might just think the considerations on the other side actually aren’t very important compared to the considerations supporting their side, which may be why they’re on that side in the first place. The more shorter-term measurable effects are also easier to take a position on. I agree that there’s a significant risk of confirmation bias here, though, once they pick out one important estimable positive effect (including “direct” effects, like from AMF!).
1) I don’t closely follow the current state of play in terms of ‘shorttermist’ evaluation. The reply I hope (e.g.) a GiveWell analyst would make to (e.g.) “Why aren’t you factoring in impacts on climate change for these interventions?” would be some mix of:
a) “We have looked at this, and we’re confident we can bound the magnitude of this effect to pretty negligible values, so we neglect them in our write-ups etc.”
b) “We tried looking into this, but our uncertainty is highly resilient (and our best guess doesn’t vary appreciably between interventions) so we get higher yield investigating other things.”
c) “We are explicit our analysis is predicated on moral (e.g. “human lives are so much more important than animals lives any impact on the latter is ~moot”) or epistemic (e.g. some ‘common sense anti-cluelessness’ position) claims which either we corporately endorse and/or our audience typically endorses.”
Perhaps such hopes would be generally disappointed.
2) Similar to above, I don’t object to (re. animals) positions like “Our view is this consideration isn’t a concern as X” or “Given this consideration, we target Y rather than Z”, or “Although we aim for A, B is a very good proxy indicator for A which we use in comparative evaluation.”
But I at least used to see folks appeal to motivations which obviate (inverse/) logic of the larder issues, particularly re. diet change (“Sure, it’s actually really unclear becoming vegan reduces or increases animal suffering overall, but the reason to be vegan is to signal concern for animals and so influence broader societal attitudes, and this effect is much more important and what we’re aiming for”). Yet this overriding motivation typically only ‘came up’ in the context of this discussion, and corollary questions like:
* “Is maximizing short term farmed animal welfare the best way of furthering this crucial goal of attitude change?”
* “Is encouraging carnivores to adopt a vegan diet the best way to influence attitudes?”
* “Shouldn’t we try and avoid an intervention like v*ganism which credibly harms those we are urging concern for, as this might look bad/be bad by the lights of many/most non-consequentialist views?”
seemed seldom asked.
Naturally I hope this is a relic of my perhaps jaundiced memory.
a) “We have looked at this, and we’re confident we can bound the magnitude of this effect to pretty negligible values, so we neglect them in our write-ups etc.”
80,000 Hours and Toby Ord at least think that climate change could be an existential risk, and 80,000 Hours ranks it as a higher priority than global health and poverty, so I think it’s not obvious that the effects would be negligible (assuming total utilitarianism, say) if they tried to work through it, although they might still think so. Other responses they might give:
* GiveWell-recommended charities mitigate x-risk more than they worsen it, or do more good for the far future in other ways. Maybe there’s a longtermist case for growth. It doesn’t seem 80,000 Hours really believes this, though, or else health in poor countries would be higher up. Also, this seems like suspicious convergence, but they could still think the charities are justified primarily by short-term effects, if they think the long-term ones are plausibly close to 0 in expectation. Or,
* GiveWell discounts the lives of future people (e.g. with person-affecting views, possibly asymmetric ones, although climate change could still be important on some person-affecting views), which falls under your point c). I think this is a plausible explanation for GiveWell’s views based on what I’ve seen.
I think another good response (although not the one I’d expect) is that they don’t need to be confident the charities do more good than harm in expectation, since it’s actually very cheap to mitigate any possible risks from climate change from them by also donating to effective climate change charities, even if you’re deeply uncertain about how important climate change is. I discuss this approach more here. The result would be that you’re pretty sure you’re doing some decent minimum of good in expectation (from the health effects), whereas just the global health and poverty charity would be plausibly bad (due to climate change), and just the climate change charity would be plausibly close to 0 in expectation (due to deep uncertainty about the importance of climate change).
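The hedging idea can be illustrated numerically (a sketch with made-up numbers, not real estimates): pair the health charity with a climate charity sized so the portfolio does some minimum of good under every view of how important climate change is.

```python
# Hypothetical sketch of hedging a health donation against deep uncertainty
# about climate effects. All numbers are illustrative only.

# Under each credible view: (health benefit, climate side effect of the
# health charity, benefit of an accompanying climate-charity donation).
views = {
    "climate unimportant": (10.0, 0.0, 0.0),
    "climate important":   (10.0, -4.0, 5.0),
    "climate critical":    (10.0, -9.0, 11.0),
}

for view, (health, side_effect, offset) in views.items():
    alone = health + side_effect   # health charity by itself
    hedged = alone + offset        # plus the climate donation
    # The hedged portfolio secures a decent minimum of good under every view,
    # whereas the health charity alone is nearly washed out on the worst view.
    print(f"{view}: alone={alone}, hedged={hedged}")
```

Under these (made-up) numbers, the health charity alone is nearly neutral on the worst view, while the hedged portfolio stays clearly positive everywhere, which is the sense in which it secures a decent minimum of good in expectation.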
But I at least used to see folks appeal to motivations which obviate (inverse/) logic of the larder issues, particularly re. diet change (“Sure, it’s actually really unclear becoming vegan reduces or increases animal suffering overall, but the reason to be vegan is to signal concern for animals and so influence broader societal attitudes, and this effect is much more important and what we’re aiming for”). Yet this overriding motivation typically only ‘came up’ in the context of this discussion
This is fair, and I expect that this still happens, but who was saying this? Is this how the animal charities (or their employees) themselves responded to these concerns? I think it’s plausible many did just think the short term benefits for farmed animals outweighed any effects on wild animals.
“Is maximizing short term farmed animal welfare the best way of furthering this crucial goal of attitude change?”
With respect to things other than diet, I don’t think EAs are assuming it is, and they are separately looking for the best approaches to attitude change, so this doesn’t seem important to ask. Corporate campaigns are primarily justified on the basis of their welfare effects for farmed animals, and still look good if you also include short term effects on wild animals. Other more promising approaches towards attitude change have been supported, like The Nonhuman Rights Project (previously an ACE Standout charity, and still a grantee), and plant-based substitutes and cultured meat (e.g. GFI).
“Is encouraging carnivores to adopt a vegan diet the best way to influence attitudes?”
I do think it’s among the best ways, depending on the approach, and I think people were already thinking this outside of the context of this discussion. I think eating animals causes speciesism and apathy, and is a significant psychological barrier to helping animals, farmed and wild. Becoming vegan (for many, not all) is a commitment to actively caring about animals, and can become part of someone’s identity. EAA has put a lot into the development of substitutes, especially through GFI, and these are basically our main hopes for influencing attitudes and also one of our best shots at eliminating factory farming.
I don’t think this is suspicious convergence. There are other promising approaches (like the Nonhuman Rights Project), but it’s hard enough to compare them directly that I don’t think any are clearly better, so I’d endorse supporting multiple approaches, including diet change.
“Shouldn’t we try and avoid an intervention like v*ganism which credibly harms those we are urging concern for, as this might look bad/be bad by the lights of many/most non-consequentialist views?”
I think the case for veganism is much stronger according to the most common non-consequentialist views (that still care about animals), because they often distinguish
* intentional harms and exploitation/using others as mere means to ends, cruelty and supporting cruelty, from
* incidental harms and harms from omissions, like more nonhuman animals being born because we are not farming some animals more.
Of course, advocacy is not an omission, and what you suggest is also plausible.
Belatedly:
I read the stakes here differently to you. I don’t think folks thinking about cluelessness see it as substantially an exercise in developing a defeater to ‘everything which isn’t longtermism’. At least, that isn’t my interest, and I think the literature has focused on AMF etc. more as salient example to explore the concepts, rather than an important subject to apply them to.
The AMF discussions around cluelessness in the OP are intended as toy example—if you like, deliberating purely on “is it good or bad to give to AMF versus this particular alternative?” instead of “Out of all options, should it be AMF?” Parallel to you, although I do think (per OP) AMF donations are net good, I also think (per the contours of your reply) it should be excluded as a promising candidate for the best thing to donate to: if what really matters is how the deep future goes, and the axes of these accessible at present are things like x-risk, interventions which are only tangentially related to these are so unlikely to be best they can be ruled-out ~immediately.
So if that isn’t a main motivation, what is? Perhaps something like this:
1) How to manage deep uncertainty over the long-run ramifications of ones decisions is a challenge across EA-land—particularly acute for longtermists, but also elsewhere: most would care about risks about how in the medium term a charitable intervention could prove counter-productive. In most cases, these mechanisms for something to ‘backfire’ are fairly trivial, but how seriously credible ones should be investigated is up for grabs.
Although “just be indifferent if it is hard to figure out” is a bad technique which finds little favour, I see a variety of mistakes in and around here. E.g.:
a) People not tracking when the ground of appeal for an intervention has changed. Although I don’t see this with AMF, I do see this in and around animal advocacy. One crucial consideration around here is WAS, particularly an ‘inverse logic of the larder’ (see), such as “per area, a factory farm has a lower intensity of animal suffering than the environment it replaced”.
Even if so, it wouldn’t follow the best thing to do would to be as carnivorous as possible. There are also various lines of response. However, one is to say that the key objective of animal advocacy is to encourage greater concern about animal welfare, so that this can ramify through to benefits in the medium term. However, if this is the rationale, metrics of ‘animal suffering averted per $’ remain prominent despite having minimal relevance. If the aim of the game is attitude change, things like shelters and companion animals over changes in factory farmed welfare start looking a lot more credible again in virtue of their greater salience.
b) Early (or motivated) stopping across crucial considerations. There are a host of ramifications to population growth which point in both directions (e.g. climate change, economic output, increased meat consumption, larger aggregate welfare, etc.) Although very few folks rely on these when considering interventions like AMF (but cf.) they are often being relied upon by those suggesting interventions specifically targeted to fertility: enabling contraceptive access (e.g. more contraceptive access --> fewer births --> less of a poor meat eater problem), or reducing rates of abortion (e.g. less abortion --> more people with worthwhile lives --> greater total utility).
Discussions here are typically marred by proponents either completely ignoring considerations on the ‘other side’ of the population growth question, or giving very unequal time to them/sheltering behind uncertainty (e.g. “Considerations X, Y, and Z all tentatively support more population growth, admittedly there’s A, B, C, but we do not cover those in the interests of time—yet, if we had, they probably would tentatively oppose more population growth”).
2) Given my fairly deflationary OP, I don’t think these problems are best described as cluelessness (versus attending to resilient uncertainty and VoI in fairly orthodox evaluation procedures). But although I think I’m right, I don’t think I’m obviously right: if orthodox approaches struggle here, less orthodox ones with representors, incomparability or other features may be what should be used in decision-making (including when we should make decisions versus investigate further). If so then this reasoning looks like a fairly distinct species which could warrant it’s own label.
This makes some sense to me, although if that’s all we’re talking about I’d prefer to use plain English since the concept is fairly common. I think this is not all other people are talking about though; see my discussion with MichaelStJules.
FWIW, I don’t think ‘risks’ is quite the right word: sure, if we discover a risk which was so powerful and so tractable that we end up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is the fact that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term would end up being worth >>1x the original target good (e.g. “Give Directly reduces extinction risk by reducing poverty, a known cause of conflict”); the surprisingly-high magnitude of an incidental impact is what is really catching my attention, because it suggests there are much better ways to do good.
I think you were trying to draw a distinction, but FWIW this feels structurally similar to the ‘AMF impact population growth/economic growth’ argument to me, and I would give structually the same response: once you truly believe a factory farm net reduced animal suffering via the wild environment it incidentally destroyed, there are presumably much more efficient ways to destroy wild environments. As a result, it appears we can benefit wild animals much more than farmed animals, and ending factory farming should disappear from your priority list, at least as an end goal (it may come back as an itself-incidental consequence of e.g. ‘promoting as much concern for animal welfare as possible’). Is your point just that it does not in fact disappear from people’s priority lists in this case? That I’m not well-placed to observe or comment on either way.
This I agree is a problem. I’m not sure if thinking in terms of cluelessness makes it better or worse; I’ve had a few conversations now where my interlocutor tries to avoid the challenge of cluelessness by presenting an intervention that supposedly has no complex cluelessness attached. So far, I’ve been unconvinced of every case and think said interlocutor is ‘stopping early’ and missing aspects of impact about which they are complexly clueless (often economic/population growth impacts, since it’s actually quite hard to come up with an intervention that doesn’t credibly impact one of those).
I guess I think part of encouraging people to continue thinking rather than stop involves getting people comfortable with the fact that there’s a perfectly reasonable chance that what they end up doing backfires, everything has risk attached, and trying to entirely avoid such is both a fool’s errand and a quick path to analysis paralysis. Currently, my impression is that cluelessness-as-used is pushing towards avoidance rather than acceptance, but the sample size is small and so this opinion is very weakly held. I would be more positive on people thinking about this if it seemed to help push them towards acceptance though.
Point taken. Given that, I hope this wasn’t too much of a hijack, or at least was an interesting one. I think I misunderstood how literally you intended the statements I quoted and disagreed with in my original comment.
(Apologies in advance I’m rehashing unhelpfully)
The usual cluelessness scenarios are more about the possibility that there is a powerful lever for impacting the future, and your intended intervention may be pulling it in the wrong direction (rather than a ‘confirmed discovery’). Say your expectation for the EV of GiveDirectly on conflict has a distribution with a mean of zero but an SD of 10x the magnitude of the benefits you had previously estimated. If it were (e.g.) +10, there’s a natural response of ‘shouldn’t we try something which targets this on purpose?’; if it were 0, we wouldn’t attend to it further; if it were −10, you wouldn’t give to (now net EV = −9) GiveDirectly.
The right response where all three scenarios are credible (plus all the intermediates) but you’re unsure which one you’re in isn’t intuitively obvious (at least to me). Even if (like me) you’re sympathetic to pretty doctrinaire standard EV accounts (i.e. you quantify this uncertainty + all the others, just ‘run the numbers’, and take the best EV), this approach seems to ignore the wide variance, which appears worthy of further attention.
The OP tries to reconcile this with the standard approach by saying this indeed often should be attended to, but under the guise of value of information rather than something ‘extra’ to orthodoxy. Even though we should still go with our best guess if we had to decide now (so expectation-neutral but high-variance terms ‘cancel out’), we might have the option to postpone our decision and improve our guesswork. Whether to take that option should be governed by how resilient our uncertainty is. If your central estimate of GiveDirectly and conflict would move on average by 2 units if you spent an hour thinking about it, that seems an hour well spent; if you thought you could spend a decade on it and remain where you are, going with the current best guess looks better.
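The toy GiveDirectly example above can be made concrete with a minimal numeric sketch (all numbers illustrative, chosen to match the +10/0/−10 scenarios; the direct benefit of 1 unit and the equal scenario weights are assumptions for the illustration, not anything from the OP):

```python
# Three equally credible scenarios for the incidental effect on conflict,
# in multiples of the directly estimated benefit (mean 0, high variance).
scenarios = {"positive": +10, "neutral": 0, "negative": -10}
direct_benefit = 1  # the originally estimated good done, illustrative

# Net value of donating under each scenario.
net = {name: direct_benefit + effect for name, effect in scenarios.items()}
# In the negative scenario the donation comes out at net -9, as in the text.

# Orthodox EV: the expectation-neutral, high-variance term cancels out,
# leaving only the direct benefit.
ev = sum(net.values()) / len(net)
print(net)  # {'positive': 11, 'neutral': 1, 'negative': -9}
print(ev)   # 1.0
```

The point of the sketch is that the orthodox EV (1.0) is unchanged by the huge variance, which is exactly why the value-of-information framing matters: whether to act on that EV or investigate further depends on how much an hour of research would be expected to move the central estimate.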
This can be put in plain(er) English (although familiar-to-EA jargon like ‘EV’ may remain). Yet there are reasons to be hesitant about the orthodox approach (even though I think the case in favour is ultimately stronger): besides the usual bullets, we would be kidding ourselves if we ever really had in our heads an uncertainty distribution to arbitrary precision, and maybe our uncertainty isn’t even remotely approximated by the objects we manipulate in standard models of it. Or (owed to Andreas) even if so, similar to how rule-consequentialism may be better than act-consequentialism, some other epistemic policy would get better results than applying the orthodox approach in these cases of deep uncertainty.
Insofar as folks are more sympathetic to this, they would not want to be deflationary and perhaps urge investment in new techniques/vocab to grapple with the problem. They may also think we don’t have a good ‘answer’ yet of what to do in these situations, so may hesitate to give ‘accept there’s uncertainty but don’t be paralysed by it’ advice that you and I would. Maybe these issues are an open problem we should try and figure out better before pressing on.
(I suppose I should mention I’m an intern at ACE now, although I’m not speaking for them in this comment.)
These are important points, although I’m not sure I agree with your object-level judgements about how EAs are acting.
Also, it seems like you intended to include some links in this comment, but they’re missing.
Little favour where? I think this is common in shorttermist EA. As you mention, effects on wild animals are rarely included. GiveWell’s analyses ignore most of the potentially important consequences of population growth, and the metrics they present to donors on their top charity page are very narrow, too. GiveWell did look at how (child) mortality rates affect population sizes, and I think there are some discussions of some effects scattered around (although not necessarily tied to population size), e.g., I think they believe economic growth is good. Have they written anything about the effects of population growth on climate change and nonhuman animals? What indirect plausibly negative effects of the interventions they support have they written about? I think it’s plausible they just don’t find these effects important or bad, although I wouldn’t be confident in such a judgement without looking further into it myself.
Even if you thought the population effects from AMF were plausibly bad and you had deep uncertainty about them, you could target those consequences better with different interventions (e.g. donating to a climate change charity or animal charity) or also support family planning to avoid affecting the population size much in expectation.
I’d be willing to bet factory farming is worse if you include only mammals and birds (and probably fishes), and while including amphibians or invertebrates might lead to deep uncertainty and moral uncertainty for me (although potentially resolvable with not too much research), it doesn’t for some others, and in this case, ‘animal suffering averted per $’ may not be that far off. Furthermore, I think it’s reasonable to believe the welfare effects of corporate campaigns for chickens farmed for eggs and meat are far more important than the effects on land use. Stocking density restrictions can also increase the amount of space used for the same amount of food, too, although I don’t know what the net effect is.
I worry more about diet change. More directly, some animal advocates treat wild fishes spared from consumption as a good outcome (although in this case, the metric is not animal suffering), but it’s really not clear either way. I have heard the marginal fish killed for food might be farmed now, though.
Credible as cost-effective interventions for attitude change? I think there’s already plenty of concern for companion animals, and it doesn’t transfer that well on its own to farmed and wild animals as individuals without advocacy for them. I’d expect farmed animal advocacy to do a much better job for attitude change.
And even ignoring the question of whether factory farming is better or worse than the wildlife it replaces, extra people replace wildlife for reasons other than farming, too, introducing further uncertainty.
Some might just think the considerations on the other side actually aren’t very important compared to the considerations supporting their side, which may be why they’re on that side in the first place. The more shorter-term measurable effects are also easier to take a position on. I agree that there’s a significant risk of confirmation bias here, though, once they pick out one important estimable positive effect (including “direct” effects, like from AMF!).
[Mea culpa re. messing up the formatting again]
1) I don’t closely follow the current state of play in terms of ‘shorttermist’ evaluation. The reply I hope (e.g.) a GiveWell analyst would make to (e.g.) “Why aren’t you factoring in impacts on climate change for these interventions?” would be some mix of:
a) “We have looked at this, and we’re confident we can bound the magnitude of this effect to pretty negligible values, so we neglect them in our write-ups etc.”
b) “We tried looking into this, but our uncertainty is highly resilient (and our best guess doesn’t vary appreciably between interventions) so we get higher yield investigating other things.”
c) “We are explicit our analysis is predicated on moral (e.g. “human lives are so much more important than animals lives any impact on the latter is ~moot”) or epistemic (e.g. some ‘common sense anti-cluelessness’ position) claims which either we corporately endorse and/or our audience typically endorses.”
Perhaps such hopes would be generally disappointed.
2) Similar to above, I don’t object to (re. animals) positions like “Our view is this consideration isn’t a concern as X” or “Given this consideration, we target Y rather than Z”, or “Although we aim for A, B is a very good proxy indicator for A which we use in comparative evaluation.”
But I at least used to see folks appeal to motivations which obviate (inverse/) logic of the larder issues, particularly re. diet change (“Sure, it’s actually really unclear becoming vegan reduces or increases animal suffering overall, but the reason to be vegan is to signal concern for animals and so influence broader societal attitudes, and this effect is much more important and what we’re aiming for”). Yet this overriding motivation typically only ‘came up’ in the context of this discussion, and corollary questions like:
* “Is maximizing short term farmed animal welfare the best way of furthering this crucial goal of attitude change?”
* “Is encouraging carnivores to adopt a vegan diet the best way to influence attitudes?”
* “Shouldn’t we try and avoid an intervention like v*ganism which credibly harms those we are urging concern for, as this might look bad/be bad by the lights of many/most non-consequentialist views?”
seemed seldom asked.
Naturally I hope this is a relic of my perhaps jaundiced memory.
80,000 Hours and Toby Ord at least think that climate change could be an existential risk, and 80,000 Hours ranks it as a higher priority than global health and poverty, so I think it’s not obvious that the effects would be negligible (assuming total utilitarianism, say) if they tried to work through it, although they might still think so. Other responses they might give:
* GiveWell-recommended charities mitigate x-risk more than they worsen it, or do more good for the far future in other ways. Maybe there’s a longtermist case for growth. It doesn’t seem 80,000 Hours really believes this, though, or else health in poor countries would be higher up. Also, this seems like suspicious convergence, but they could still think the charities are justified primarily by short-term effects, if they think the long-term ones are plausibly close to 0 in expectation. Or,
* GiveWell discounts the lives of future people (e.g. with person-affecting views, possibly asymmetric ones, although climate change could still be important on some person-affecting views), which falls under your point c). I think this is a plausible explanation for GiveWell’s views based on what I’ve seen.
I think another good response (although not the one I’d expect) is that they don’t need to be confident the charities do more good than harm in expectation, since it’s actually very cheap to mitigate any possible risks from climate change from them by also donating to effective climate change charities, even if you’re deeply uncertain about how important climate change is. I discuss this approach more here. The result would be that you’re pretty sure you’re doing some decent minimum of good in expectation (from the health effects), whereas just the global health and poverty charity would be plausibly bad (due to climate change), and just the climate change charity would be plausibly close to 0 in expectation (due to deep uncertainty about the importance of climate change).
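The hedging argument above can be sketched numerically (a toy model with made-up numbers; the function names, the health benefit of 10 units, and the assumption that a dollar-matched climate donation exactly offsets the health charity’s climate footprint are all illustrative assumptions, not claims about any real charity’s cost-effectiveness):

```python
# Let x stand for the deeply uncertain moral importance of climate change,
# in arbitrary 'goodness units' per unit of emissions.

def health_charity(x):
    # Certain health benefit of 10 units, plus an uncertain climate harm
    # that scales with how important climate change turns out to be.
    return 10 - x

def climate_charity(x):
    # Averts emissions matching the health charity's footprint (assumed).
    return x

# However important climate change turns out to be, the combined
# portfolio locks in the health benefit:
for x in (0, 5, 100):
    assert health_charity(x) + climate_charity(x) == 10

print("portfolio value is 10 for every value of x tried")
```

The design point is that neither donation alone has this property: the health charity alone is plausibly bad for large x, and the climate charity alone is plausibly near 0 for small x, but the portfolio’s value is independent of x entirely.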
This is fair, and I expect that this still happens, but who was saying this? Is this how the animal charities (or their employees) themselves responded to these concerns? I think it’s plausible many did just think the short term benefits for farmed animals outweighed any effects on wild animals.
With respect to things other than diet, I don’t think EAs are assuming it is, and they are separately looking for the best approaches to attitude change, so this doesn’t seem important to ask. Corporate campaigns are primarily justified on the basis of their welfare effects for farmed animals, and still look good if you also include short term effects on wild animals. Other more promising approaches towards attitude change have been supported, like The Nonhuman Rights Project (previously an ACE Standout charity, and still a grantee), and plant-based substitutes and cultured meat (e.g. GFI).
I do think it’s among the best ways, depending on the approach, and I think people were already thinking this outside of the context of this discussion. I think eating animals causes speciesism and apathy, and is a significant psychological barrier to helping animals, farmed and wild. Becoming vegan (for many, not all) is a commitment to actively caring about animals, and can become part of someone’s identity. EAA has put a lot into the development of substitutes, especially through GFI, and these are basically our main hopes for influencing attitudes and also one of our best shots at eliminating factory farming.
I don’t think this is suspicious convergence. There are other promising approaches (like the Nonhuman Rights Project), but it’s hard enough to compare them directly that I don’t think any are clearly better, so I’d endorse supporting multiple approaches, including diet change.
I think the case for veganism is much stronger according to the most common non-consequentialist views (that still care about animals), because they often distinguish:
* intentional harms and exploitation/using others as mere means to ends, cruelty and supporting cruelty, from
* incidental harms and harms from omissions, like more nonhuman animals being born because we are not farming some animals more.
Of course, advocacy is not an omission, and what you suggest is also plausible.