I read the stakes here differently to you. I don't think folks thinking about cluelessness see it as substantially an exercise in developing a defeater to "everything which isn't longtermism". At least, that isn't my interest, and I think the literature has focused on AMF etc. more as a salient example to explore the concepts than as an important subject to apply them to.
The AMF discussions around cluelessness in the OP are intended as a toy example: if you like, deliberating purely on "is it good or bad to give to AMF versus this particular alternative?" instead of "Out of all options, should it be AMF?" Parallel to you, although I do think (per OP) AMF donations are net good, I also think (per the contours of your reply) it should be excluded as a promising candidate for the best thing to donate to: if what really matters is how the deep future goes, and the levers on this accessible at present are things like x-risk, interventions which are only tangentially related to these are so unlikely to be best that they can be ruled out ~immediately.
So if that isn't a main motivation, what is? Perhaps something like this:
1) How to manage deep uncertainty over the long-run ramifications of one's decisions is a challenge across EA-land: particularly acute for longtermists, but present elsewhere too, since most would care about the risk that a charitable intervention proves counter-productive in the medium term. In most cases the mechanisms for something to "backfire" are fairly trivial, but how seriously credible ones should be investigated is up for grabs.
Although "just be indifferent if it is hard to figure out" is a bad technique which finds little favour, I see a variety of mistakes in and around here. E.g.:
a) People not tracking when the ground of appeal for an intervention has changed. Although I don't see this with AMF, I do see it in and around animal advocacy. One crucial consideration here is WAS (wild animal suffering), particularly an "inverse logic of the larder" (see), such as "per area, a factory farm has a lower intensity of animal suffering than the environment it replaced".
Even if so, it wouldn't follow that the best thing to do is to be as carnivorous as possible; there are various lines of response. One is to say that the key objective of animal advocacy is to encourage greater concern for animal welfare, so that this can ramify through to benefits in the medium term. Yet if this is the rationale, metrics of "animal suffering averted per $" remain prominent despite having minimal relevance. If the aim of the game is attitude change, things like shelters and companion animals start looking a lot more credible than changes in factory-farmed welfare, in virtue of their greater salience.
b) Early (or motivated) stopping across crucial considerations. There are a host of ramifications of population growth which point in both directions (e.g. climate change, economic output, increased meat consumption, larger aggregate welfare, etc.). Although very few folks rely on these when considering interventions like AMF (but cf.), they are often relied upon by those suggesting interventions specifically targeted at fertility: enabling contraceptive access (e.g. more contraceptive access --> fewer births --> less of a poor meat eater problem), or reducing rates of abortion (e.g. less abortion --> more people with worthwhile lives --> greater total utility).
Discussions here are typically marred by proponents either completely ignoring considerations on the "other side" of the population growth question, or giving very unequal time to them/sheltering behind uncertainty (e.g. "Considerations X, Y, and Z all tentatively support more population growth; admittedly there's A, B, C, but we do not cover those in the interests of time", yet, had they been covered, they would probably tentatively oppose more population growth).
2) Given my fairly deflationary OP, I don't think these problems are best described as cluelessness (versus attending to resilient uncertainty and VoI in fairly orthodox evaluation procedures). But although I think I'm right, I don't think I'm obviously right: if orthodox approaches struggle here, less orthodox ones with representors, incomparability, or other features may be what should be used in decision-making (including for deciding when to act versus investigate further). If so, this reasoning looks like a fairly distinct species which could warrant its own label.
How to manage deep uncertainty over the long-run ramifications of one's decisions is a challenge across EA-land: particularly acute for longtermists, but present elsewhere too, since most would care about the risk that a charitable intervention proves counter-productive in the medium term
This makes some sense to me, although if that's all we're talking about I'd prefer to use plain English since the concept is fairly common. I think this is not all other people are talking about though; see my discussion with MichaelStJules.
FWIW, I don't think "risks" is quite the right word: sure, if we discover a risk which was so powerful and so tractable that it ends up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term ends up being worth >>1x the original target good (e.g. "GiveDirectly reduces extinction risk by reducing poverty, a known cause of conflict"); the surprisingly high magnitude of an incidental impact is what really catches my attention, because it suggests there are much better ways to do good.
Although I don't see this with AMF, I do see it in and around animal advocacy. One crucial consideration here is WAS (wild animal suffering), particularly an "inverse logic of the larder" (see), such as "per area, a factory farm has a lower intensity of animal suffering than the environment it replaced".
I think you were trying to draw a distinction, but FWIW this feels structurally similar to the "AMF impacts population growth/economic growth" argument to me, and I would give structurally the same response: once you truly believe a factory farm net-reduced animal suffering via the wild environment it incidentally destroyed, there are presumably much more efficient ways to destroy wild environments. As a result, it appears we can benefit wild animals much more than farmed animals, and ending factory farming should disappear from your priority list, at least as an end goal (it may come back as an itself-incidental consequence of e.g. "promoting as much concern for animal welfare as possible"). Is your point just that it does not in fact disappear from people's priority lists in this case? That I'm not well placed to observe or comment on either way.
b) Early (or motivated) stopping across crucial considerations.
This I agree is a problem. I'm not sure if thinking in terms of cluelessness makes it better or worse; I've had a few conversations now where my interlocutor tries to avoid the challenge of cluelessness by presenting an intervention that supposedly has no complex cluelessness attached. So far, I've been unconvinced by every case, and think said interlocutor is "stopping early" and missing aspects of impact about which they are complexly clueless (often economic/population growth impacts, since it's actually quite hard to come up with an intervention that doesn't credibly impact one of those).
I guess I think part of encouraging people to continue thinking rather than stop involves getting them comfortable with the fact that there's a perfectly reasonable chance that what they end up doing backfires: everything has risk attached, and trying to avoid it entirely is both a fool's errand and a quick path to analysis paralysis. Currently, my impression is that cluelessness-as-used pushes towards avoidance rather than acceptance, but the sample size is small, so this opinion is very weakly held. I would be more positive on people thinking about this if it seemed to help push them towards acceptance, though.
Given my fairly deflationary OP, I don't think these problems are best described as cluelessness
Point taken. Given that, I hope this wasn't too much of a hijack, or at least was an interesting one. I think I misunderstood how literally you intended the statements I quoted and disagreed with in my original comment.
FWIW, I don't think "risks" is quite the right word: sure, if we discover a risk which was so powerful and so tractable that it ends up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term ends up being worth >>1x the original target good (e.g. "GiveDirectly reduces extinction risk by reducing poverty, a known cause of conflict"); the surprisingly high magnitude of an incidental impact is what really catches my attention, because it suggests there are much better ways to do good.
(Apologies in advance if I'm rehashing unhelpfully.)
The usual cluelessness scenarios are more about the possibility that there is a powerful lever for impacting the future, and that your intended intervention may be pulling it in the wrong direction (rather than a "confirmed discovery"). Say your expectation for the EV of GiveDirectly on conflict has a distribution with a mean of zero but an SD of 10x the magnitude of the benefits you had previously estimated. If it were (e.g.) +10, there's a natural response of "shouldn't we try something which targets this on purpose?"; if it were 0, we wouldn't attend to it further; if it were -10, you wouldn't give to (now net EV = "-9") GiveDirectly.
The right response when all three scenarios are credible (plus all the intermediates) but you're unsure which one you're in isn't intuitively obvious (at least to me). Even if (like me) you're sympathetic to pretty doctrinaire standard EV accounts (i.e. you quantify this uncertainty plus all the others, just "run the numbers", and take the best EV), this approach seems to ignore the wide variance, which seems worthy of further attention.
The OP tries to reconcile this with the standard approach by saying this indeed often should be attended to, but under the guise of value of information rather than as something "extra" to orthodoxy. Even though we should still go with our best guess if we have to decide (so expectation-neutral but high-variance terms "cancel out"), we might have the option to postpone our decision and improve our guesswork. Whether to take that option should be governed by how resilient our uncertainty is. If your central estimate of GiveDirectly's effect on conflict would move on average by 2 units if you spent an hour thinking about it, that seems an hour well spent; if you thought you could spend a decade on it and remain where you are, going with the current best guess looks better.
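A minimal sketch of the move just described, in Python. All numbers are hypothetical and purely illustrative (the 10x SD comes from the GiveDirectly example above; the investigation-cost figures are invented for the sketch), and the "worth investigating" rule is a deliberately crude stand-in for a proper VoI calculation:

```python
import random

random.seed(0)

DIRECT_BENEFIT = 1.0   # the originally estimated good, in arbitrary units
INCIDENTAL_SD = 10.0   # SD of the uncertain incidental (conflict) effect: 10x the direct benefit

def incidental_effect():
    """One draw of the expectation-neutral but high-variance medium-term effect."""
    return random.gauss(0.0, INCIDENTAL_SD)

# Orthodox EV: quantify everything and "run the numbers". The incidental term has
# mean zero, so it washes out of the expectation despite its huge variance.
draws = [DIRECT_BENEFIT + incidental_effect() for _ in range(100_000)]
ev = sum(draws) / len(draws)   # close to 1.0: the direct benefit survives

# Crude resilience test: postponing to investigate is attractive only if the
# central estimate is expected to move by more than the investigation costs.
def worth_investigating(expected_shift_per_hour, hour_cost_in_good_units):
    return expected_shift_per_hour > hour_cost_in_good_units

worth_investigating(2.0, 0.1)   # estimate moves ~2 units/hour: investigate first
worth_investigating(0.0, 0.1)   # fully resilient uncertainty: just decide now
```

So the high-variance term changes nothing about the best guess itself, but the resilience of the estimate governs whether to act now or look further.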
This can be put in plain(er) English (although familiar-to-EA jargon like "EV" may remain). Yet there are reasons to be hesitant about the orthodox approach (even though I think the case in favour is ultimately stronger): besides the usual bullets, we would be kidding ourselves if we ever really had in our heads an uncertainty distribution to arbitrary precision, and maybe our uncertainty isn't even remotely approximated by the objects we manipulate in standard models of it. Or (owed to Andreas): even if so, similar to how rule-consequentialism may be better than act-consequentialism, some other epistemic policy might get better results than applying the orthodox approach in these cases of deep uncertainty.
Insofar as folks are more sympathetic to this, they would not want to be deflationary, and might urge investment in new techniques/vocabulary to grapple with the problem. They may also think we don't have a good "answer" yet for what to do in these situations, so may hesitate to give the "accept there's uncertainty but don't be paralysed by it" advice that you and I would. Maybe these issues are an open problem we should try to figure out better before pressing on.
(I suppose I should mention I'm an intern at ACE now, although I'm not speaking for them in this comment.)
These are important points, although I'm not sure I agree with your object-level judgements about how EAs are acting.
Also, it seems like you intended to include some links in this comment, but they're missing.
Although "just be indifferent if it is hard to figure out" is a bad technique which finds little favour
Little favour where? I think this is common in shorttermist EA. As you mention, effects on wild animals are rarely included. GiveWell's analyses ignore most of the potentially important consequences of population growth, and the metrics they present to donors on their top charities page are very narrow, too. GiveWell did look at how (child) mortality rates affect population sizes, and I think there are some discussions of some effects scattered around (although not necessarily tied to population size); e.g., I think they believe economic growth is good. Have they written anything about the effects of population growth on climate change and nonhuman animals? What indirect, plausibly negative effects of the interventions they support have they written about? I think it's plausible they just don't find these effects important or bad, although I wouldn't be confident in such a judgement without looking further into it myself.
Even if you thought the population effects of AMF were plausibly bad and you had deep uncertainty about them, you could target those consequences better with different interventions (e.g. donating to a climate change charity or an animal charity), or also support family planning to avoid affecting the population size much in expectation.
a) People not tracking when the ground of appeal for an intervention has changed. Although I don't see this with AMF, I do see it in and around animal advocacy. One crucial consideration here is WAS (wild animal suffering), particularly an "inverse logic of the larder" (see), such as "per area, a factory farm has a lower intensity of animal suffering than the environment it replaced".
Even if so, it wouldn't follow that the best thing to do is to be as carnivorous as possible; there are various lines of response. One is to say that the key objective of animal advocacy is to encourage greater concern for animal welfare, so that this can ramify through to benefits in the medium term. Yet if this is the rationale, metrics of "animal suffering averted per $" remain prominent despite having minimal relevance.
I'd be willing to bet factory farming is worse if you include only mammals and birds (and probably fishes), and while including amphibians or invertebrates might lead to deep uncertainty and moral uncertainty for me (although potentially resolvable without too much research), it doesn't for some others, and in this case "animal suffering averted per $" may not be that far off. Furthermore, I think it's reasonable to believe the welfare effects of corporate campaigns for chickens farmed for eggs and meat are far more important than the effects on land use. Stocking density restrictions can also increase the amount of space used for the same amount of food, although I don't know what the net effect is.
I worry more about diet change. More directly, some animal advocates treat wild fishes spared from consumption as a good outcome (although in this case, the metric is not animal suffering), but it's really not clear either way. I have heard the marginal fish killed for food might be farmed now, though.
If the aim of the game is attitude change, things like shelters and companion animals start looking a lot more credible than changes in factory-farmed welfare, in virtue of their greater salience.
Credible as cost-effective interventions for attitude change? I think there's already plenty of concern for companion animals, and it doesn't transfer that well on its own to farmed and wild animals as individuals without advocacy for them. I'd expect farmed animal advocacy to do a much better job of attitude change.
enabling contraceptive access (e.g. more contraceptive access --> fewer births --> less of a poor meat eater problem)
And even ignoring the question of whether factory farming is better or worse than the wildlife it replaces, extra people displace wildlife for reasons other than farming, too, introducing further uncertainty.
Discussions here are typically marred by proponents either completely ignoring considerations on the "other side" of the population growth question, or giving very unequal time to them/sheltering behind uncertainty (e.g. "Considerations X, Y, and Z all tentatively support more population growth; admittedly there's A, B, C, but we do not cover those in the interests of time", yet, had they been covered, they would probably tentatively oppose more population growth).
Some might just think the considerations on the other side actually aren't very important compared to those supporting their side, which may be why they're on that side in the first place. The shorter-term, measurable effects are also easier to take a position on. I agree that there's a significant risk of confirmation bias here, though, once they pick out one important estimable positive effect (including "direct" effects, like from AMF!).
1) I don't closely follow the current state of play in terms of "shorttermist" evaluation. The reply I hope (e.g.) a GiveWell analyst would make to (e.g.) "Why aren't you factoring in impacts on climate change for these interventions?" would be some mix of:
a) "We have looked at this, and we're confident we can bound the magnitude of this effect to pretty negligible values, so we neglect them in our write-ups etc."
b) "We tried looking into this, but our uncertainty is highly resilient (and our best guess doesn't vary appreciably between interventions), so we get higher yield investigating other things."
c) "We are explicit that our analysis is predicated on moral (e.g. 'human lives are so much more important than animal lives that any impact on the latter is ~moot') or epistemic (e.g. some 'common sense anti-cluelessness' position) claims which either we corporately endorse and/or our audience typically endorses."
Perhaps such hopes would be generally disappointed.
2) Similar to the above, I don't object (re. animals) to positions like "Our view is this consideration isn't a concern, as X", or "Given this consideration, we target Y rather than Z", or "Although we aim for A, B is a very good proxy indicator for A which we use in comparative evaluation."
But I at least used to see folks appeal to motivations which obviate (inverse) logic of the larder issues, particularly re. diet change ("Sure, it's actually really unclear whether becoming vegan reduces or increases animal suffering overall, but the reason to be vegan is to signal concern for animals and so influence broader societal attitudes, and this effect is much more important and what we're aiming for"). Yet this overriding motivation typically only "came up" in the context of this discussion, and corollary questions like:
* "Is maximizing short-term farmed animal welfare the best way of furthering this crucial goal of attitude change?"
* "Is encouraging carnivores to adopt a vegan diet the best way to influence attitudes?"
* "Shouldn't we try to avoid an intervention like v*ganism which credibly harms those we are urging concern for, as this might look bad/be bad by the lights of many/most non-consequentialist views?"
seemed seldom asked.
Naturally, I hope this is a relic of my perhaps jaundiced memory.
a) "We have looked at this, and we're confident we can bound the magnitude of this effect to pretty negligible values, so we neglect them in our write-ups etc."
80,000 Hours and Toby Ord at least think that climate change could be an existential risk, and 80,000 Hours ranks it as a higher priority than global health and poverty, so I think it's not obvious that the effects would be negligible (assuming total utilitarianism, say) if they tried to work through it, although they might still think so. Other responses they might give:
GiveWell-recommended charities mitigate x-risk more than they worsen it, or do more good for the far future in other ways. Maybe there's a longtermist case for growth. It doesn't seem 80,000 Hours really believes this, though, or else health in poor countries would be higher up. Also, this seems like suspicious convergence, but they could still think the charities are justified primarily by short-term effects, if they think the long-term ones are plausibly close to 0 in expectation. Or,
GiveWell discounts the lives of future people (e.g. with person-affecting views, possibly asymmetric ones, although climate change could still be important on some person-affecting views), which falls under your point c). I think this is a plausible explanation for GiveWell's views, based on what I've seen.
I think another good response (although not the one I'd expect) is that they don't need to be confident the charities do more good than harm in expectation, since it's actually very cheap to mitigate any possible climate risks from them by also donating to effective climate change charities, even if you're deeply uncertain about how important climate change is. I discuss this approach more here. The result would be that you're pretty sure you're doing some decent minimum of good in expectation (from the health effects), whereas the global health and poverty charity alone would be plausibly bad (due to climate change), and the climate change charity alone would be plausibly close to 0 in expectation (due to deep uncertainty about the importance of climate change).
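The hedging argument above can be made concrete with a toy model. Every number here is hypothetical (the representor values, the health benefit, and the offset ratio are all invented for illustration, not anyone's actual estimates); deep uncertainty is modelled as a representor, a set of credible values for how much the climate harm matters, and each option is scored by its worst-case EV across that set:

```python
# Representor: credible candidate weights for the moral importance of the
# climate harm caused per $ to the health charity (hypothetical values).
representor = [0.0, 2.0, 8.0, 12.0]

HEALTH_DIRECT = 10.0   # direct health good per $ (assumed)
OFFSET_RATIO = 1.5     # climate harm averted per $ to the climate charity (assumed)

def worst_case_ev(dollars_health, dollars_climate):
    """Minimum EV over the representor, i.e. under the least favourable credence."""
    return min(
        dollars_health * HEALTH_DIRECT    # direct health benefit
        - dollars_health * c              # climate harm from the health charity
        + dollars_climate * OFFSET_RATIO * c  # offset from the climate charity
        for c in representor
    )

worst_case_ev(1.0, 0.0)   # -2.0: health charity alone is plausibly bad
worst_case_ev(0.0, 1.0)   #  0.0: climate charity alone is plausibly ~no good
worst_case_ev(1.0, 1.0)   # 10.0: the portfolio guarantees a decent minimum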
But I at least used to see folks appeal to motivations which obviate (inverse) logic of the larder issues, particularly re. diet change ("Sure, it's actually really unclear whether becoming vegan reduces or increases animal suffering overall, but the reason to be vegan is to signal concern for animals and so influence broader societal attitudes, and this effect is much more important and what we're aiming for"). Yet this overriding motivation typically only "came up" in the context of this discussion
This is fair, and I expect that this still happens, but who was saying this? Is this how the animal charities (or their employees) themselves responded to these concerns? I think it's plausible many did just think the short-term benefits for farmed animals outweighed any effects on wild animals.
"Is maximizing short-term farmed animal welfare the best way of furthering this crucial goal of attitude change?"
With respect to things other than diet, I don't think EAs are assuming it is, and they are separately looking for the best approaches to attitude change, so this doesn't seem an important question to ask. Corporate campaigns are primarily justified on the basis of their welfare effects for farmed animals, and still look good if you also include short-term effects on wild animals. Other more promising approaches to attitude change have been supported, like the Nonhuman Rights Project (previously an ACE Standout Charity, and still a grantee), and plant-based substitutes and cultured meat (e.g. GFI).
"Is encouraging carnivores to adopt a vegan diet the best way to influence attitudes?"
I do think it's among the best ways, depending on the approach, and I think people were already thinking this outside of the context of this discussion. I think eating animals causes speciesism and apathy, and is a significant psychological barrier to helping animals, farmed and wild. Becoming vegan (for many, not all) is a commitment to actively caring about animals, and can become part of someone's identity. EAA has put a lot into the development of substitutes, especially through GFI, and these are basically our main hopes for influencing attitudes and also one of our best shots at eliminating factory farming.
I don't think this is suspicious convergence. There are other promising approaches (like the Nonhuman Rights Project), but it's hard enough to compare them directly that I don't think any are clearly better, so I'd endorse supporting multiple approaches, including diet change.
"Shouldn't we try to avoid an intervention like v*ganism which credibly harms those we are urging concern for, as this might look bad/be bad by the lights of many/most non-consequentialist views?"
I think the case for veganism is much stronger according to the most common non-consequentialist views (that still care about animals), because they often distinguish
intentional harms and exploitation/using others as mere means to ends, cruelty and supporting cruelty, from
incidental harms and harms from omissions, like more nonhuman animals being born because we are not farming some animals more.
Of course, advocacy is not an omission, and what you suggest is also plausible.
Belatedly:
I read the stakes here differently to you. I donât think folks thinking about cluelessness see it as substantially an exercise in developing a defeater to âeverything which isnât longtermismâ. At least, that isnât my interest, and I think the literature has focused on AMF etc. more as salient example to explore the concepts, rather than an important subject to apply them to.
The AMF discussions around cluelessness in the OP are intended as toy exampleâif you like, deliberating purely on âis it good or bad to give to AMF versus this particular alternative?â instead of âOut of all options, should it be AMF?â Parallel to you, although I do think (per OP) AMF donations are net good, I also think (per the contours of your reply) it should be excluded as a promising candidate for the best thing to donate to: if what really matters is how the deep future goes, and the axes of these accessible at present are things like x-risk, interventions which are only tangentially related to these are so unlikely to be best they can be ruled-out ~immediately.
So if that isnât a main motivation, what is? Perhaps something like this:
1) How to manage deep uncertainty over the long-run ramifications of ones decisions is a challenge across EA-landâparticularly acute for longtermists, but also elsewhere: most would care about risks about how in the medium term a charitable intervention could prove counter-productive. In most cases, these mechanisms for something to âbackfireâ are fairly trivial, but how seriously credible ones should be investigated is up for grabs.
Although âjust be indifferent if it is hard to figure outâ is a bad technique which finds little favour, I see a variety of mistakes in and around here. E.g.:
a) People not tracking when the ground of appeal for an intervention has changed. Although I donât see this with AMF, I do see this in and around animal advocacy. One crucial consideration around here is WAS, particularly an âinverse logic of the larderâ (see), such as âper area, a factory farm has a lower intensity of animal suffering than the environment it replacedâ.
Even if so, it wouldnât follow the best thing to do would to be as carnivorous as possible. There are also various lines of response. However, one is to say that the key objective of animal advocacy is to encourage greater concern about animal welfare, so that this can ramify through to benefits in the medium term. However, if this is the rationale, metrics of âanimal suffering averted per $â remain prominent despite having minimal relevance. If the aim of the game is attitude change, things like shelters and companion animals over changes in factory farmed welfare start looking a lot more credible again in virtue of their greater salience.
b) Early (or motivated) stopping across crucial considerations. There are a host of ramifications to population growth which point in both directions (e.g. climate change, economic output, increased meat consumption, larger aggregate welfare, etc.) Although very few folks rely on these when considering interventions like AMF (but cf.) they are often being relied upon by those suggesting interventions specifically targeted to fertility: enabling contraceptive access (e.g. more contraceptive access --> fewer births --> less of a poor meat eater problem), or reducing rates of abortion (e.g. less abortion --> more people with worthwhile lives --> greater total utility).
Discussions here are typically marred by proponents either completely ignoring considerations on the âother sideâ of the population growth question, or giving very unequal time to them/âsheltering behind uncertainty (e.g. âConsiderations X, Y, and Z all tentatively support more population growth, admittedly thereâs A, B, C, but we do not cover those in the interests of timeâyet, if we had, they probably would tentatively oppose more population growthâ).
2) Given my fairly deflationary OP, I donât think these problems are best described as cluelessness (versus attending to resilient uncertainty and VoI in fairly orthodox evaluation procedures). But although I think Iâm right, I donât think Iâm obviously right: if orthodox approaches struggle here, less orthodox ones with representors, incomparability or other features may be what should be used in decision-making (including when we should make decisions versus investigate further). If so then this reasoning looks like a fairly distinct species which could warrant itâs own label.
This makes some sense to me, although if thatâs all weâre talking about Iâd prefer to use plain English since the concept is fairly common. I think this is not all other people are talking about though; see my discussion with MichaelStJules.
FWIW, I donât think ârisksâ is quite the right word: sure, if we discover a risk which was so powerful and so tractable that we end up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is the fact that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term would end up being worth >>1x the original target good (e.g. âGive Directly reduces extinction risk by reducing poverty, a known cause of conflictâ); the surprisingly-high magnitude of an incidental impact is what is really catching my attention, because it suggests there are much better ways to do good.
I think you were trying to draw a distinction, but FWIW this feels structurally similar to the âAMF impact population growth/âeconomic growthâ argument to me, and I would give structually the same response: once you truly believe a factory farm net reduced animal suffering via the wild environment it incidentally destroyed, there are presumably much more efficient ways to destroy wild environments. As a result, it appears we can benefit wild animals much more than farmed animals, and ending factory farming should disappear from your priority list, at least as an end goal (it may come back as an itself-incidental consequence of e.g. âpromoting as much concern for animal welfare as possibleâ). Is your point just that it does not in fact disappear from peopleâs priority lists in this case? That Iâm not well-placed to observe or comment on either way.
This I agree is a problem. Iâm not sure if thinking in terms of cluelessness makes it better or worse; Iâve had a few conversations now where my interlocutor tries to avoid the challenge of cluelessness by presenting an intervention that supposedly has no complex cluelessness attached. So far, Iâve been unconvinced of every case and think said interlocutor is âstopping earlyâ and missing aspects of impact about which they are complexly clueless (often economic/âpopulation growth impacts, since itâs actually quite hard to come up with an intervention that doesnât credibly impact one of those).
I guess I think part of encouraging people to continue thinking rather than stop involves getting people comfortable with the fact that there's a perfectly reasonable chance that what they end up doing backfires: everything has risk attached, and trying to avoid it entirely is both a fool's errand and a quick path to analysis paralysis. Currently, my impression is that cluelessness-as-used is pushing towards avoidance rather than acceptance, but the sample size is small and so this opinion is very weakly held. I would be more positive on people thinking about this if it seemed to help push them towards acceptance, though.
Point taken. Given that, I hope this wasn't too much of a hijack, or at least was an interesting hijack. I think I misunderstood how literally you intended the statements I quoted and disagreed with in my original comment.
(Apologies in advance if I'm rehashing unhelpfully.)
The usual cluelessness scenarios are more about the possibility that there is a powerful lever for impacting the future, and that your intended intervention may be pulling it in the wrong direction (rather than a "confirmed discovery"). Say your expectation for the EV of GiveDirectly's effect on conflict has a distribution with a mean of zero but an SD of 10x the magnitude of the benefits you had previously estimated. If it were (e.g.) +10, there's a natural response of "shouldn't we try something which targets this on purpose?"; if it were 0, we wouldn't attend to it further; if it were -10, you wouldn't give to GiveDirectly (now net EV = -9).
The right response where all three scenarios are credible (plus all the intermediates) but you're unsure which one you're in isn't intuitively obvious (at least to me). Even if (like me) you're sympathetic to pretty doctrinaire standard EV accounts (i.e. you quantify this uncertainty plus all the others, just "run the numbers", and take the best EV), this approach seems to ignore the wide variance, which seems worthy of further attention.
The OP tries to reconcile this with the standard approach by saying this should indeed often be attended to, but under the guise of value of information rather than something "extra" to orthodoxy. Even though we should still go with our best guess if we had to decide (so expectation-neutral but high-variance terms "cancel out"), we might have the option to postpone our decision and improve our guesswork. Whether to take that option should be governed by how resilient our uncertainty is. If your central estimate of GiveDirectly and conflict would move on average by 2 units if you spent an hour thinking about it, that seems an hour well spent; if you thought you could spend a decade on it and remain where you are, going with the current best guess looks better.
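The arithmetic in the two paragraphs above can be made concrete in a few lines of Python (a toy illustration of my own; the function names, the normalisation of the original benefit to 1 unit, and the equal weighting of the three scenarios are all assumptions, not anything from the OP):

```python
# Toy sketch of the GiveDirectly/conflict example. The original benefit is
# normalised to +1 unit; the incidental conflict effect has mean 0 but an SD
# of 10x, so could plausibly be +10, 0, or -10.

def net_ev(benefit, conflict_effect):
    """Net expected value: original benefit plus incidental effect."""
    return benefit + conflict_effect

scenarios = {"+10": net_ev(1, 10), "0": net_ev(1, 0), "-10": net_ev(1, -10)}
# In the -10 scenario the net EV is -9, so you wouldn't donate.

# If the three scenarios (ignoring intermediates) were equally credible, the
# orthodox approach just averages them, recovering the original estimate:
best_guess = sum(scenarios.values()) / 3  # (11 + 1 - 9) / 3 = 1.0

# Value-of-information rule of thumb: postpone and investigate only if further
# thought is expected to move the estimate by more than some (hypothetical)
# threshold reflecting the cost of delaying the decision.
def worth_investigating(expected_shift, threshold=1.0):
    return expected_shift > threshold

worth_investigating(2.0)  # an hour that moves the estimate ~2 units: yes
worth_investigating(0.0)  # resiliently stuck: act on the current best guess
```

The point the sketch makes concrete is that the high-variance term "cancels out" of the best guess (it averages back to the original 1 unit), yet the spread still matters for whether investigating further is worthwhile.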
This can be put in plain(er) English (although familiar-to-EA jargon like "EV" may remain). Yet there are reasons to be hesitant about the orthodox approach (even though I think the case in favour is ultimately stronger): besides the usual bullets, we would be kidding ourselves if we ever really had in our head an uncertainty distribution to arbitrary precision, and maybe our uncertainty isn't even remotely approximated by the objects we manipulate in standard models of it. Or (a point owed to Andreas): even if so, similar to how rule-consequentialism may get better results than act-consequentialism, some other epistemic policy would get better results than applying the orthodox approach in these cases of deep uncertainty.
Insofar as folks are more sympathetic to this, they would not want to be deflationary, and perhaps would urge investment in new techniques/vocabulary to grapple with the problem. They may also think we don't have a good "answer" yet for what to do in these situations, so may hesitate to give the "accept there's uncertainty, but don't be paralysed by it" advice that you and I would. Maybe these issues are an open problem we should try to figure out better before pressing on.
(I suppose I should mention I'm an intern at ACE now, although I'm not speaking for them in this comment.)
These are important points, although I'm not sure I agree with your object-level judgements about how EAs are acting.
Also, it seems like you intended to include some links in this comment, but they're missing.
Little favour where? I think this is common in shorttermist EA. As you mention, effects on wild animals are rarely included. GiveWell's analyses ignore most of the potentially important consequences of population growth, and the metrics they present to donors on their top charities page are very narrow, too. GiveWell did look at how (child) mortality rates affect population sizes, and I think there are some discussions of some effects scattered around (although not necessarily tied to population size); e.g., I think they believe economic growth is good. Have they written anything about the effects of population growth on climate change and nonhuman animals? What indirect, plausibly negative effects of the interventions they support have they written about? I think it's plausible they just don't find these effects important or bad, although I wouldn't be confident in such a judgement without looking further into it myself.
Even if you thought the population effects from AMF were plausibly bad and you had deep uncertainty about them, you could target those consequences better with different interventions (e.g. donating to a climate change charity or animal charity) or also support family planning to avoid affecting the population size much in expectation.
I'd be willing to bet factory farming is worse if you include only mammals and birds (and probably fishes), and while including amphibians or invertebrates might lead to deep and moral uncertainty for me (although potentially resolvable without too much research), it doesn't for some others, and in this case "animal suffering averted per $" may not be that far off. Furthermore, I think it's reasonable to believe the welfare effects of corporate campaigns for chickens farmed for eggs and meat are far more important than the effects on land use. Stocking density restrictions can also increase the amount of space used for the same amount of food, too, although I don't know what the net effect is.
I worry more about diet change. More directly, some animal advocates treat wild fishes spared from consumption as a good outcome (although in this case the metric is not animal suffering), but it's really not clear either way. I have heard the marginal fish killed for food might be farmed now, though.
Credible as cost-effective interventions for attitude change? I think there's already plenty of concern for companion animals, and it doesn't transfer that well on its own to farmed and wild animals as individuals without advocacy for them. I'd expect farmed animal advocacy to do a much better job of attitude change.
And even ignoring the question of whether factory farming is better or worse than the wildlife it replaces, extra people replace wildlife for reasons other than farming, too, introducing further uncertainty.
Some might just think the considerations on the other side actually aren't very important compared to the considerations supporting their side, which may be why they're on that side in the first place. The more short-term, measurable effects are also easier to take a position on. I agree that there's a significant risk of confirmation bias here, though, once they pick out one important estimable positive effect (including "direct" effects, like from AMF!).
[Mea culpa re. messing up the formatting again]
1) I don't closely follow the current state of play in terms of "shorttermist" evaluation. The reply I hope (e.g.) a GiveWell analyst would make to (e.g.) "Why aren't you factoring in impacts on climate change for these interventions?" would be some mix of:
a) "We have looked at this, and we're confident we can bound the magnitude of this effect to pretty negligible values, so we neglect them in our write-ups etc."
b) "We tried looking into this, but our uncertainty is highly resilient (and our best guess doesn't vary appreciably between interventions), so we get higher yield investigating other things."
c) "We are explicit that our analysis is predicated on moral (e.g. 'human lives are so much more important than animal lives that any impact on the latter is ~moot') or epistemic (e.g. some 'common sense anti-cluelessness' position) claims which either we corporately endorse and/or our audience typically endorses."
Perhaps such hopes would be generally disappointed.
2) Similar to the above, I don't object to (re. animals) positions like "Our view is this consideration isn't a concern, as X", or "Given this consideration, we target Y rather than Z", or "Although we aim for A, B is a very good proxy indicator for A which we use in comparative evaluation."
But I at least used to see folks appeal to motivations which obviate (inverse) logic-of-the-larder issues, particularly re. diet change ("Sure, it's actually really unclear whether becoming vegan reduces or increases animal suffering overall, but the reason to be vegan is to signal concern for animals and so influence broader societal attitudes, and this effect is much more important and what we're aiming for"). Yet this overriding motivation typically only "came up" in the context of this discussion, and corollary questions like:
* "Is maximizing short-term farmed animal welfare the best way of furthering this crucial goal of attitude change?"
* "Is encouraging carnivores to adopt a vegan diet the best way to influence attitudes?"
* "Shouldn't we try and avoid an intervention like v*ganism which credibly harms those we are urging concern for, as this might look bad/be bad by the lights of many/most non-consequentialist views?"
seemed seldom asked.
Naturally I hope this is a relic of my perhaps jaundiced memory.
80,000 Hours and Toby Ord at least think that climate change could be an existential risk, and 80,000 Hours ranks it as a higher priority than global health and poverty, so I think it's not obvious that the effects would be negligible (assuming total utilitarianism, say) if they tried to work through it, although they might still think so. Other responses they might give:
GiveWell-recommended charities mitigate x-risk more than they worsen it, or do more good for the far future in other ways. Maybe there's a longtermist case for growth. It doesn't seem 80,000 Hours really believes this, though, or else health in poor countries would be higher up. Also, this seems like suspicious convergence, but they could still think the charities are justified primarily by short-term effects, if they think the long-term ones are plausibly close to 0 in expectation. Or,
GiveWell discounts the lives of future people (e.g. with person-affecting views, possibly asymmetric ones, although climate change could still be important on some person-affecting views), which falls under your point c). I think this is a plausible explanation for GiveWell's views based on what I've seen.
I think another good response (although not the one I'd expect) is that they don't need to be confident the charities do more good than harm in expectation, since it's actually very cheap to mitigate any possible risks from climate change from them by also donating to effective climate change charities, even if you're deeply uncertain about how important climate change is. I discuss this approach more here. The result would be that you're pretty sure you're doing some decent minimum of good in expectation (from the health effects), whereas just the global health and poverty charity would be plausibly bad (due to climate change), and just the climate change charity would be plausibly close to 0 in expectation (due to deep uncertainty about the importance of climate change).
This is fair, and I expect that this still happens, but who was saying this? Is this how the animal charities (or their employees) themselves responded to these concerns? I think it's plausible many did just think the short-term benefits for farmed animals outweighed any effects on wild animals.
With respect to things other than diet, I don't think EAs are assuming it is, and they are separately looking for the best approaches to attitude change, so this doesn't seem like an important question to ask. Corporate campaigns are primarily justified on the basis of their welfare effects for farmed animals, and still look good if you also include short-term effects on wild animals. Other more promising approaches towards attitude change have been supported, like the Nonhuman Rights Project (previously an ACE Standout Charity, and still a grantee), and plant-based substitutes and cultured meat (e.g. GFI).
I do think it's among the best ways, depending on the approach, and I think people were already thinking this outside of the context of this discussion. I think eating animals causes speciesism and apathy, and is a significant psychological barrier to helping animals, farmed and wild. Becoming vegan (for many, not all) is a commitment to actively caring about animals, and can become part of someone's identity. EAA has put a lot into the development of substitutes, especially through GFI, and these are basically our main hopes for influencing attitudes and also one of our best shots at eliminating factory farming.
I don't think this is suspicious convergence. There are other promising approaches (like the Nonhuman Rights Project), but it's hard enough to compare them directly that I don't think any are clearly better, so I'd endorse supporting multiple approaches, including diet change.
I think the case for veganism is much stronger according to the most common non-consequentialist views (that still care about animals), because they often distinguish
intentional harms and exploitation/using others as mere means to ends, cruelty and supporting cruelty, from
incidental harms and harms from omissions, like more nonhuman animals being born because we are not farming some animals more.
Of course, advocacy is not an omission, and what you suggest is also plausible.