Many of the considerations regarding the influence we can have on the deep future seem extremely hard, but not totally intractable, to investigate. Offering naive guesstimates for these, whilst lavishing effort on investigating easier but less consequential issues, is a grave mistake. The EA community has likely erred in this direction.
***
Yet others, those of complex cluelessness, do not score zero on "tractability". My credence in "economic growth in poorer countries is good for the longterm future" is fragile: if I spent an hour (or a week, or a decade) mulling it over, I would expect my central estimate to change, although I would expect my remaining uncertainty to be only a little reduced. Given this consideration has much greater impact on what I ultimately care about, time spent on it looks better than time spent further improving the estimate of immediate impacts like "number of children saved". It would be unwise to continue delving into the latter at the expense of the former. Would that we did otherwise.
At present, I straightforwardly disagree with both of these claims; I do not think it is a good use of "EA time" to try to pin down exactly what the long term (say >30 years out) effects of AMF donations are, or for that matter of any other action originally chosen for its short term benefits (such as... saving a drowning child). I feel very confused about why some people seem to think it is a good use of time, and would appreciate any enlightenment offered on this point.
The below is somewhat blunt, because at present cluelessness arguments appear to me to be almost entirely without decision-relevance. At the same time, a lot of people who I think are smarter than me appear to think they are relevant and interesting for something and someone, and I'm not quite clear what the something is or who the someone is, so my confidence is necessarily more limited than the tone would suggest. Then again, this thought is clearly not limited to me; e.g. Buck makes a very similar point here. I'd really love to get an example, even a hypothetical one, of where any of this would actually matter at the level of making donation/career/etc. decisions. Or, if everybody agrees it would not impact decisions, an explanation of why it is a good idea to spend any "EA time" on this at all.
***
For the challenge of complex cluelessness to have bite in the case of AMF donations, it seems to me that we need something in the vicinity of these two claims:
1. The expected long term consequences of our actions dominate the expected short term consequences, in terms of their moral relevance.
2. We can tractably make progress on predicting what those long term consequences are, beyond simple 1-to-1 extrapolation from the short term consequences, by considering those consequences directly.
In short, I claim that once we truly believe (1) and (2), AMF would no longer be on our list of donation candidates.
For example, suppose you go through your suggested process of reflection and come to the conclusion that AMF will in fact tractably boost economic growth in poorer countries, that such growth is one of the best ways to improve the longterm future, and that AMF's impact on growth is morally far more important than the consideration which motivated the choice of AMF in the first place, namely its impact on child mortality in the short term. Satisfied that you have now met the challenge of cluelessness, should you go ahead and click the "donate" button?
I think you obviously shouldn't. It seems obvious to me that at minimum you should now go and find an intervention that was explicitly chosen for the purpose of boosting long term growth by the largest amount possible. Since AMF was apparently able to predictably and tractably impact the long term via incidentally impacting growth, it seems like you should be able to do much better if you actually try; for the optimal approach to improving the long term future to be increasing growth via donating to AMF would be a prime example of Surprising and Suspicious Convergence. In fact, the opening quote from that piece seems particularly apt here:
Oliver: … Thus we see that donating to the opera is the best way of promoting the arts.
Eleanor: Okay, but I'm principally interested in improving human welfare.
Oliver: Oh! Well I think it is also the case that donating to the opera is best for improving human welfare too.
Similar thoughts would seem to apply to other possible side-effects of AMF donations as well: population growth impacts, impacts on animal welfare (wild or farmed), etc. In no case do I have reason to think that AMF is a particularly powerful lever to move those things, and so if I decide that any of them is the Most Important Thing then AMF would not even be on my list of candidate interventions.
Indeed, none of the people I know who think that the far future is massively morally important and that we can tractably impact it focus their do-gooding efforts on AMF, or anything remotely like AMF. To the extent they give money to AMF-like things, it is for appearances' sake, for personal comfort, or as a hedge against their central beliefs about how to do good being very wrong (see e.g. this comment by Aaron Gertler). As a result, cluelessness arguments appear to me to be addressing a constituency that doesn't actually exist, and attempts to resolve cluelessness are a "solution looking for a problem".
If cluelessness arguments are intended to have impact on the actual people donating to short term interventions as a primary form of doing good, they need to engage with the actual disagreements those people have, namely the questions of whether we can actually predict the size/direction of the longterm consequences despite the natural lack of feedback loops (see e.g. KHorton's comment here or MichaelStJules' comment here), or the empirical question of whether the impacts of our actions do in fact wax or wane over time (see e.g. reallyeli here), or the legion of potential philosophical objections.
Instead, cluelessness arguments appear to me to assume away all the disagreements, and then say "if we assume the future is massively morally relevant compared to the present, that we can tractably and predictably impact said future, and a broadly consequentialist approach, one should be longtermist in order to maximise good done". Which is in fact true, but unexciting once made clear.
Great comment. I agree with most of what you've said, particularly that trying to uncover whether donating to AMF is a great way to improve the long-run future seems a fool's errand.
This is where my quibble comes in:
If cluelessness arguments are intended to have impact on the actual people donating to short term interventions as a primary form of doing good, they need to engage with the actual disagreements those people have, namely the questions of whether we can actually predict the size/direction of the longterm consequences despite the natural lack of feedback loops
I don't think this is true. Cluelessness arguments intend to demonstrate that we can't be confident that we are actually doing good when we donate to, say, GiveWell charities, by noting that there are important indirect/long-run effects that we have good reason to expect will occur, and good reason to suspect are sufficiently important that they could change the sign of GiveWell's final number if properly included in their analysis. It seems to me that this should have an impact on any person donating to GiveWell charities, unless for some reason these people just don't care about indirect/long-run effects of their actions (e.g. they have a very high rate of pure time preference). In reality though you'll be hard-pressed to find EAs who don't think indirect/long-run effects matter, so I would expect cluelessness arguments to have bite for many EAs.
I don't think you need to demonstrate to people that we can tractably influence the far future for them to be impacted by cluelessness arguments. It is certainly possible to think cluelessness arguments are problematic for justifying giving to GiveWell charities, and also to think we can't tractably affect the far future. At that point you may be left in a tough place but, well, tough! You might at that point be forgiven for giving up on EA entirely, as Greaves notes.
(By the way I recall us having a similar convo on Facebook about this, but this is certainly a better place to have it!)
If we have good reason to expect important far future effects to occur when donating to AMF, important enough to change the sign if properly included in the ex ante analysis, that is equivalent to (actually somewhat stronger than) saying we can tractably influence the far future, since by stipulation AMF itself now meaningfully and predictably influences the far future. I currently don't think you can believe the first and not the second, though I'm open to someone showing me where I'm wrong.
Note that complex cluelessness only arises when we know something about how the future will be impacted, but don't know enough about these foreseeable impacts to know if they are net good or bad when taken in aggregate. If we knew literally nothing about how the future would be impacted by an intervention this would be a case of simple cluelessness, not complex cluelessness, and Greaves argues we can ignore simple cluelessness.
What Greaves argues is that we don't in fact know literally nothing about the long-run impacts of giving to GiveWell charities. For example, Greaves says we can be pretty sure there will be long-term population effects of giving to AMF, and that these effects will be very important in the moral calculus (so we know something). But she also says, amongst other things, that we can't be sure whether the long-run effect on population will be positive or negative (so we clearly don't know very much).
So yes, we are in fact predictably influencing the far future by giving to AMF, in that we know we will be affecting the number of people who will live in the future. However, I wouldn't say we are influencing the far future in a "tractable way", because we're not actually making the future better (or worse) in expectation, because we are utterly clueless. Making the far future better or worse in expectation is the sort of thing longtermists want to do, and they claim there are some ways to do so.
So yes, we are in fact predictably influencing the far future by giving to AMF, in that we know we will be affecting the number of people who will live in the future. However, I wouldn't say we are influencing the far future in a "tractable way", because we're not actually making the future better (or worse) in expectation
If we aren't making the future better or worse in expectation, it's not impacting my decision whether or not to donate to AMF. We can then safely ignore complex cluelessness for the same reason we would ignore simple cluelessness.
Cluelessness only has potential to be interesting if we can plausibly reduce how clueless we are with investigation (this is a lot of Greg's point in the OP); in this sense the simple/complex difference Greaves identifies is not quite the action-relevant distinction. If, having investigated, far future impacts meaningfully alter the AMF analysis, this is precisely because we have decided that AMF meaningfully impacts the far future in at least one way that is good or bad in expectation, i.e. we can tractably impact the far future.
Put simply, if we cannot affect the far future in expectation at all, then logically AMF cannot affect the far future in expectation. If AMF does not affect the far future in expectation, far future effects need not concern its donors.
If we aren't making the future better or worse in expectation, it's not impacting my decision whether or not to donate to AMF. We can then safely ignore complex cluelessness for the same reason we would ignore simple cluelessness.
Saying that the long-run effects of giving to AMF are not positive or negative in expectation is not the same as saying that the long-run effects are zero in expectation. The point of complex cluelessness is that we don't really have a well-formed expectation at all, because there are so many foreseeable complex factors at play.
In simple cluelessness there is symmetry across acts, so we can say the long-run effects are zero in expectation, but in complex cluelessness we can't say this. If you can't say the long-run effects are zero in expectation, then you can't ignore the long-run effects.
I'm not sure how to parse this "expectation that is neither positive nor negative nor zero but still somehow impacts decisions" concept, so maybe that's where my confusion lies. If I try to work with it, my first thought is that not giving money to AMF would seem to have an undefined expectation for the exact same reason that giving money to AMF would have an undefined expectation; if we wish to avoid actions with undefined expectations (but why?), we're out of luck, and this collapses back to being decision-irrelevant.
I have read the paper. I'm surprised you think it's well-explained there, since it's pretty dense. Accordingly, I won't pretend I understood all of it. But I do note it ends as follows (emphasis added):
It is not at all obvious on reflection, however, what the phenomenon of cluelessness really amounts to. In particular, it (at least at first sight) seems difficult to capture within an orthodox Bayesian model, according to which any given rational agent simply settles on some particular precise credence function, and the subjective betterness facts follow. Here, I have explored various possibilities within an "imprecise-credence" model. Of these, the most promising account, on the assumption that the phenomenon of cluelessness really is a genuine and deep one, involved a "supervaluational" account of the connection between imprecise credences and permissibility. It is also not at all obvious, however, how deep or important the phenomenon of cluelessness really is. In the context of effective altruism, it strikes many as compelling and as deeply problematic. However, mundane, everyday cases that have a similar structure in all respects I have considered are also ubiquitous, and few regard any resulting sense of cluelessness as deeply problematic in the latter cases. It may therefore be that the diagnosis of would-be effective altruists' sense of cluelessness, in terms of psychology and/or the theory of rationality, lies quite elsewhere.
And of course Greaves has since said that she does think we can tractably influence the far future, which resolves the conflict I'm pointing to anyway. In other words, I'm not sure I actually disagree with Greaves-the-individual at all, just with (some of) the people who quote her work.
Perhaps we could find some other interventions for which that's the case to a much lesser extent. If we deliberately try to beneficially influence the course of the very far future, can we find things where we more robustly have at least some clue that what we're doing is beneficial and of how beneficial it is? I think the answer is yes.
I'm not sure how to parse this "expectation that is neither positive nor negative nor zero but still somehow impacts decisions" concept, so maybe that's where my confusion lies. If I try to work with it, my first thought is that not giving money to AMF would seem to have an undefined expectation for the exact same reason that giving money to AMF would have an undefined expectation; if we wish to avoid actions with undefined expectations (but why?), we're out of luck, and this collapses back to being decision-irrelevant.
I would put it as entertaining multiple probability distributions for the same decision, with different expected values. Even if you have ranges of (so not singly defined) expected values, there can still be useful things you can say.
Suppose you have 4 different acts with EVs in the following ranges:
1. [-100, 100] (say this is AMF in our example)
2. [5, 50]
3. [1, 1000]
4. [100, 105]
I would prefer each of 2, 3 and 4 to 1, since they're all robustly positive, while 1 is not. 4 is also definitely better in expectation than 1 and 2 (according to the probability distributions we're considering), since its EV falls completely to the right of each's, so this means neither 1 nor 2 is permissible. Without some other decision criteria or information, 3 and 4 would both be permissible, and it's not clear which is better.
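To make the comparison rule concrete, here is a minimal Python sketch (the interval representation and the `dominates` helper are assumptions of mine for illustration, not anything specified in the thread) of the maximality rule: an act is permissible iff no other act's EV range falls entirely to the right of its own.

```python
# Minimal sketch (assumed representation): each act's expected value is an
# interval [lo, hi] over the set of distributions considered. Act a (weakly)
# dominates act b if a's lowest EV is at least b's highest EV; undominated
# acts are "permissible" under the maximality rule described above.

acts = {
    1: (-100, 100),  # AMF in the example
    2: (5, 50),
    3: (1, 1000),
    4: (100, 105),
}

def dominates(a, b):
    """True if every EV for a is at least every EV for b (and a != b)."""
    return a != b and a[0] >= b[1]

permissible = [
    name for name, ev in acts.items()
    if not any(dominates(other, ev) for other in acts.values())
]
print(permissible)  # [3, 4]: act 4 dominates 1 and 2; nothing dominates 3 or 4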
Thanks for the response, but I don't think this saves it. In the below I'm going to treat your ranges as being about the far future impacts of particular actions, but you could substitute "all the impacts of particular actions" if you prefer.
In order for there to be useful things to say, you need to be able to compare the ranges. And if you can rank the ranges ("I would prefer 2 to 1", "I am indifferent between 3 and 4", etc.), and that ranking obeys basic rules like transitivity, that seems equivalent to collapsing all the ranges to single numbers. Collapsing two actions to the same number is fine. So in your example I could arbitrarily assign a "score" of 0 to action 1, a score of 1 to action 2, and scores of 2 to each of 3 and 4.
Then my decision rule just switches from "do the thing with highest expected value" to "do (one of) the things with highest score", and the rest of the argument is essentially unchanged: either every possible action has the same score or it doesn't. If some things have higher scores than others, then replacing a lower score action with a higher score action is a way to tractably make the far future better.
Therefore, claims that we cannot tractably make the far future better force all the scores among all actions being taken to be the same, and if the scores are all the same I think your scoring system is decision-irrelevant; it will never push for action A over action B.
Did I miss an out? It's been a while since I've had to think about weak orderings...
Ya, it's a weak ordering, so you can't necessarily collapse them to single numbers, because of incomparability.
[1, 1000] and [100, 105] are incomparable. If you tried to make them equivalent, you could run into problems, say with [5, 50], which is also incomparable with [1, 1000] but dominated by [100, 105].
[5, 50] < [100, 105]
[1, 1000] incomparable to the other two
If your set of options was just these 3, then, sure, you could say [100, 105] and [1, 1000] are equivalent since neither is dominated, but if you introduce another option which dominates one but not the other, that equivalence would be broken.
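To illustrate the breakage (using the same assumed interval form as the sketch above), treating "neither dominates" as equality conflicts with transitivity:

```python
# Sketch: if incomparability were indifference, scores would have to be
# transitive, but dominance between these intervals refutes that.

def dominates(a, b):
    return a != b and a[0] >= b[1]

x = (1, 1000)   # incomparable with both y and z
y = (100, 105)
z = (5, 50)

print(dominates(y, z))                   # True: y beats z outright
print(dominates(x, y), dominates(y, x))  # False False: incomparable
print(dominates(x, z), dominates(z, x))  # False False: incomparable

# Equal scores for (x, y) and for (x, z) would force equal scores for
# (y, z) by transitivity, contradicting the first line. So the ordering
# is only partial and cannot be collapsed into single numbers.
```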
Therefore, claims that we cannot tractably make the far future better force all the scores among all actions being taken to be the same, and if the scores are all the same I think your scoring system is decision-irrelevant; it will never push for action A over action B.
I think there are two ways of interpreting "make the far future better":
1. compared to doing nothing/business as usual, and
2. compared to a specific other option.
1 implies 2, but 2 does not imply 1. It might be the case that none of the options look robustly better than doing nothing, but still some options are better than others. For example, writing their expected values as the difference from doing nothing, we could have:
1. [-2, 1]
2. [-1, 2]
3. 0 (do nothing)
and suppose specifically that our distributions are such that 2 always dominates 1, because of some correspondence between pairs of distributions. For example, although I can think up scenarios where the opposite might be true, it seems going out of your way to torture an animal to death (for no particular benefit) is dominated at least by killing them without torturing them. Basically, 1 looks like 2 but with extra suffering and the harms to your character.
In this scenario, we canât reliably make the world better, compared to doing nothing, but we still have that option 2 is better than option 1.
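A sketch of that structure with toy numbers (all payoffs below are assumptions of mine, not from the comment): represent the deep uncertainty as a set of candidate probabilities that the long-run effect is good, and make option 1 identical to option 2 except for a sure extra harm.

```python
# Toy representor: candidate probabilities that the long-run effect is good.
# Payoffs are relative to doing nothing (EV 0) and purely illustrative.

GOOD, BAD = 2.0, -1.0   # assumed long-run payoff if the effect is good/bad
EXTRA_HARM = 0.5        # the sure extra suffering attached to option 1

representor = [0.2, 0.4, 0.6, 0.8]

for p in representor:
    ev2 = p * GOOD + (1 - p) * BAD   # option 2: the intervention alone
    ev1 = ev2 - EXTRA_HARM           # option 1: same, plus the sure harm
    print(f"p={p}: EV(1)={ev1:+.1f}, EV(2)={ev2:+.1f}")

# EV(2) runs from -0.4 to +1.4, so its sign versus doing nothing is
# unsettled across the representor; but EV(1) < EV(2) under every
# distribution, so option 1 is ruled out regardless.
```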
Thanks again. I think my issue is that I'm unconvinced that incomparability applies when faced with ranking decisions. In a forced choice between A and B, I'd generally say you have three options: choose A, choose B, or be indifferent.
Incomparability in this context seems to imply that one could be indifferent between A and B, prefer C to A, yet be indifferent between C and B. That just sounds wrong to me, and is part of what I was getting at when I mentioned transitivity; curious if you have a concrete example where this feels intuitive?
For the second half, note I said among all actions being taken. If "business as usual" includes action A, which is dominated by action B, we can improve things by replacing A with B.
I think my issue is that I'm unconvinced that incomparability applies when faced with ranking decisions. In a forced choice between A and B, I'd generally say you have three options: choose A, choose B, or be indifferent.
I think if you reject incomparability, you're essentially assuming away complex cluelessness and deep uncertainty. The point in this case is that there are considerations going in each direction, and I don't know how to weigh them against one another (in particular, no evidential symmetry). So, while I might just pick an option if forced to choose between A, B and indifferent, it doesn't reveal a ranking, since you've eliminated the option I'd want to give, "I really don't know". You could force me to choose among wrong answers to other questions, too.
That just sounds wrong to me, and is part of what I was getting at when I mentioned transitivity; curious if you have a concrete example where this feels intuitive?
B = business as usual / "doing nothing"
C = working on a cause you have complex cluelessness about, i.e. you're not willing to say it's better or worse than or equivalent to B (e.g. for me, climate change is an example)
A = C but also torturing a dog that was about to be put down anyway (or maybe generally just being mean to others)
I'm willing to accept that C > A, although I could see arguments made for complex cluelessness about that comparison (e.g. through the indirect effects of torturing a dog on your work, which you already have complex cluelessness about). Torturing a dog, however, could be easily dominated by the extra effects of climate change in A or C compared to B, so it doesn't break the complex cluelessness that we already had comparing B and C.
Some other potential examples here, although these depend on how the numbers work out.
I think if you reject incomparability, you're essentially assuming away complex cluelessness and deep uncertainty.
That's really useful, thanks; at the very least I now feel like I'm much closer to identifying where the different positions are coming from. I still think I reject incomparability; the example you gave didn't strike me as compelling, though I can imagine it compelling others.
So, while I might just pick an option if forced to choose between A, B and indifferent, it doesn't reveal a ranking, since you've eliminated the option I'd want to give, "I really don't know". You could force me to choose among wrong answers to other questions, too.
I would say it's reality that's doing the forcing. I have money to donate currently; I can choose to donate it to charity A, or B, or C, etc., or to not donate it. I am forced to choose and the decision has large stakes; "I don't know" is not an option ("wait and do more research" is, but that doesn't seem like it would help here). I am doing a particular job as opposed to all the other things I could be doing with that time; I have made a choice, and for the rest of my life I will continue to be forced to choose what to do with my time. Etc.
It feels intuitively obvious to me that those many high-stakes forced choices can and should be compared in order to determine the all-things-considered best course of action, but it's useful to know that this intuition is apparently not shared.
if we wish to avoid actions with undefined expectations (but why?), we're out of luck.
It's not so much that we should avoid doing it full stop; it's more that if we're looking to do the most good then we should probably avoid doing it, because we don't actually know if it does good. If you don't have your EA hat on then you can justify doing it for other reasons.
I have read the paper. I'm surprised you think it's well-explained there, since it's pretty dense.
I've only properly read it once and it was a while back. I just remember it having quite an effect on me. Maybe I read it a few times to fully grasp it, can't quite remember. I'd be quite surprised if it immediately clicked for me, to be honest. I clearly don't remember it that well, because I forgot that Greaves had that discussion about the psychology of cluelessness, which is interesting.
And of course Greaves has since said that she does think we can tractably influence the far future, which resolves the conflict I'm pointing to anyway. In other words, I'm not sure I actually disagree with Greaves-the-individual at all, just with (some of) the people who quote her work.
Just to be clear, I also think that we can tractably influence the far future in expectation (e.g. by taking steps to reduce x-risk). I'm not really sure how that resolves things.
I'm surprised to hear you say you're unsure you disagree with Greaves. Here's another quote from her (from here). I'd imagine you disagree with this?
What do we get when we put all those three observations together? Well, what I get is a deep-seated worry about the extent to which it really makes sense to be guided by cost-effectiveness analyses of the kinds that are provided by meta-charities like GiveWell. If what we have is a cost-effectiveness analysis that focuses on a tiny part of the thing we care about, and if we basically know that the real calculation, the one we actually care about, is going to be swamped by this further future stuff that hasn't been included in the cost-effectiveness analysis, how confident should we be really that the cost-effectiveness analysis we've got is any decent guide at all to how we should be spending our money? That's the worry that I call "cluelessness". We might feel clueless about how to spend money even after reading GiveWell's website.
Just to be clear, I also think that we can tractably influence the far future in expectation (e.g. by taking steps to reduce x-risk). I'm not really sure how that resolves things.
If you think you can tractably impact the far future in expectation, AMF can impact the far future in expectation. At which point it's reasonable to think that those far future impacts could be predictably negative on further investigation, since we weren't really selecting for them to be positive. I do think trying to resolve the question of whether they are negative is probably a waste of time, for reasons in my first comment, and it sounds like we agree on that, but at that point it's reasonable to say that "AMF could be good or bad, I'm not really sure, because I've chosen to focus my limited time and attention elsewhere". There's no deep or fundamental uncertainty here, just a classic example of triage leading us to prioritise promising-looking paths over unpromising-looking ones.
For the same reason, I don't see anything wrong with that quote from Greaves; coming from someone who thinks we can tractably impact the far future and that the far future is massively morally relevant, it makes a lot of sense. If it came from someone who thought it was impossible to tractably impact the future, I'd want to dig into it more.
On a slightly different note, I can understand why one might not think we can tractably impact the far future, but what about the medium-term future? For example, it seems that mitigating climate change is a pretty surefire way to improve the medium-term future (in expectation). Would you agree with that?
If you accept that, then you might also accept that we are clueless about giving to AMF based on its possible medium-term climate change impacts (e.g. maybe giving to AMF will increase populations in the near to medium term, and this will increase carbon emissions). What do you think about this line of reasoning?
Medium-term indirect impacts are certainly worth monitoring, but they have a tendency to be much smaller in magnitude than the primary impacts being measured, in which case they don't pose much of an issue; to the best of my current knowledge, carbon emissions from saving lives are a good example of this.
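As a purely illustrative back-of-envelope of the kind of magnitude check meant here (every number below is a placeholder assumption, not a real estimate; getting the real figures is exactly the empirical work at issue):

```python
# Illustrative only: compare the direct benefit of saving a life with the
# climate harm from that person's lifetime emissions. All numbers are
# assumed placeholders, not real estimates.

life_saved_dalys = 30.0        # assumed DALYs gained per life saved
lifetime_emissions_t = 60.0    # assumed tonnes of CO2 over a lifetime
dalys_lost_per_tonne = 0.002   # assumed health burden per tonne emitted

indirect_harm = lifetime_emissions_t * dalys_lost_per_tonne  # 0.12 DALYs
print(f"indirect/direct ratio: {indirect_harm / life_saved_dalys:.4f}")
# 0.0040: under these assumptions the indirect effect is roughly two to
# three orders of magnitude smaller than the primary impact, hence "worth
# monitoring" but not sign-flipping.
```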
Of course, one could absolutely think that a dollar spent on climate mitigation is more valuable than a dollar spent saving the lives of the global poor. But that's very different to the cluelessness line of attack; put harshly, it's the difference between choosing not to save a drowning child because there is another pond with even more drowning children and you have to make a hard trolley-problem-like choice, versus choosing to walk on by because who even really knows if saving that child would be good anyway. FWIW, I feel like many people effectively arguing the latter in the abstract would not actually walk on by if faced with that actual physical situation, which, if I'm being honest, is probably part of why I find it difficult to take these arguments seriously; we are in fact in that situation all the time, whether we realise it or not, and if we wouldn't ignore the drowning child on our doorstep we shouldn't entirely ignore the ones half a world away... unless we are unfortunately forced to do so by the need to save even greater numbers / prevent even greater suffering.
Medium-term indirect impacts are certainly worth monitoring, but they have a tendency to be much smaller in magnitude than the primary impacts being measured, in which case they don't pose much of an issue; to the best of my current knowledge, carbon emissions from saving lives are a good example of this.
Perhaps, although I wouldn't say it's a priori obvious, so I would have to read more to be convinced.
I didn't raise animal welfare concerns either, which I also think are relevant in the case of saving lives. In other words, I'm not sure you need to raise future effects for cluelessness worries to have bite, although I admit I'm less sure about this.
put harshly, it's the difference between choosing not to save a drowning child because there is another pond with even more drowning children and you have to make a hard trolley-problem-like choice, versus choosing to walk on by because who even really knows if saving that child would be good anyway. FWIW, I feel like many people effectively arguing the latter in the abstract would not actually walk on by if faced with that actual physical situation, which, if I'm being honest, is probably part of why I find it difficult to take these arguments seriously;
I certainly wouldn't walk on by, but that's mainly due to a mix of factoring in moral uncertainty (deontologists would think me the devil) and not wanting the guilt of having walked on by. Also, I'm certainly not 100% sure about the cluelessness critique, so there's that too. The cluelessness critique seems sufficient to me to want to search for other ways than AMF to do the most good, but not to literally walk past a drowning child.
I certainly wouldn't walk on by, but that's mainly due to a mix of factoring in moral uncertainty (deontologists would think me the devil) and not wanting the guilt of having walked on by.
This makes some sense, but to take a different example, I've followed a lot of the COVID debates in EA and EA-adjacent circles, and literally not once have I seen cluelessness brought up as a reason to be concerned that maybe saving lives via faster lockdowns or more testing or more vaccines or whatever is not actually a good thing to do. Yet it seems obvious that some level of complex cluelessness applies here if it applies anywhere, and this is a case where simply ignoring COVID efforts and getting on with your daily life (as best one can) is what most people have done, and certainly not something I would expect to leave people struggling with guilt or facing harsh critique from deontologists.
As Greaves herself notes, such situations are ubiquitous, and the fact that cluelessness worries are only being felt in a very small subset of the situations should lead to a certain degree of skepticism that they are in fact what is really going on. But I don't want to spend too much time throwing around speculation about intentions relative to focusing on the object-level arguments made, so will leave this train of thought here.
This makes some sense, but to take a different example, I've followed a lot of the COVID debates in EA and EA-adjacent circles, and literally not once have I seen cluelessness brought up as a reason to be concerned
To be fair I would say that taking the cluelessness critique seriously is still quite fringe even within EA (my poll on Facebook provided some indication of this).
With an EA hat on I want us to sort out COVID because I think COVID is restricting our ability to do certain things that may be robustly good. With a non-EA hat on I want us to sort out COVID because lockdown is utterly boring (although it actually got me into EA and this forum a bit more, which is good) and I don't want my friends and family (or myself!) to be at risk of dying from it.
and this is a case where simply ignoring COVID efforts and getting on with your daily life (as best one can) is what most people have done, and certainly not something I would expect to leave people struggling with guilt or facing harsh critique from deontologists.
Most people have decided to obey lockdowns and be careful in how they interact with others, in order to save lives. In terms of EAs not doing more (e.g. donating money), I think this comes down to the regular argument of COVID not being that neglected and that there are probably better ways to do good. In terms of saving lives, I think deontologists require you to save a drowning child in front of you, but I'm not actually sure how far that obligation extends temporally/spatially.
As Greaves herself notes, such situations are ubiquitous, and the fact that cluelessness worries are only being felt in a very small subset of the situations should lead to a certain degree of skepticism that they are in fact what is really going on.
This is interesting and slightly difficult to think about. I think that when I encounter decisions in non-EA life that I am complexly clueless about, I let my personal gut feeling take over. This doesn't feel acceptable in EA situations because, well, EA is all about not letting personal gut feelings take over. So I guess this is my tentative answer to Greaves' question.
For example, it seems that mitigating climate change is a pretty surefire way to improve the medium-term future (in expectation). Would you agree with that?
I have complex cluelessness about the effects of climate change on wild animals, which could dominate the effects on humans and farmed animals.
I read the stakes here differently to you. I don't think folks thinking about cluelessness see it as substantially an exercise in developing a defeater to "everything which isn't longtermism". At least, that isn't my interest, and I think the literature has focused on AMF etc. more as a salient example to explore the concepts, rather than an important subject to apply them to.
The AMF discussions around cluelessness in the OP are intended as a toy example: if you like, deliberating purely on "is it good or bad to give to AMF versus this particular alternative?" instead of "Out of all options, should it be AMF?" Parallel to you, although I do think (per OP) AMF donations are net good, I also think (per the contours of your reply) it should be excluded as a promising candidate for the best thing to donate to: if what really matters is how the deep future goes, and the axes of influence on this accessible at present are things like x-risk, interventions which are only tangentially related to these are so unlikely to be best they can be ruled out ~immediately.
So if that isn't a main motivation, what is? Perhaps something like this:
1) How to manage deep uncertainty over the long-run ramifications of one's decisions is a challenge across EA-land, particularly acute for longtermists but also elsewhere: most would care about the risk that in the medium term a charitable intervention could prove counter-productive. In most cases, these mechanisms for something to "backfire" are fairly trivial, but how seriously credible ones should be investigated is up for grabs.
Although "just be indifferent if it is hard to figure out" is a bad technique which finds little favour, I see a variety of mistakes in and around here. E.g.:
a) People not tracking when the ground of appeal for an intervention has changed. Although I don't see this with AMF, I do see this in and around animal advocacy. One crucial consideration around here is WAS, particularly an "inverse logic of the larder" (see), such as "per area, a factory farm has a lower intensity of animal suffering than the environment it replaced".
Even if so, it wouldn't follow that the best thing to do would be to be as carnivorous as possible. There are also various lines of response. However, one is to say that the key objective of animal advocacy is to encourage greater concern about animal welfare, so that this can ramify through to benefits in the medium term. Yet if this is the rationale, metrics of "animal suffering averted per $" remain prominent despite having minimal relevance. If the aim of the game is attitude change, things like shelters and companion animals over changes in factory farmed welfare start looking a lot more credible again in virtue of their greater salience.
b) Early (or motivated) stopping across crucial considerations. There are a host of ramifications to population growth which point in both directions (e.g. climate change, economic output, increased meat consumption, larger aggregate welfare, etc.). Although very few folks rely on these when considering interventions like AMF (but cf.), they are often relied upon by those suggesting interventions specifically targeted at fertility: enabling contraceptive access (e.g. more contraceptive access --> fewer births --> less of a poor meat eater problem), or reducing rates of abortion (e.g. less abortion --> more people with worthwhile lives --> greater total utility).
Discussions here are typically marred by proponents either completely ignoring considerations on the "other side" of the population growth question, or giving very unequal time to them / sheltering behind uncertainty (e.g. "Considerations X, Y, and Z all tentatively support more population growth; admittedly there's A, B, C, but we do not cover those in the interests of time", yet, if we had, they probably would tentatively oppose more population growth).
2) Given my fairly deflationary OP, I don't think these problems are best described as cluelessness (versus attending to resilient uncertainty and VoI in fairly orthodox evaluation procedures). But although I think I'm right, I don't think I'm obviously right: if orthodox approaches struggle here, less orthodox ones with representors, incomparability or other features may be what should be used in decision-making (including when we should make decisions versus investigate further). If so, then this reasoning looks like a fairly distinct species which could warrant its own label.
How to manage deep uncertainty over the long-run ramifications of one's decisions is a challenge across EA-land, particularly acute for longtermists but also elsewhere: most would care about the risk that in the medium term a charitable intervention could prove counter-productive
This makes some sense to me, although if that's all we're talking about I'd prefer to use plain English, since the concept is fairly common. I think this is not all other people are talking about though; see my discussion with MichaelStJules.
FWIW, I don't think "risks" is quite the right word: sure, if we discover a risk which was so powerful and so tractable that we end up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is the fact that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term would end up being worth >>1x the original target good (e.g. "Give Directly reduces extinction risk by reducing poverty, a known cause of conflict"); the surprisingly-high magnitude of an incidental impact is what is really catching my attention, because it suggests there are much better ways to do good.
Although I don't see this with AMF, I do see this in and around animal advocacy. One crucial consideration around here is WAS, particularly an "inverse logic of the larder" (see), such as "per area, a factory farm has a lower intensity of animal suffering than the environment it replaced".
I think you were trying to draw a distinction, but FWIW this feels structurally similar to the "AMF impacts population growth/economic growth" argument to me, and I would give structurally the same response: once you truly believe a factory farm net reduced animal suffering via the wild environment it incidentally destroyed, there are presumably much more efficient ways to destroy wild environments. As a result, it appears we can benefit wild animals much more than farmed animals, and ending factory farming should disappear from your priority list, at least as an end goal (it may come back as an itself-incidental consequence of e.g. "promoting as much concern for animal welfare as possible"). Is your point just that it does not in fact disappear from people's priority lists in this case? That I'm not well-placed to observe or comment on either way.
b) Early (or motivated) stopping across crucial considerations.
This I agree is a problem. I'm not sure if thinking in terms of cluelessness makes it better or worse; I've had a few conversations now where my interlocutor tries to avoid the challenge of cluelessness by presenting an intervention that supposedly has no complex cluelessness attached. So far, I've been unconvinced of every case and think said interlocutor is "stopping early" and missing aspects of impact about which they are complexly clueless (often economic/population growth impacts, since it's actually quite hard to come up with an intervention that doesn't credibly impact one of those).
I guess I think part of encouraging people to continue thinking rather than stop involves getting people comfortable with the fact that there's a perfectly reasonable chance that what they end up doing backfires: everything has risk attached, and trying to entirely avoid such risk is both a fool's errand and a quick path to analysis paralysis. Currently, my impression is that cluelessness-as-used is pushing towards avoidance rather than acceptance, but the sample size is small and so this opinion is very weakly held. I would be more positive on people thinking about this if it seemed to help push them towards acceptance, though.
Given my fairly deflationary OP, I don't think these problems are best described as cluelessness
Point taken. Given that, I hope this wasn't too much of a hijack, or at least was an interesting hijack. I think I misunderstood how literally you intended the statements I quoted and disagreed with in my original comment.
FWIW, I don't think "risks" is quite the right word: sure, if we discover a risk which was so powerful and so tractable that we end up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is the fact that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term would end up being worth >>1x the original target good (e.g. "Give Directly reduces extinction risk by reducing poverty, a known cause of conflict"); the surprisingly-high magnitude of an incidental impact is what is really catching my attention, because it suggests there are much better ways to do good.
(Apologies in advance if I'm rehashing unhelpfully.)
The usual cluelessness scenarios are more about the possibility that there is a powerful lever for impacting the future, and your intended intervention may be pulling it in the wrong direction (rather than a "confirmed discovery"). Say your expectation for the EV of GiveDirectly on conflict has a distribution with a mean of zero but an SD of 10x the magnitude of the benefits you had previously estimated. If it were (e.g.) +10, there's a natural response of "shouldn't we try something which targets this on purpose?"; if it were 0, we wouldn't attend to it further; if it were -10, you wouldn't give to (now net EV = -9) GiveDirectly.
The right response where all three scenarios are credible (plus all the intermediates) but you're unsure which one you're in isn't intuitively obvious (at least to me). Even if (like me) you're sympathetic to pretty doctrinaire standard EV accounts (i.e. you quantify this uncertainty + all the others and just "run the numbers" and take the best EV), this approach seems to ignore this wide variance, which seems to be worthy of further attention.
The OP tries to reconcile this with the standard approach by saying this indeed often should be attended to, but under the guise of value of information rather than something "extra" to orthodoxy. Even though we should still go with our best guess if we have to decide (so expectation-neutral but high-variance terms "cancel out"), we might have the option to postpone our decision and improve our guesswork. Whether to take that option should be governed by how resilient our uncertainty is. If your central estimate of GiveDirectly and conflict would move on average by 2 units if you spent an hour thinking about it, that seems an hour well spent; if you thought you could spend a decade on it and remain where you are, going with the current best guess looks better.
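A rough sketch of that resilience test (the normal-normal model and every number here are assumptions of mine, not from the OP): treat an hour of thought as a noisy signal about the true effect, and ask how far it would move your central estimate on average.

```python
import random

# Sketch (assumed model): prior over the true effect is N(0, PRIOR_SD); an
# hour of research yields one noisy observation with sd SIGNAL_SD. The
# average absolute shift in the posterior mean measures resilience.

random.seed(0)
PRIOR_SD = 10.0
SIGNAL_SD = 5.0   # try 1000.0 to model highly resilient uncertainty

def expected_shift(n=100_000):
    w = PRIOR_SD**2 / (PRIOR_SD**2 + SIGNAL_SD**2)  # conjugate update weight
    total = 0.0
    for _ in range(n):
        truth = random.gauss(0.0, PRIOR_SD)
        signal = random.gauss(truth, SIGNAL_SD)
        total += abs(w * signal)   # posterior mean, given prior mean 0
    return total / n

print(f"expected shift per hour: {expected_shift():.2f}")
# ~7 units with these numbers: not resilient, so investigating looks good.
# With SIGNAL_SD = 1000 the shift is ~0.08: resilient, so decide now.
```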
This can be put in plain(er) English (although familiar-to-EA jargon like "EV" may remain). Yet there are reasons to be hesitant about the orthodox approach (even though I think the case in favour is ultimately stronger): besides the usual bullets, we would be kidding ourselves if we ever really had in our head an uncertainty distribution to arbitrary precision, and maybe our uncertainty isn't even remotely approximated by the objects we manipulate in standard models of it. Or (owed to Andreas), even if so, similar to how rule-consequentialism may be better than act-consequentialism, some other epistemic policy would get better results than applying the orthodox approach in these cases of deep uncertainty.
Insofar as folks are more sympathetic to this, they would not want to be deflationary, and perhaps would urge investment in new techniques/vocab to grapple with the problem. They may also think we don't have a good "answer" yet of what to do in these situations, so may hesitate to give "accept there's uncertainty but don't be paralysed by it" advice that you and I would. Maybe these issues are an open problem we should try and figure out better before pressing on.
(I suppose I should mention I'm an intern at ACE now, although I'm not speaking for them in this comment.)
These are important points, although I'm not sure I agree with your object-level judgements about how EAs are acting.
Also, it seems like you intended to include some links in this comment, but they're missing.
Although "just be indifferent if it is hard to figure out" is a bad technique which finds little favour
Little favour where? I think this is common in shorttermist EA. As you mention, effects on wild animals are rarely included. GiveWell's analyses ignore most of the potentially important consequences of population growth, and the metrics they present to donors on their top charity page are very narrow, too. GiveWell did look at how (child) mortality rates affect population sizes, and I think there are some discussions of some effects scattered around (although not necessarily tied to population size), e.g. I think they believe economic growth is good. Have they written anything about the effects of population growth on climate change and nonhuman animals? What indirect, plausibly negative effects of the interventions they support have they written about? I think it's plausible they just don't find these effects important or bad, although I wouldn't be confident in such a judgement without looking further into it myself.
Even if you thought the population effects from AMF were plausibly bad and you had deep uncertainty about them, you could target those consequences better with different interventions (e.g. donating to a climate change charity or animal charity) or also support family planning to avoid affecting the population size much in expectation.
a) People not tracking when the ground of appeal for an intervention has changed. Although I don't see this with AMF, I do see this in and around animal advocacy. One crucial consideration around here is WAS, particularly an "inverse logic of the larder" (see), such as "per area, a factory farm has a lower intensity of animal suffering than the environment it replaced".
Even if so, it wouldn't follow that the best thing to do would be to be as carnivorous as possible. There are also various lines of response. However, one is to say that the key objective of animal advocacy is to encourage greater concern about animal welfare, so that this can ramify through to benefits in the medium term. Yet if this is the rationale, metrics of "animal suffering averted per $" remain prominent despite having minimal relevance.
I'd be willing to bet factory farming is worse if you include only mammals and birds (and probably fishes), and while including amphibians or invertebrates might lead to deep and moral uncertainty for me (although potentially resolvable with not too much research), it doesn't for some others, and in this case, "animal suffering averted per $" may not be that far off. Furthermore, I think it's reasonable to believe the welfare effects of corporate campaigns for chickens farmed for eggs and meat are far more important than the effects on land use. Stocking density restrictions can also increase the amount of space for the same amount of food, too, although I don't know what the net effect is.
I worry more about diet change. More directly, some animal advocates treat wild fishes spared from consumption as a good outcome (although in this case, the metric is not animal suffering), but it's really not clear either way. I have heard the marginal fish killed for food might be farmed now, though.
If the aim of the game is attitude change, things like shelters and companion animals over changes in factory farmed welfare start looking a lot more credible again in virtue of their greater salience.
Credible as cost-effective interventions for attitude change? I think there's already plenty of concern for companion animals, and it doesn't transfer that well on its own to farmed and wild animals as individuals without advocacy for them. I'd expect farmed animal advocacy to do a much better job for attitude change.
enabling contraceptive access (e.g. more contraceptive access --> fewer births --> less of a poor meat eater problem)
And even ignoring the question of whether factory farming is better or worse than the wildlife it replaces, extra people replace wildlife for reasons other than farming, too, introducing further uncertainty.
Discussions here are typically marred by proponents either completely ignoring considerations on the "other side" of the population growth question, or giving very unequal time to them / sheltering behind uncertainty (e.g. "Considerations X, Y, and Z all tentatively support more population growth; admittedly there's A, B, C, but we do not cover those in the interests of time", yet, if we had, they probably would tentatively oppose more population growth).
Some might just think the considerations on the other side actually aren't very important compared to the considerations supporting their side, which may be why they're on that side in the first place. The shorter-term, measurable effects are also easier to take a position on. I agree that there's a significant risk of confirmation bias here, though, once they pick out one important estimable positive effect (including "direct" effects, like from AMF!).
1) I don't closely follow the current state of play in terms of "shorttermist" evaluation. The reply I hope (e.g.) a GiveWell analyst would make to (e.g.) "Why aren't you factoring in impacts on climate change for these interventions?" would be some mix of:
a) "We have looked at this, and we're confident we can bound the magnitude of this effect to pretty negligible values, so we neglect it in our write-ups etc."
b) "We tried looking into this, but our uncertainty is highly resilient (and our best guess doesn't vary appreciably between interventions), so we get higher yield investigating other things."
c) "We are explicit that our analysis is predicated on moral (e.g. 'human lives are so much more important than animal lives that any impact on the latter is ~moot') or epistemic (e.g. some 'common sense anti-cluelessness' position) claims which either we corporately endorse and/or our audience typically endorses."
Perhaps such hopes would be generally disappointed.
2) Similar to above, I don't object to (re. animals) positions like "Our view is this consideration isn't a concern, as X", or "Given this consideration, we target Y rather than Z", or "Although we aim for A, B is a very good proxy indicator for A which we use in comparative evaluation."
But I at least used to see folks appeal to motivations which obviate (inverse) logic of the larder issues, particularly re. diet change ("Sure, it's actually really unclear whether becoming vegan reduces or increases animal suffering overall, but the reason to be vegan is to signal concern for animals and so influence broader societal attitudes, and this effect is much more important and what we're aiming for"). Yet this overriding motivation typically only "came up" in the context of this discussion, and corollary questions like:
* "Is maximizing short term farmed animal welfare the best way of furthering this crucial goal of attitude change?"
* "Is encouraging carnivores to adopt a vegan diet the best way to influence attitudes?"
* "Shouldn't we try and avoid an intervention like v*ganism which credibly harms those we are urging concern for, as this might look bad/be bad by the lights of many/most non-consequentialist views?"
seemed seldom asked.
Naturally I hope this is a relic of my perhaps jaundiced memory.
a) "We have looked at this, and we're confident we can bound the magnitude of this effect to pretty negligible values, so we neglect it in our write-ups etc."
80,000 Hours and Toby Ord at least think that climate change could be an existential risk, and 80,000 Hours ranks it as a higher priority than global health and poverty, so I think it's not obvious that the effects would be negligible (assuming total utilitarianism, say) if they tried to work through it, although they might still think so. Other responses they might give:
1. GiveWell-recommended charities mitigate x-risk more than they worsen it, or do more good for the far future in other ways. Maybe there's a longtermist case for growth. It doesn't seem 80,000 Hours really believes this, though, or else health in poor countries would be higher up. Also, this seems like suspicious convergence, but they could still think the charities are justified primarily by short-term effects, if they think the long-term ones are plausibly close to 0 in expectation. Or,
2. GiveWell discounts the lives of future people (e.g. with person-affecting views, possibly asymmetric ones, although climate change could still be important on some person-affecting views), which falls under your point c). I think this is a plausible explanation for GiveWell's views based on what I've seen.
I think another good response (although not the one I'd expect) is that they don't need to be confident the charities do more good than harm in expectation, since it's actually very cheap to mitigate any possible climate change risks from them by also donating to effective climate change charities, even if you're deeply uncertain about how important climate change is. I discuss this approach more here. The result would be that you're pretty sure you're doing some decent minimum of good in expectation (from the health effects), whereas just the global health and poverty charity would be plausibly bad (due to climate change), and just the climate change charity would be plausibly close to 0 in expectation (due to deep uncertainty about the importance of climate change).
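A minimal sketch of that hedging logic (toy payoffs assumed by me; w stands for the deeply uncertain moral importance of climate change, ranging over a representor rather than a single credence):

```python
# Toy numbers, illustrative only: the health charity's climate harm and the
# climate charity's benefit both scale with the same uncertain weight w,
# so pairing them largely cancels the climate-sensitive term.

def ev_health(w):   # assumed: direct benefit 10, climate harm 15 * w
    return 10.0 - 15.0 * w

def ev_climate(w):  # assumed: pure climate benefit 12 * w
    return 12.0 * w

for w in [0.0, 0.25, 0.5, 0.75, 1.0]:
    h, c = ev_health(w), ev_climate(w)
    print(f"w={w:.2f}: health={h:+.2f}, climate={c:+.2f}, both={h + c:+.2f}")

# Health alone goes negative for w > 2/3; climate alone is ~0 for small w.
# The pair is 10 - 3w >= +7 for every w in [0, 1]: a guaranteed floor of
# good done, which is the point of the hedge.
```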
But I at least used to see folks appeal to motivations which obviate (inverse/â) logic of the larder issues, particularly re. diet change (âSure, itâs actually really unclear becoming vegan reduces or increases animal suffering overall, but the reason to be vegan is to signal concern for animals and so influence broader societal attitudes, and this effect is much more important and what weâre aiming forâ). Yet this overriding motivation typically only âcame upâ in the context of this discussion
This is fair, and I expect that this still happens, but who was saying this? Is this how the animal charities (or their employees) themselves responded to these concerns? I think itâs plausible many did just think the short term benefits for farmed animals outweighed any effects on wild animals.
âIs maximizing short term farmed animal welfare the best way of furthering this crucial goal of attitude change?â
With respect to things other than diet, I donât think EAs are assuming it is, and they are separately looking for the best approaches to attitude change, so this doesnât seem important to ask. Corporate campaigns are primarily justified on the basis of their welfare effects for farmed animals, and still look good if you also include short term effects on wild animals. Other more promising approaches towards attitude change have been supported, like The Nonhuman Rights Project (previously an ACE Standout charity, and still a grantee), and plant-based substitutes and cultured meat (e.g. GFI).
âIs encouraging carnivores to adopt a vegan diet the best way to influence attitudes?â
I do think itâs among the best ways, depending on the approach, and I think people were already thinking this outside of the context of this discussion. I think eating animals causes speciesism and apathy, and is a significant psychological barrier to helping animals, farmed and wild. Becoming vegan (for many, not all) is a commitment to actively caring about animals, and can become part of someoneâs identity. EAA has put a lot into the development of substitutes, especially through GFI, and these are basically our main hopes for influencing attitudes and also one of our best shots at eliminating factory farming.
I donât think this is suspicious convergence. There are other promising approaches (like the Nonhuman Rights Project), but itâs hard enough to compare them directly that I donât think any are clearly better, so Iâd endorse supporting multiple approaches, including diet change.
âShouldnât we try and avoid an intervention like v*ganism which credibly harms those we are urging concern for, as this might look bad/âbe bad by the lights of many/âmost non-consequentialist views?â
I think the case for veganism is much stronger according to the most common non-consequentialist views (that still care about animals), because they often distinguish
intentional harms and exploitation/âusing others as mere means to ends, cruelty and supporting cruelty, from
incidental harms and harms from omissions, like more nonhuman animals being born because we are not farming some animals more.
Of course, advocacy is not an omission, and what you suggest is also plausible.
For what itâs worth, I think itâs plausible that some interventions chosen for their short-term effects may be promising candidates for longtermist interventions. If you thought that s-risks were important and that larger moral circles mitigate s-risks, then plant-based and cultured animal product substitutes might be promising, since these seem most likely to shift attitudes towards animals the most and fastest, and this would (hopefully) help make the case for wild animals and artificial sentience mext. Maybe direct advocacy for protections for artificial sentience would be best, though, but I wouldnât be surprised if youâd still at least want to target animals somewhat, since this seems more incremental and the step to artificial sentience is greater.
That being said, depending on how urgent s-risks are, how exactly we should approach animal product subsititutes may be different for the short term and long term. Longtermist-focused animal adocacy might be different in other ways, too; see this post.
Furthermore, if growth is fast enough in the future (exponentially? EDIT: It might hit a cubic limit due to physical constraints), and the future growth rate canât reliably be increased, then growth today may have a huge effect on wealth in the long term. The difference XatâYat goes to +/ââ as tââ, if Xâ Y.
If our sphere of influence grows exponentially fast, and our moral circle expands gradually, then you can make a similar argument supporting expanding our moral circle more quickly now.
I think thereâs a difference between the muddy concept of âcause areasâ and actual specific charities/âinterventions here. At the level of cause areas, there could be overlap, because I agree that if you think the Most Important Thing is to expand the moral circle, then there are things in the animal-substitute space that might be interesting, but Iâd be surprised and suspicious (not infinitely suspicious, just moderately so) if the actual bottom-line charity-you-donate-to was the exact same thing as what you got to when trying to minimise the suffering of animals in the present day. Tobias makes a virtually identical point in the post you link to, so we may not disagree, apart from perhaps thinking about the word âinterventionâ differently.
Most animal advocacy efforts are focused on helping animals in the here and now. If we take the longtermist perspective seriously, we will likely arrive at different priorities and focus areas: it would be a remarkable coincidence if short-term-focused work were also ideal from this different perspective.5
Similarly, I could imagine a longtermist concluding that if you look back through history, attempts to e.g. prevent extinction directly or implement better governance seem like they would have been critically hamstrung by a lack of development in the relevant fields, e.g. economics, and the general difficulty of imagining the future. But attempts to grow the economy and advance science seem to have snowballed in a way that impacts the future and also incidentally benefits the present. So in that way you could end up with a longtermist-inspired focus on things like âspeed up economic growthâ or âadvance important researchâ which arguably fall under the ânear-term human-centric welfareâ area on some categorisations of causes. But you didnât get there from that starting point, and again I expect your eventual specific area of focus to be quite different.
If our sphere of influence grows exponentially, and our moral circle expands gradually, then you can make a similar argument supporting expanding it more quickly now
Just want to check I understand what youâre saying here. Are you saying we might want to focus more on expanding growth today because it could have a huge effect on wealth in the long term, or are you saying that we might want to focus more on expanding the moral circle today because we want our future large sphere of influence to be used in a good way rather than a bad way?
The second, we want our future large sphere of influence to be used in a good way. If our sphere of influence grows much faster than our moral circle (and our moral circle still misses a huge number of whom we should rightfully consider significant moral patients; the moral circle could be different in different parts of the universe), then itâs possible that the number of moral patients we could have helped but didnât to grow very quickly, too. Basically, expanding the moral circle would always be urgent, and sooner has a much greater payoff than later.
Not sure if you were referring to that particular post or the whole sequence. If I follow it correctly, I think that particular post is trying to answer the question âhow can we plausibly impact the long-term future, assuming itâs important to do soâ. I think itâs a pretty good treatment of that question!
But I wouldnât mentally file that under cluelessness as I understand the term, because that would also be an issue under ordinary uncertainty. To the extent you explain how cluelessness is different to garden-variety uncertainty and why we canât deal with it in the same way(s), itâs earlier in your sequence of posts, and so far I think I have not been moved away from the objection you try to address in your second post, though if you read the (long) exchange with MichaelStJules above you can see him trying to move me and at minimum succeeding in giving me a better picture of where the disagreements might be.
Edit: I guess what Iâm really saying is that the part of that sequence which seems useful and interesting to meâthe last bitâcould also have been written and would be just as important if we were merely normally-uncertain about the future, as opposed to cluelessly-uncertain.
Similar thoughts would seem to apply also to other possible side-effects of AMF donations: population growth impacts, impacts on animal welfare (wild or farmed), etc. In no case do I have reason to think that AMF is a particularly powerful lever to move those things, and so if I decide that any of them is the Most Important Thing then AMF would not even be on my list of candidate interventions.
Indeed, none of the people I know who think that the far future is massively morally important and that we can tractably impact it focus their do-gooding efforts on AMF, or anything remotely like AMF. To the extent they give money to AMF-like things, it is for appearances' sake, for personal comfort, or as a hedge against their central beliefs about how to do good being very wrong (see e.g. this comment by Aaron Gertler). As a result, cluelessness arguments appear to me to be addressing a constituency that doesn't actually exist, and attempts to resolve cluelessness are a "solution looking for a problem".
If cluelessness arguments are intended to have an impact on the actual people donating to short term interventions as a primary form of doing good, they need to engage with the actual disagreements those people have, namely the questions of whether we can actually predict the size/direction of the longterm consequences despite the natural lack of feedback loops (see e.g. KHorton's comment here or MichaelStJules' comment here), or the empirical question of whether the impacts of our actions do in fact wax or wane over time (see e.g. reallyeli here), or the legion of potential philosophical objections.
Instead, cluelessness arguments appear to me to assume away all the disagreements, and then say "if we assume the future is massively morally relevant compared to the present, that we can tractably and predictably impact said future, and a broadly consequentialist approach, one should be longtermist in order to maximise good done". Which is in fact true, but unexciting once made clear.
Great comment. I agree with most of what you've said, particularly that trying to uncover whether donating to AMF is a great way to improve the long-run future seems a fool's errand.
This is where my quibble comes in:
I don't think this is true. Cluelessness arguments intend to demonstrate that we can't be confident that we are actually doing good when we donate to, say, GiveWell charities, by noting that there are important indirect/long-run effects that we have good reason to expect will occur, and have good reason to suspect are sufficiently important such that they could change the sign of GiveWell's final number if properly included in their analysis. It seems to me that this should have an impact on any person donating to GiveWell charities unless for some reason these people just don't care about indirect/long-run effects of their actions (e.g. they have a very high rate of pure time preference). In reality though you'll be hard-pressed to find EAs who don't think indirect/long-run effects matter, so I would expect cluelessness arguments to have bite for many EAs.
I don't think you need to demonstrate to people that we can tractably influence the far future for them to be impacted by cluelessness arguments. It is certainly possible to think cluelessness arguments are problematic for justifying giving to GiveWell charities, and also to think we can't tractably affect the far future. At that point you may be left in a tough place but, well, tough! You might at that point be forgiven for giving up on EA entirely, as Greaves notes.
(By the way I recall us having a similar convo on Facebook about this, but this is certainly a better place to have it!)
If we have good reason to expect important far future effects to occur when donating to AMF, important enough to change the sign if properly included in the ex ante analysis, that is equivalent to (actually somewhat stronger than) saying we can tractably influence the far future, since by stipulation AMF itself now meaningfully and predictably influences the far future. I currently don't think you can believe the first and not the second, though I'm open to someone showing me where I'm wrong.
There's an important and subtle nuance here.
Note that complex cluelessness only arises when we know something about how the future will be impacted, but don't know enough about these foreseeable impacts to know if they are net good or bad when taken in aggregation. If we knew literally nothing about how the future would be impacted by an intervention this would be a case of simple cluelessness, not complex cluelessness, and Greaves argues we can ignore simple cluelessness.
What Greaves argues is that we don't in fact know literally nothing about the long-run impacts of giving to GiveWell charities. For example, Greaves says we can be pretty sure there will be long-term population effects of giving to AMF, and that these effects will be very important in the moral calculus (so we know something). But she also says, amongst other things, that we can't even be sure whether the long-run effect on population will be positive or negative (so we clearly don't know very much).
So yes, we are in fact predictably influencing the far future by giving to AMF, in that we know we will be affecting the number of people who will live in the future. However, I wouldn't say we are influencing the far future in a "tractable way", because we're not actually making the future better (or worse) in expectation, because we are utterly clueless. Making the far future better or worse in expectation is the sort of thing longtermists want to do, and they claim there are some ways to do so.
If we aren't making the future better or worse in expectation, it's not impacting my decision whether or not to donate to AMF. We can then safely ignore complex cluelessness for the same reason we would ignore simple cluelessness.
Cluelessness only has potential to be interesting if we can plausibly reduce how clueless we are with investigation (this is a lot of Greg's point in the OP); in this sense the simple/complex difference Greaves identifies is not quite the action-relevant distinction. If, having investigated, far future impacts meaningfully alter the AMF analysis, this is precisely because we have decided that AMF meaningfully impacts the far future in at least one way that is good or bad in expectation, i.e. we can tractably impact the far future.
Put simply, if we cannot affect the far future in expectation at all, then logically AMF cannot affect the far future in expectation. If AMF does not affect the far future in expectation, far future effects need not concern its donors.
Saying that the long-run effects of giving to AMF are not positive or negative in expectation is not the same as saying that the long-run effects are zero in expectation. The point of complex cluelessness is that we don't really have a well-formed expectation at all, because there are so many foreseeable complex factors at play.
In simple cluelessness there is symmetry across acts, so we can say the long-run effects are zero in expectation, but in complex cluelessness we can't say this. If you can't say the long-run effects are zero in expectation, then you can't ignore the long-run effects.
I think all of this is best explained in Greaves' original paper.
I'm not sure how to parse this "expectation that is neither positive nor negative nor zero, but still somehow impacts decisions" concept, so maybe that's where my confusion lies. If I try to work with it, my first thought is that not giving money to AMF would seem to have an undefined expectation for the exact same reason that giving money to AMF would; if we wish to avoid actions with undefined expectations (but why?), we're out of luck and this collapses back to being decision-irrelevant.
I have read the paper. I'm surprised you think it's well explained there, since it's pretty dense. Accordingly, I won't pretend I understood all of it. But I do note it ends as follows (emphasis added):
And of course Greaves has since said that she does think we can tractably influence the far future, which resolves the conflict I'm pointing to anyway. In other words, I'm not sure I actually disagree with Greaves-the-individual at all, just with (some of) the people who quote her work.
I would put it as entertaining multiple probability distributions for the same decision, with different expected values. Even if you have ranges of (so not singly defined) expected values, there can still be useful things you can say.
Suppose you have 4 different acts with EVs in the following ranges:
1. [-100, 100] (say this is AMF in our example)
2. [5, 50]
3. [1, 1000]
4. [100, 105]
I would prefer each of 2, 3 and 4 to 1, since they're all robustly positive, while 1 is not. 4 is also definitely better in expectation than 1 and 2 (according to the probability distributions we're considering), since its EV falls completely to the right of each's, so this means neither 1 nor 2 is permissible. Without some other decision criteria or information, 3 and 4 would both be permissible, and it's not clear which is better.
Thanks for the response, but I don't think this saves it. In the below I'm going to treat your ranges as being about the far future impacts of particular actions, but you could substitute "all the impacts of particular actions" if you prefer.
In order for there to be useful things to say, you need to be able to compare the ranges. And if you can rank the ranges ("I would prefer 2 to 1", "I am indifferent between 3 and 4", etc.), and that ranking obeys basic rules like transitivity, that seems equivalent to collapsing all the ranges to single numbers. Collapsing two actions to the same number is fine. So in your example I could arbitrarily assign a "score" of 0 to action 1, a score of 1 to action 2, and scores of 2 to each of 3 and 4.
Then my decision rule just switches from "do the thing with highest expected value" to "do (one of) the things with highest score", and the rest of the argument is essentially unchanged: either every possible action has the same score or it doesn't. If some things have higher scores than others, then replacing a lower score action with a higher score action is a way to tractably make the far future better.
Therefore, claims that we cannot tractably make the far future better force all the scores among all actions being taken to be the same, and if the scores are all the same I think your scoring system is decision-irrelevant; it will never push for action A over action B.
Did I miss an out? It's been a while since I've had to think about weak orderings...
Ya, it's a weak ordering, so you can't necessarily collapse them to single numbers, because of incomparability.
[1, 1000] and [100, 105] are incomparable. If you tried to make them equivalent, you could run into problems, say with [5, 50], which is also incomparable with [1, 1000] but dominated by [100, 105].
[5, 50] < [100, 105]
[1, 1000] incomparable to the other two
If your set of options was just these 3, then, sure, you could say [100, 105] and [1, 1000] are equivalent since neither is dominated, but if you introduce another option which dominates one but not the other, that equivalence would be broken.
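To make the interval talk concrete, here is a minimal sketch of the dominance rule being described (Python; the numbers are the hypothetical ones from the example above, and treating a shared endpoint as dominated is a modelling choice, not something the discussion settles):

```python
# Minimal sketch of dominance between expected-value intervals.
# Interval a dominates interval b when a's worst case is at least
# b's best case (counting a shared endpoint as dominated is a choice).

def dominates(a, b):
    return a != b and a[0] >= b[1]

options = {
    1: (-100, 100),  # the AMF stand-in from the example above
    2: (5, 50),
    3: (1, 1000),
    4: (100, 105),
}

# Permissible (maximal) options: those no other option dominates.
permissible = [
    k for k, ev in options.items()
    if not any(dominates(other, ev)
               for j, other in options.items() if j != k)
]
print(permissible)  # -> [3, 4]; 4 dominates both 1 and 2, nothing dominates 3
```

This reproduces the verdict above: 3 and 4 survive, and they remain incomparable with each other.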
I think there are two ways of interpreting "make the far future better":
compared to doing nothing/business as usual, and
compared to a specific other option.
1 implies 2, but 2 does not imply 1. It might be the case that none of the options look robustly better than doing nothing, but still some options are better than others. For example, writing their expected values as the difference with doing nothing, we could have:
1. [-2, 1]
2. [-1, 2]
3. 0 (do nothing)
and suppose specifically that our distributions are such that 2 always dominates 1, because of some correspondence between pairs of distributions. For example, although I can think up scenarios where the opposite might be true, it seems going out of your way to torture an animal to death (for no particular benefit) is dominated at least by killing them without torturing them. Basically, 1 looks like 2 but with extra suffering and the harms to your character.
In this scenario, we can't reliably make the world better, compared to doing nothing, but we still have that option 2 is better than option 1.
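A minimal sketch of how that can happen (Python, all numbers hypothetical): both options' EV intervals straddle zero relative to doing nothing, yet option 2 beats option 1 under every candidate world-model, because the two options share the same underlying uncertainty:

```python
# Sketch: each option is incomparable with doing nothing (its EV interval
# straddles 0), yet option 2 dominates option 1 model-by-model because
# the two options are coupled. Numbers made up.

candidate_models = {
    "w1": {"option1": -2.0, "option2": -1.0},
    "w2": {"option1": -0.5, "option2": 0.5},
    "w3": {"option1": 1.0, "option2": 2.0},
}

def ev_interval(option):
    evs = [m[option] for m in candidate_models.values()]
    return (min(evs), max(evs))

print(ev_interval("option1"))  # (-2.0, 1.0): straddles 0, not robustly good
print(ev_interval("option2"))  # (-1.0, 2.0): also straddles 0

# But option 2 is better than option 1 under every candidate model:
print(all(m["option2"] > m["option1"]
          for m in candidate_models.values()))  # True
```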
Thanks again. I think my issue is that I'm unconvinced that incomparability applies when faced with ranking decisions. In a forced choice between A and B, I'd generally say you have three options: choose A, choose B, or be indifferent.
Incomparability in this context seems to imply that one could be indifferent between A and B, prefer C to A, yet be indifferent between C and B. That just sounds wrong to me, and is part of what I was getting at when I mentioned transitivity. Curious if you have a concrete example where this feels intuitive?
For the second half, note I said among all actions being taken. If "business as usual" includes action A which is dominated by action B, we can improve things by replacing A with B.
I think if you reject incomparability, you're essentially assuming away complex cluelessness and deep uncertainty. The point in this case is that there are considerations going in each direction, and I don't know how to weigh them against one another (in particular, no evidential symmetry). So, while I might just pick an option if forced to choose between A, B and indifferent, it doesn't reveal a ranking, since you've eliminated the option I'd want to give, "I really don't know". You could force me to choose among wrong answers to other questions, too.
B = business as usual / "doing nothing"
C = working on a cause you have complex cluelessness about, i.e. you're not willing to say it's better or worse than or equivalent to B (e.g. for me, climate change is an example)
A = C but also torturing a dog that was about to be put down anyway (or maybe generally just being mean to others)
I'm willing to accept that C > A, although I could see arguments made for complex cluelessness about that comparison (e.g. through the indirect effects of torturing a dog on your work, that you already have complex cluelessness about). Torturing a dog, however, could be easily dominated by the extra effects of climate change in A or C compared to B, so it doesn't break the complex cluelessness that we already had comparing B and C.
Some other potential examples here, although these depend on how the numbers work out.
That's really useful, thanks; at the very least I now feel like I'm much closer to identifying where the different positions are coming from. I still think I reject incomparability; the example you gave didn't strike me as compelling, though I can imagine it compelling others.
I would say it's reality that's doing the forcing. I have money to donate currently; I can choose to donate it to charity A, or B, or C, etc., or to not donate it. I am forced to choose and the decision has large stakes; "I don't know" is not an option ("wait and do more research" is, but that doesn't seem like it would help here). I am doing a particular job as opposed to all the other things I could be doing with that time; I have made a choice, and for the rest of my life I will continue to be forced to choose what to do with my time. Etc.
It feels intuitively obvious to me that those many high-stakes forced choices can and should be compared in order to determine the all-things-considered best course of action, but it's useful to know that this intuition is apparently not shared.
It's not so much that we should avoid doing it full stop; it's more that if we're looking to do the most good then we should probably avoid doing it, because we don't actually know if it does good. If you don't have your EA hat on then you can justify doing it for other reasons.
I've only properly read it once and it was a while back. I just remember it having quite an effect on me. Maybe I read it a few times to fully grasp it, can't quite remember. I'd be quite surprised if it immediately clicked for me, to be honest. I clearly don't remember it that well, because I forgot that Greaves had that discussion about the psychology of cluelessness, which is interesting.
Just to be clear, I also think that we can tractably influence the far future in expectation (e.g. by taking steps to reduce x-risk). I'm not really sure how that resolves things.
I'm surprised to hear you say you're unsure you disagree with Greaves. Here's another quote from her (from here). I'd imagine you disagree with this?
If you think you can tractably impact the far future in expectation, AMF can impact the far future in expectation. At which point it's reasonable to think that those far future impacts could be predictably negative on further investigation, since we weren't really selecting for them to be positive. I do think trying to resolve the question of whether they are negative is probably a waste of time, for reasons in my first comment, and it sounds like we agree on that, but at that point it's reasonable to say that "AMF could be good or bad, I'm not really sure, because I've chosen to focus my limited time and attention elsewhere". There's no deep or fundamental uncertainty here, just a classic example of triage leading us to prioritise promising-looking paths over unpromising-looking ones.
For the same reason, I don't see anything wrong with that quote from Greaves; coming from someone who thinks we can tractably impact the far future and that the far future is massively morally relevant, it makes a lot of sense. If it came from someone who thought it was impossible to tractably impact the future, I'd want to dig into it more.
On a slightly different note, I can understand why one might not think we can tractably impact the far future, but what about the medium-term future? For example it seems that mitigating climate change is a pretty surefire way to improve the medium-term future (in expectation). Would you agree with that?
If you accept that, then you might also accept that we are clueless about giving to AMF based on its possible medium-term climate change impacts (e.g. maybe giving to AMF will increase populations in the near to medium term, and this will increase carbon emissions). What do you think about this line of reasoning?
Medium-term indirect impacts are certainly worth monitoring, but they have a tendency to be much smaller in magnitude than the primary impacts being measured, in which case they don't pose much of an issue; to the best of my current knowledge, carbon emissions from saving lives are a good example of this.
Of course, one could absolutely think that a dollar spent on climate mitigation is more valuable than a dollar spent saving the lives of the global poor. But that's very different to the cluelessness line of attack; put harshly, it's the difference between choosing not to save a drowning child because there is another pond with even more drowning children and you have to make a hard trolley-problem-like choice, versus choosing to walk on by because who even really knows if saving that child would be good anyway. FWIW, I feel like many people effectively arguing the latter in the abstract would not actually walk on by if faced with that actual physical situation, which if I'm being honest is probably part of why I find it difficult to take these arguments seriously; we are in fact in that situation all the time, whether we realise it or not, and if we wouldn't ignore the drowning child on our doorstep we shouldn't entirely ignore the ones half a world away...unless we are unfortunately forced to do so by the need to save even greater numbers / prevent even greater suffering.
Perhaps, although I wouldn't say it's a priori obvious, so I would have to read more to be convinced.
I didn't raise animal welfare concerns either, which I also think are relevant in the case of saving lives. In other words, I'm not sure you need to raise future effects for cluelessness worries to have bite, although I admit I'm less sure about this.
I certainly wouldn't walk on by, but that's mainly due to a mix of factoring in moral uncertainty (deontologists would think me the devil) and not wanting the guilt of having walked on by. Also, I'm certainly not 100% sure about the cluelessness critique, so there's that too. The cluelessness critique seems sufficient to me to want to search for other ways than AMF to do the most good, but not to literally walk past a drowning child.
This makes some sense, but to take a different example: I've followed a lot of the COVID debates in EA and EA-adjacent circles, and literally not once have I seen cluelessness brought up as a reason to be concerned that maybe saving lives via faster lockdowns or more testing or more vaccines or whatever is not actually a good thing to do. Yet it seems obvious that some level of complex cluelessness applies here if it applies anywhere, and this is a case where simply ignoring COVID efforts and getting on with your daily life (as best one can) is what most people have done, and certainly not something I would expect to leave people struggling with guilt or facing harsh critique from deontologists.
As Greaves herself notes, such situations are ubiquitous, and the fact that cluelessness worries are only being felt in a very small subset of the situations should lead to a certain degree of skepticism that they are in fact what is really going on. But I don't want to spend too much time throwing around speculation about intentions relative to focusing on the object level arguments made, so will leave this train of thought here.
To be fair I would say that taking the cluelessness critique seriously is still quite fringe even within EA (my poll on Facebook provided some indication of this).
With an EA hat on, I want us to sort out COVID because I think COVID is restricting our ability to do certain things that may be robustly good. With a non-EA hat on, I want us to sort out COVID because lockdown is utterly boring (although it actually got me into EA and this forum a bit more, which is good) and I don't want my friends and family (or myself!) to be at risk of dying from it.
Most people have decided to obey lockdowns and be careful in how they interact with others, in order to save lives. In terms of EAs not doing more (e.g. donating money), I think this comes down to the regular argument of COVID not being that neglected and that there are probably better ways to do good. In terms of saving lives, I think deontologists require you to save a drowning child in front of you, but I'm not actually sure how far that obligation extends temporally/spatially.
This is interesting and slightly difficult to think about. I think that when I encounter decisions in non-EA life that I am complexly clueless about, I let my personal gut feeling take over. This doesn't feel acceptable in EA situations because, well, EA is all about not letting personal gut feelings take over. So I guess this is my tentative answer to Greaves' question.
I have complex cluelessness about the effects of climate change on wild animals, which could dominate the effects on humans and farmed animals.
Belatedly:
I read the stakes here differently to you. I don't think folks thinking about cluelessness see it as substantially an exercise in developing a defeater to "everything which isn't longtermism". At least, that isn't my interest, and I think the literature has focused on AMF etc. more as a salient example to explore the concepts, rather than an important subject to apply them to.
The AMF discussions around cluelessness in the OP are intended as a toy example: if you like, deliberating purely on "is it good or bad to give to AMF versus this particular alternative?" instead of "Out of all options, should it be AMF?" Parallel to you, although I do think (per OP) AMF donations are net good, I also think (per the contours of your reply) it should be excluded as a promising candidate for the best thing to donate to: if what really matters is how the deep future goes, and the axes of influence on it accessible at present are things like x-risk, interventions which are only tangentially related to these are so unlikely to be best they can be ruled out ~immediately.
So if that isn't a main motivation, what is? Perhaps something like this:
1) How to manage deep uncertainty over the long-run ramifications of one's decisions is a challenge across EA-land, particularly acute for longtermists but present elsewhere too: most would care about risks that, in the medium term, a charitable intervention could prove counter-productive. In most cases, these mechanisms for something to "backfire" are fairly trivial, but how seriously credible ones should be investigated is up for grabs.
Although "just be indifferent if it is hard to figure out" is a bad technique which finds little favour, I see a variety of mistakes in and around here. E.g.:
a) People not tracking when the ground of appeal for an intervention has changed. Although I don't see this with AMF, I do see this in and around animal advocacy. One crucial consideration around here is WAS (wild animal suffering), particularly an "inverse logic of the larder" (see), such as "per area, a factory farm has a lower intensity of animal suffering than the environment it replaced".
Even if so, it wouldn't follow that the best thing to do would be to be as carnivorous as possible. There are also various lines of response. However, one is to say that the key objective of animal advocacy is to encourage greater concern about animal welfare, so that this can ramify through to benefits in the medium term. Yet if this is the rationale, metrics of "animal suffering averted per $" remain prominent despite having minimal relevance. If the aim of the game is attitude change, things like shelters and companion animals (over changes in factory farmed welfare) start looking a lot more credible again in virtue of their greater salience.
b) Early (or motivated) stopping across crucial considerations. There are a host of ramifications to population growth which point in both directions (e.g. climate change, economic output, increased meat consumption, larger aggregate welfare, etc.). Although very few folks rely on these when considering interventions like AMF (but cf.), they are often being relied upon by those suggesting interventions specifically targeted at fertility: enabling contraceptive access (e.g. more contraceptive access --> fewer births --> less of a poor meat eater problem), or reducing rates of abortion (e.g. less abortion --> more people with worthwhile lives --> greater total utility).
Discussions here are typically marred by proponents either completely ignoring considerations on the "other side" of the population growth question, or giving very unequal time to them/sheltering behind uncertainty (e.g. "Considerations X, Y, and Z all tentatively support more population growth; admittedly there's A, B, C, but we do not cover those in the interests of time", yet, if we had, they probably would tentatively oppose more population growth).
2) Given my fairly deflationary OP, I don't think these problems are best described as cluelessness (versus attending to resilient uncertainty and VoI in fairly orthodox evaluation procedures). But although I think I'm right, I don't think I'm obviously right: if orthodox approaches struggle here, less orthodox ones with representors, incomparability or other features may be what should be used in decision-making (including when we should make decisions versus investigate further). If so, then this reasoning looks like a fairly distinct species which could warrant its own label.
This makes some sense to me, although if that's all we're talking about I'd prefer to use plain English, since the concept is fairly common. I think this is not all other people are talking about, though; see my discussion with MichaelStJules.
FWIW, I don't think "risks" is quite the right word: sure, if we discover a risk which was so powerful and so tractable that we end up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is the fact that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term would end up being worth >>1x the original target good (e.g. "GiveDirectly reduces extinction risk by reducing poverty, a known cause of conflict"); the surprisingly-high magnitude of an incidental impact is what is really catching my attention, because it suggests there are much better ways to do good.
I think you were trying to draw a distinction, but FWIW this feels structurally similar to the "AMF impacts population growth/economic growth" argument to me, and I would give structurally the same response: once you truly believe a factory farm net reduced animal suffering via the wild environment it incidentally destroyed, there are presumably much more efficient ways to destroy wild environments. As a result, it appears we can benefit wild animals much more than farmed animals, and ending factory farming should disappear from your priority list, at least as an end goal (it may come back as an itself-incidental consequence of e.g. "promoting as much concern for animal welfare as possible"). Is your point just that it does not in fact disappear from people's priority lists in this case? That I'm not well-placed to observe or comment on either way.
This I agree is a problem. I'm not sure if thinking in terms of cluelessness makes it better or worse; I've had a few conversations now where my interlocutor tries to avoid the challenge of cluelessness by presenting an intervention that supposedly has no complex cluelessness attached. So far, I've been unconvinced of every case and think said interlocutor is "stopping early" and missing aspects of impact about which they are complexly clueless (often economic/population growth impacts, since it's actually quite hard to come up with an intervention that doesn't credibly impact one of those).
I guess I think part of encouraging people to continue thinking rather than stop involves getting people comfortable with the fact that there's a perfectly reasonable chance that what they end up doing backfires; everything has risk attached, and trying to entirely avoid such risk is both a fool's errand and a quick path to analysis paralysis. Currently, my impression is that cluelessness-as-used is pushing towards avoidance rather than acceptance, but the sample size is small and so this opinion is very weakly held. I would be more positive on people thinking about this if it seemed to help push them towards acceptance though.
Point taken. Given that, I hope this wasn't too much of a hijack, or at least was an interesting hijack. I think I misunderstood how literally you intended the statements I quoted and disagreed with in my original comment.
(Apologies in advance if I'm rehashing unhelpfully.)
The usual cluelessness scenarios are more about the possibility that there is a powerful lever for impacting the future, and your intended intervention may be pulling it in the wrong direction (rather than a "confirmed discovery"). Say your expectation for the EV of GiveDirectly on conflict has a distribution with a mean of zero but an SD of 10x the magnitude of the benefits you had previously estimated. If it were (e.g.) +10, there's a natural response of "shouldn't we try something which targets this on purpose?"; if it were 0, we wouldn't attend to it further; if it were -10, you wouldn't give to (now net EV = -9) GiveDirectly.
The right response where all three scenarios are credible (plus all the intermediates) but you're unsure which one you're in isn't intuitively obvious (at least to me). Even if (like me) you're sympathetic to pretty doctrinaire standard EV accounts (i.e. you quantify this uncertainty + all the others and just "run the numbers" and take the best EV), this approach seems to ignore this wide variance, which seems to be worthy of further attention.
The OP tries to reconcile this with the standard approach by saying this indeed often should be attended to, but under the guise of value of information rather than something "extra" to orthodoxy. Even though we should still go with our best guess if we have to decide now (so expectation-neutral but high-variance terms "cancel out"), we might have the option to postpone our decision and improve our guesswork. Whether to take that option should be governed by how resilient our uncertainty is. If your central estimate of GiveDirectly and conflict would move on average by 2 units if you spent an hour thinking about it, that seems an hour well spent; if you thought you could spend a decade on it and remain where you are, going with the current best guess looks better.
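One way to read that as a crude decision rule (a sketch only; the numbers and the linear "stakes" term are invented for illustration, not part of the OP):

```python
# Crude sketch of the value-of-information test described above: investigate
# further only while the expected movement of your central estimate, scaled
# by what is at stake, beats the cost of the investigator's time.
# All numbers are invented.

def worth_investigating(expected_shift, stakes_per_unit, time_cost):
    """expected_shift: how much (in estimate units) we expect our central
    estimate to move per hour of investigation; stakes_per_unit: value of
    getting the decision right per unit of estimate movement; time_cost:
    value of an hour spent elsewhere."""
    return expected_shift * stakes_per_unit > time_cost

# "An hour's thought would move my GiveDirectly-and-conflict estimate by
# ~2 units" -> worth the hour:
print(worth_investigating(expected_shift=2.0, stakes_per_unit=1.0,
                          time_cost=1.0))  # True

# "A decade of thought would leave my estimate where it is" -> go with
# the current best guess:
print(worth_investigating(expected_shift=0.0, stakes_per_unit=1.0,
                          time_cost=1.0))  # False
```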
This can be put in plain(er) English (although familiar-to-EA jargon like "EV" may remain). Yet there are reasons to be hesitant about the orthodox approach (even though I think the case in favour is ultimately stronger): besides the usual bullets, we would be kidding ourselves if we ever really had in our head an uncertainty distribution to arbitrary precision, and maybe our uncertainty isn't even remotely approximate to objects we manipulate in standard models of the same. Or (owed to Andreas) even if so, similar to how rule-consequentialism may be better than act-consequentialism, some other epistemic policy would get better results than applying the orthodox approach in these cases of deep uncertainty.
Insofar as folks are more sympathetic to this, they would not want to be deflationary, and would perhaps urge investment in new techniques/vocab to grapple with the problem. They may also think we don't have a good "answer" yet for what to do in these situations, so may hesitate to give the "accept there's uncertainty but don't be paralysed by it" advice that you and I would. Maybe these issues are an open problem we should try and figure out better before pressing on.
(I suppose I should mention I'm an intern at ACE now, although I'm not speaking for them in this comment.)
These are important points, although I'm not sure I agree with your object-level judgements about how EAs are acting.
Also, it seems like you intended to include some links in this comment, but they're missing.
Little favour where? I think this is common in shorttermist EA. As you mention, effects on wild animals are rarely included. GiveWell's analyses ignore most of the potentially important consequences of population growth, and the metrics they present to donors on their top charity page are very narrow, too. GiveWell did look at how (child) mortality rates affect population sizes, and I think there are some discussions of some effects scattered around (although not necessarily tied to population size), e.g. I think they believe economic growth is good. Have they written anything about the effects of population growth on climate change and nonhuman animals? What indirect plausibly negative effects of the interventions they support have they written about? I think it's plausible they just don't find these effects important or bad, although I wouldn't be confident in such a judgement without looking further into it myself.
Even if you thought the population effects from AMF were plausibly bad and you had deep uncertainty about them, you could target those consequences better with different interventions (e.g. donating to a climate change charity or animal charity) or also support family planning to avoid affecting the population size much in expectation.
I'd be willing to bet factory farming is worse if you include only mammals and birds (and probably fishes), and while including amphibians or invertebrates might lead to deep and moral uncertainty for me (although potentially resolvable with not too much research), it doesn't for some others, and in this case, "animal suffering averted per $" may not be that far off. Furthermore, I think it's reasonable to believe the welfare effects of corporate campaigns for chickens farmed for eggs and meat are far more important than the effects on land use. Stocking density restrictions can also increase the amount of space for the same amount of food, too, although I don't know what the net effect is.
I worry more about diet change. More directly, some animal advocates treat wild fishes spared from consumption as a good outcome (although in this case, the metric is not animal suffering), but it's really not clear either way. I have heard the marginal fish killed for food might be farmed now, though.
Credible as cost-effective interventions for attitude change? I think there's already plenty of concern for companion animals, and it doesn't transfer that well on its own to farmed and wild animals as individuals without advocacy for them. I'd expect farmed animal advocacy to do a much better job for attitude change.
And even ignoring the question of whether factory farming is better or worse than the wildlife it replaces, extra people replace wildlife for reasons other than farming, too, introducing further uncertainty.
Some might just think the considerations on the other side actually aren't very important compared to the considerations supporting their side, which may be why they're on that side in the first place. The more short-term, measurable effects are also easier to take a position on. I agree that there's a significant risk of confirmation bias here, though, once they pick out one important estimable positive effect (including "direct" effects, like from AMF!).
[Mea culpa re. messing up the formatting again]
1) I don't closely follow the current state of play in terms of "shorttermist" evaluation. The reply I hope (e.g.) a GiveWell analyst would make to (e.g.) "Why aren't you factoring in impacts on climate change for these interventions?" would be some mix of:
a) "We have looked at this, and we're confident we can bound the magnitude of this effect to pretty negligible values, so we neglect it in our write-ups etc."
b) "We tried looking into this, but our uncertainty is highly resilient (and our best guess doesn't vary appreciably between interventions), so we get higher yield investigating other things."
c) "We are explicit that our analysis is predicated on moral (e.g. 'human lives are so much more important than animal lives that any impact on the latter is ~moot') or epistemic (e.g. some 'common sense anti-cluelessness' position) claims which we corporately endorse and/or our audience typically endorses."
Perhaps such hopes would be generally disappointed.
2) Similar to above, I don't object to (re. animals) positions like "Our view is this consideration isn't a concern as X" or "Given this consideration, we target Y rather than Z", or "Although we aim for A, B is a very good proxy indicator for A which we use in comparative evaluation."
But I at least used to see folks appeal to motivations which obviate (inverse) logic of the larder issues, particularly re. diet change ("Sure, it's actually really unclear whether becoming vegan reduces or increases animal suffering overall, but the reason to be vegan is to signal concern for animals and so influence broader societal attitudes, and this effect is much more important and what we're aiming for"). Yet this overriding motivation typically only "came up" in the context of this discussion, and corollary questions like:
* "Is maximizing short term farmed animal welfare the best way of furthering this crucial goal of attitude change?"
* "Is encouraging carnivores to adopt a vegan diet the best way to influence attitudes?"
* "Shouldn't we try and avoid an intervention like v*ganism which credibly harms those we are urging concern for, as this might look bad/be bad by the lights of many/most non-consequentialist views?"
seemed seldom asked.
Naturally I hope this is a relic of my perhaps jaundiced memory.
80,000 Hours and Toby Ord at least think that climate change could be an existential risk, and 80,000 Hours ranks it as a higher priority than global health and poverty, so I think it's not obvious that the effects would be negligible (assuming total utilitarianism, say) if they tried to work through it, although they might still think so. Other responses they might give:
GiveWell-recommended charities mitigate x-risk more than they worsen it, or do more good for the far future in other ways. Maybe there's a longtermist case for growth. It doesn't seem 80,000 Hours really believes this, though, or else health in poor countries would be higher up. Also, this seems like suspicious convergence, but they could still think the charities are justified primarily by short-term effects, if they think the long-term ones are plausibly close to 0 in expectation. Or,
GiveWell discounts the lives of future people (e.g. with person-affecting views, possibly asymmetric ones, although climate change could still be important on some person-affecting views), which falls under your point c). I think this is a plausible explanation for GiveWell's views, based on what I've seen.
I think another good response (although not the one I'd expect) is that they don't need to be confident the charities do more good than harm in expectation, since it's actually very cheap to mitigate any possible risks from climate change from them by also donating to effective climate change charities, even if you're deeply uncertain about how important climate change is. I discuss this approach more here. The result would be that you're pretty sure you're doing some decent minimum of good in expectation (from the health effects), whereas just the global health and poverty charity would be plausibly bad (due to climate change), and just the climate change charity would be plausibly close to 0 in expectation (due to deep uncertainty about the importance of climate change).
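A minimal sketch of that portfolio logic (Python; the charity numbers and the set of candidate climate-importance weights are entirely made up), scoring each allocation by its worst case across the distributions being entertained:

```python
# Sketch of hedging under deep uncertainty: we don't know how much weight
# climate damage deserves, so we score each portfolio by its worst-case EV
# across a set of candidate weights. All numbers are made up.

health_charity = {"health": 10.0, "co2": -3.0}   # saves lives, adds emissions
climate_charity = {"health": 0.0, "co2": 3.0}    # cheaply offsets emissions

candidate_climate_weights = [0.1, 1.0, 5.0]      # deep uncertainty over this

def worst_case_ev(portfolio):
    """portfolio: list of (charity, dollars) pairs."""
    health = sum(d * c["health"] for c, d in portfolio)
    co2 = sum(d * c["co2"] for c, d in portfolio)
    return min(health + w * co2 for w in candidate_climate_weights)

print(worst_case_ev([(health_charity, 1.0)]))    # -5.0: plausibly bad
print(worst_case_ev([(climate_charity, 1.0)]))   # 0.3: plausibly ~0
print(worst_case_ev([(health_charity, 0.5),
                     (climate_charity, 0.5)]))   # 5.0: robustly positive
```

On these invented numbers, the mixed portfolio secures a decent minimum of good even under the most pessimistic climate weight, which is the point being made above.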
This is fair, and I expect that this still happens, but who was saying this? Is this how the animal charities (or their employees) themselves responded to these concerns? I think it's plausible many did just think the short term benefits for farmed animals outweighed any effects on wild animals.
With respect to things other than diet, I don't think EAs are assuming it is, and they are separately looking for the best approaches to attitude change, so this doesn't seem important to ask. Corporate campaigns are primarily justified on the basis of their welfare effects for farmed animals, and still look good if you also include short term effects on wild animals. Other more promising approaches towards attitude change have been supported, like The Nonhuman Rights Project (previously an ACE Standout charity, and still a grantee), and plant-based substitutes and cultured meat (e.g. GFI).
I do think it's among the best ways, depending on the approach, and I think people were already thinking this outside of the context of this discussion. I think eating animals causes speciesism and apathy, and is a significant psychological barrier to helping animals, farmed and wild. Becoming vegan (for many, not all) is a commitment to actively caring about animals, and can become part of someone's identity. EAA has put a lot into the development of substitutes, especially through GFI, and these are basically our main hopes for influencing attitudes and also one of our best shots at eliminating factory farming.
I don't think this is suspicious convergence. There are other promising approaches (like the Nonhuman Rights Project), but it's hard enough to compare them directly that I don't think any are clearly better, so I'd endorse supporting multiple approaches, including diet change.
I think the case for veganism is much stronger according to the most common non-consequentialist views (that still care about animals), because they often distinguish:
* intentional harms and exploitation/using others as mere means to ends, cruelty and supporting cruelty, from
* incidental harms and harms from omissions, like more nonhuman animals being born because we are not farming some animals anymore.
Of course, advocacy is not an omission, and what you suggest is also plausible.
For what it's worth, I think it's plausible that some interventions chosen for their short-term effects may be promising candidates for longtermist interventions. If you thought that s-risks were important and that larger moral circles mitigate s-risks, then plant-based and cultured animal product substitutes might be promising, since these seem most likely to shift attitudes towards animals the most and fastest, and this would (hopefully) help make the case for wild animals and artificial sentience next. Maybe direct advocacy for protections for artificial sentience would be best, though, but I wouldn't be surprised if you'd still at least want to target animals somewhat, since this seems more incremental and the step to artificial sentience is greater.
That being said, depending on how urgent s-risks are, how exactly we should approach animal product substitutes may differ between the short term and the long term. Longtermist-focused animal advocacy might be different in other ways, too; see this post.
Furthermore, if growth is fast enough in the future (exponentially? EDIT: It might hit a cubic limit due to physical constraints), and the future growth rate can't reliably be increased, then growth today may have a huge effect on wealth in the long term. The difference Xa^t − Ya^t goes to ±∞ as t → ∞, if X ≠ Y.
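Spelling out that last step (a minimal derivation, assuming a constant per-period growth factor a > 1, with X and Y the two starting wealth levels):

```latex
% Two wealth trajectories compounding at the same factor a > 1,
% starting from X and Y, diverge without bound:
\[
  X a^t - Y a^t \;=\; (X - Y)\, a^t \;\longrightarrow\; \pm\infty
  \quad \text{as } t \to \infty, \qquad \text{if } X \neq Y.
\]
```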
If our sphere of influence grows exponentially fast, and our moral circle expands gradually, then you can make a similar argument supporting expanding our moral circle more quickly now.
I think there's a difference between the muddy concept of "cause areas" and actual specific charities/interventions here. At the level of cause areas, there could be overlap, because I agree that if you think the Most Important Thing is to expand the moral circle, then there are things in the animal-substitute space that might be interesting, but I'd be surprised and suspicious (not infinitely suspicious, just moderately so) if the actual bottom-line charity-you-donate-to was the exact same thing as what you got to when trying to minimise the suffering of animals in the present day. Tobias makes a virtually identical point in the post you link to, so we may not disagree, apart from perhaps thinking about the word "intervention" differently.
Similarly, I could imagine a longtermist concluding that if you look back through history, attempts to e.g. prevent extinction directly or implement better governance seem like they would have been critically hamstrung by a lack of development in the relevant fields, e.g. economics, and the general difficulty of imagining the future. But attempts to grow the economy and advance science seem to have snowballed in a way that impacts the future and also incidentally benefits the present. So in that way you could end up with a longtermist-inspired focus on things like "speed up economic growth" or "advance important research", which arguably fall under the "near-term human-centric welfare" area on some categorisations of causes. But you didn't get there from that starting point, and again I expect your eventual specific area of focus to be quite different.
Just want to check I understand what you're saying here. Are you saying we might want to focus more on expanding growth today because it could have a huge effect on wealth in the long term, or are you saying that we might want to focus more on expanding the moral circle today because we want our future large sphere of influence to be used in a good way rather than a bad way?
The second, we want our future large sphere of influence to be used in a good way. If our sphere of influence grows much faster than our moral circle (and our moral circle still misses a huge number of whom we should rightfully consider significant moral patients; the moral circle could be different in different parts of the universe), then it's possible for the number of moral patients we could have helped but didn't to grow very quickly, too. Basically, expanding the moral circle would always be urgent, and expanding it sooner has a much greater payoff than later.
This is pretty much pure speculation, though.
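Speculative as it is, a quick toy model shows the shape of the claim. Both growth rates and starting values below are purely hypothetical:

```python
# Pure toy numbers: the sphere of influence (moral patients we *could*
# reach) compounds faster than the moral circle (those we actually help).
influence_growth = 1.05   # hypothetical per-period growth of reach
circle_growth = 1.02      # hypothetical per-period growth of the circle

reachable, included = 1.0, 0.5   # arbitrary starting values
for t in range(201):
    if t % 50 == 0:
        neglected = reachable - included
        print(f"t={t:3d}  reachable={reachable:10.1f}  neglected={neglected:10.1f}")
    reachable *= influence_growth
    included *= circle_growth
# The neglected population soon grows at essentially the influence rate,
# so raising `included` early compounds into far more patients helped later.
```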
That makes sense! It does seem particularly important to have expanded the moral circle a lot before we spread to the stars.
Do you feel that the frame I offered here has no decision-relevance?
Not sure if you were referring to that particular post or the whole sequence. If I follow it correctly, I think that particular post is trying to answer the question "how can we plausibly impact the long-term future, assuming it's important to do so". I think it's a pretty good treatment of that question!
But I wouldn't mentally file that under cluelessness as I understand the term, because that would also be an issue under ordinary uncertainty. To the extent you explain how cluelessness differs from garden-variety uncertainty, and why we can't deal with it in the same way(s), that explanation comes earlier in your sequence of posts, and so far I have not been moved away from the objection you try to address in your second post. If you read the (long) exchange with MichaelStJules above, you can see him trying to move me, and at minimum succeeding in giving me a better picture of where the disagreements might be.
Edit: I guess what I'm really saying is that the part of that sequence which seems useful and interesting to me (the last bit) could also have been written, and would be just as important, if we were merely normally-uncertain about the future, as opposed to cluelessly-uncertain.
Is there a tl;dr of the distinction you're drawing between normal uncertainty and clueless uncertainty?