If we aren’t making the future better or worse in expectation, it’s not impacting my decision whether or not to donate to AMF. We can then safely ignore complex cluelessness for the same reason we would ignore simple cluelessness.
Saying that the long-run effects of giving to AMF are not positive or negative in expectation is not the same as saying that the long-run effects are zero in expectation. The point of complex cluelessness is that we don't really have a well-formed expectation at all, because there are so many foreseeable complex factors at play.
In simple cluelessness there is evidential symmetry across acts, so we can say the long-run effects are zero in expectation, but in complex cluelessness we can't say this. If you can't say the long-run effects are zero in expectation, then you can't ignore the long-run effects. I think all of this is best explained in Greaves' original paper.
I'm not sure how to parse this 'expectation that is neither positive, nor negative, nor zero, but still somehow impacts decisions' concept, so maybe that's where my confusion lies. If I try to work with it, my first thought is that not giving money to AMF would seem to have an undefined expectation for the exact same reason that giving money to AMF would have an undefined expectation; if we wish to avoid actions with undefined expectations (but why?), we're out of luck and this collapses back to being decision-irrelevant.
I have read the paper. I’m surprised you think it’s well-explained there, since it’s pretty dense. Accordingly, I won’t pretend I understood all of it. But I do note it ends as follows (emphasis added):
It is not at all obvious on reflection, however, what the phenomenon of cluelessness really amounts to. In particular, it (at least at first sight) seems difficult to capture within an orthodox Bayesian model, according to which any given rational agent simply settles on some particular precise credence function, and the subjective betterness facts follow. Here, I have explored various possibilities within an ‘imprecise-credence’ model. Of these, the most promising account – on the assumption that the phenomenon of cluelessness really is a genuine and deep one – involved a ‘supervaluational’ account of the connection between imprecise credences and permissibility. It is also not at all obvious, however, how deep or important the phenomenon of cluelessness really is. In the context of effective altruism, it strikes many as compelling and as deeply problematic. However, mundane, everyday cases that have a similar structure in all respects I have considered are also ubiquitous, and few regard any resulting sense of cluelessness as deeply problematic in the latter cases. It may therefore be that the diagnosis of would-be effective altruists’ sense of cluelessness, in terms of psychology and/or the theory of rationality, lies quite elsewhere.
And of course Greaves has since said that she does think we can tractably influence the far future, which resolves the conflict I’m pointing to anyway. In other words, I’m not sure I actually disagree with Greaves-the-individual at all, just with (some of) the people who quote her work.
Perhaps we could find some other interventions for which that’s the case to a much lesser extent. If we deliberately try to beneficially influence the course of the very far future, can we find things where we more robustly have at least some clue that what we’re doing is beneficial and of how beneficial it is? I think the answer is yes.
I'm not sure how to parse this 'expectation that is neither positive, nor negative, nor zero, but still somehow impacts decisions' concept, so maybe that's where my confusion lies. If I try to work with it, my first thought is that not giving money to AMF would seem to have an undefined expectation for the exact same reason that giving money to AMF would have an undefined expectation; if we wish to avoid actions with undefined expectations (but why?), we're out of luck and this collapses back to being decision-irrelevant.
I would put it as entertaining multiple probability distributions for the same decision, with different expected values. Even if you have ranges of (so not singly defined) expected values, there can still be useful things you can say.
Suppose you have 4 different acts with EVs in the following ranges:
[-100, 100] (say this is AMF in our example)
[5, 50]
[1, 1000]
[100, 105]
I would prefer each of 2, 3 and 4 to 1, since they're all robustly positive, while 1 is not. 4 is also definitely better in expectation than 1 and 2 (according to the probability distributions we're considering), since its EV range falls completely to the right of each of theirs, which means neither 1 nor 2 is permissible. Without some other decision criteria or information, 3 and 4 would both be permissible, and it's not clear which is better.
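To make that concrete, here's a rough Python sketch of the dominance rule I have in mind (purely illustrative; I'm treating a shared endpoint as still counting as 'completely to the right', which is why 4's [100, 105] rules out 1's [-100, 100]):

```python
# Illustrative sketch only: expected-value ranges for the four acts above,
# under a set of probability distributions we can't choose between.
acts = {
    1: (-100, 100),  # AMF in this example
    2: (5, 50),
    3: (1, 1000),
    4: (100, 105),
}

def dominates(a, b):
    # a dominates b if a's EV range sits entirely at or above b's range
    return a[0] >= b[1]

# An act is permissible (on this rule) if no other act dominates it.
permissible = [n for n, ev in acts.items()
               if not any(dominates(other, ev)
                          for m, other in acts.items() if m != n)]
print(permissible)  # [3, 4]: act 4 dominates acts 1 and 2; 3 and 4 both remain
```

The separate point that 2 and 3 are 'robustly positive' while 1 is not is a comparison against doing nothing (EV 0), which this dominance check on its own doesn't capture.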
Thanks for the response, but I don’t think this saves it. In the below I’m going to treat your ranges as being about the far future impacts of particular actions, but you could substitute for ‘all the impacts of particular actions’ if you prefer.
In order for there to be useful things to say, you need to be able to compare the ranges. And if you can rank the ranges (“I would prefer 2 to 1”, “I am indifferent between 3 and 4”, etc.), and that ranking obeys basic rules like transitivity, that seems equivalent to collapsing all the ranges to single numbers. Collapsing two actions to the same number is fine. So in your example I could arbitrarily assign a 'score' of 0 to action 1, a score of 1 to action 2, and scores of 2 to each of 3 and 4.
Then my decision rule just switches from ‘do the thing with highest expected value’ to ‘do (one of) the things with highest score’, and the rest of the argument is essentially unchanged: either every possible action has the same score or it doesn’t. If some things have higher scores than others, then replacing a lower score action with a higher score action is a way to tractably make the far future better.
Therefore, claims that we cannot tractably make the far future better force all the scores among all actions being taken to be the same, and if the scores are all the same I think your scoring system is decision-irrelevant; it will never push for action A over action B.
Did I miss an out? It's been a while since I've had to think about weak orderings...
Ya, it’s a weak ordering, so you can’t necessarily collapse them to single numbers, because of incomparability.
[1, 1000] and [100, 105] are incomparable. If you tried to make them equivalent, you could run into problems, say with [5, 50], which is also incomparable with [1, 1000] but dominated by [100, 105].
[5, 50] < [100, 105]
[1, 1000] incomparable to the other two
If your set of options was just these 3, then, sure, you could say [100, 105] and [1, 1000] are equivalent since neither is dominated, but if you introduce another option which dominates one but not the other, that equivalence would be broken.
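To spell that out with the same toy numbers (again just an illustrative sketch, not a claim about any real intervention): collapsing the two undominated options to one shared score stops working as soon as a hypothetical new option, say [200, 300], enters the choice set.

```python
# The three ranges from above; 'dominates' means the whole range sits at or
# above the other's range, as in the earlier sketch.
options = {"[5, 50]": (5, 50), "[100, 105]": (100, 105), "[1, 1000]": (1, 1000)}

def maximal(opts):
    # an option is maximal if no other option's range sits entirely at or above its own
    return [n for n, ev in opts.items()
            if not any(o[0] >= ev[1] for m, o in opts.items() if m != n)]

print(maximal(options))
# ['[100, 105]', '[1, 1000]'] -- neither is dominated, so both look "equivalent"

options["[200, 300]"] = (200, 300)   # hypothetical new option
print(maximal(options))
# ['[1, 1000]', '[200, 300]'] -- the new option dominates [100, 105] but not
# [1, 1000], so the two could not have been given a single shared score
```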
Therefore, claims that we cannot tractably make the far future better force all the scores among all actions being taken to be the same, and if the scores are all the same I think your scoring system is decision-irrelevant; it will never push for action A over action B.
I think there are two ways of interpreting “make the far future better”:
compared to doing nothing/business as usual, and
compared to a specific other option.
1 implies 2, but 2 does not imply 1. It might be the case that none of the options look robustly better than doing nothing, but still some options are better than others. For example, writing their expected values as the difference with doing nothing, we could have:
[-2, 1]
[-1, 2]
0 (do nothing)
and suppose specifically that our distributions are such that 2 always dominates 1, because of some correspondence between pairs of distributions. For example, although I can think up scenarios where the opposite might be true, it seems going out of your way to torture an animal to death (for no particular benefit) is dominated at least by killing them without torturing them. Basically, 1 looks like 2 but with extra suffering and the harms to your character.
In this scenario, we can’t reliably make the world better, compared to doing nothing, but we still have that option 2 is better than option 1.
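Here's a minimal numerical sketch of that situation (the credence functions and numbers are made up purely to illustrate the structure): neither option robustly beats doing nothing, yet option 2 beats option 1 under every credence function considered.

```python
# Hypothetical credence functions we can't choose between, each assigning an
# expected value to options 1 and 2 (written as differences from doing nothing).
# Option 2 is option 1 without the extra suffering, so it does better under
# every distribution considered.
evs = {
    "p1": {"opt1": -2.0, "opt2": -1.0},
    "p2": {"opt1": -0.5, "opt2":  0.5},
    "p3": {"opt1":  1.0, "opt2":  2.0},
}

robust_1 = all(v["opt1"] > 0 for v in evs.values())              # False
robust_2 = all(v["opt2"] > 0 for v in evs.values())              # False
two_beats_one = all(v["opt2"] > v["opt1"] for v in evs.values()) # True

print(robust_1, robust_2, two_beats_one)
# Neither option robustly improves on doing nothing, yet option 2 dominates
# option 1 across every credence function considered.
```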
Thanks again. I think my issue is that I’m unconvinced that incomparability applies when faced with ranking decisions. In a forced choice between A and B, I’d generally say you have three options: choose A, choose B, or be indifferent.
Incomparability in this context seems to imply that one could be indifferent between A and B, prefer C to A, yet be indifferent between C and B. That just sounds wrong to me, and is part of what I was getting at when I mentioned transitivity. Curious if you have a concrete example where this feels intuitive?
For the second half, note I said among all actions being taken. If ‘business as usual’ includes action A which is dominated by action B, we can improve things by replacing A with B.
I think my issue is that I’m unconvinced that incomparability applies when faced with ranking decisions. In a forced choice between A and B, I’d generally say you have three options: choose A, choose B, or be indifferent.
I think if you reject incomparability, you’re essentially assuming away complex cluelessness and deep uncertainty. The point in this case is that there are considerations going in each direction, and I don’t know how to weigh them against one another (in particular, no evidential symmetry). So, while I might just pick an option if forced to choose between A, B and indifferent, it doesn’t reveal a ranking, since you’ve eliminated the option I’d want to give, “I really don’t know”. You could force me to choose among wrong answers to other questions, too.
That just sounds wrong to me, and is part of what I was getting at when I mentioned transitivity. Curious if you have a concrete example where this feels intuitive?
B = business as usual / “doing nothing”
C = working on a cause you have complex cluelessness about, i.e. you're not willing to say it's better or worse than or equivalent to B (e.g. for me, climate change is an example)
A = C but also torturing a dog that was about to be put down anyway (or maybe generally just being mean to others)
I’m willing to accept that C>A, although I could see arguments made for complex cluelessness about that comparison (e.g. through the indirect effects of torturing a dog on your work, that you already have complex cluelessness about). Torturing a dog, however, could be easily dominated by the extra effects of climate change in A or C compared to B, so it doesn’t break the complex cluelessness that we already had comparing B and C.
Some other potential examples here, although these depend on how the numbers work out.
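If it helps, the same B/C/A pattern can be put into toy numbers (purely illustrative; nothing here is a claim about climate work itself): C's expected value straddles zero across the credence functions considered, A is C plus a small sure harm, and pairwise comparison gives exactly the pattern above.

```python
# Toy expected values under three credence functions, relative to
# B = business as usual (EV 0 by definition).
evs_C = {"p1": -10.0, "p2": 0.0, "p3": 10.0}    # complex cluelessness about C vs B
evs_A = {p: v - 1.0 for p, v in evs_C.items()}  # A = C plus a sure extra harm
evs_B = {p: 0.0 for p in evs_C}

def prefer(x, y):
    # x is preferred to y iff x beats y under every credence function considered
    return all(x[p] > y[p] for p in x)

print(prefer(evs_C, evs_A))                        # True:  C > A
print(prefer(evs_C, evs_B), prefer(evs_B, evs_C))  # False, False: C and B incomparable
print(prefer(evs_A, evs_B), prefer(evs_B, evs_A))  # False, False: A and B incomparable
# Reading "incomparable" as "indifferent" would give B ~ C and B ~ A alongside
# C > A, which is the transitivity failure at issue.
```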
I think if you reject incomparability, you’re essentially assuming away complex cluelessness and deep uncertainty.
That’s really useful, thanks, at the very least I now feel like I’m much closer to identifying where the different positions are coming from. I still think I reject incomparability; the example you gave didn’t strike me as compelling, though I can imagine it compelling others.
So, while I might just pick an option if forced to choose between A, B and indifferent, it doesn’t reveal a ranking, since you’ve eliminated the option I’d want to give, “I really don’t know”. You could force me to choose among wrong answers to other questions, too.
I would say it’s reality that’s doing the forcing. I have money to donate currently; I can choose to donate it to charity A, or B, or C, etc., or to not donate it. I am forced to choose and the decision has large stakes; ‘I don’t know’ is not an option (‘wait and do more research’ is, but that doesn’t seem like it would help here). I am doing a particular job as opposed to all the other things I could be doing with that time; I have made a choice and for the rest of my life I will continue to be forced to choose what to do with my time. Etc.
It feels intuitively obvious to me that those many high-stakes forced choices can and should be compared in order to determine the all-things-considered best course of action, but it’s useful to know that this intuition is apparently not shared.
if we wish to avoid actions with undefined expectations (but why?), we’re out of luck.
It’s not so much that we should avoid doing it full stop, it’s more that if we’re looking to do the most good then we should probably avoid doing it because we don’t actually know if it does good. If you don’t have your EA hat on then you can justify doing it for other reasons.
I have read the paper. I’m surprised you think it’s well-explained there, since it’s pretty dense.
I’ve only properly read it once and it was a while back. I just remember it having quite an effect on me. Maybe I read it a few times to fully grasp it, can’t quite remember. I’d be quite surprised if it immediately clicked for me to be honest. I clearly don’t remember it that well because I forgot that Greaves had that discussion about the psychology of cluelessness which is interesting.
And of course Greaves has since said that she does think we can tractably influence the far future, which resolves the conflict I’m pointing to anyway. In other words, I’m not sure I actually disagree with Greaves-the-individual at all, just with (some of) the people who quote her work.
Just to be clear I also think that we can tractably influence the far future in expectation (e.g. by taking steps to reduce x-risk). I’m not really sure how that resolves things.
I’m surprised to hear you say you’re unsure you disagree with Greaves. Here’s another quote from her (from here). I’d imagine you disagree with this?
What do we get when we put all those three observations together? Well, what I get is a deep seated worry about the extent to which it really makes sense to be guided by cost-effectiveness analyses of the kinds that are provided by meta-charities like GiveWell. If what we have is a cost-effectiveness analysis that focuses on a tiny part of the thing we care about, and if we basically know that the real calculation—the one we actually care about—is going to be swamped by this further future stuff that hasn’t been included in the cost-effectiveness analysis; how confident should we be really that the cost-effectiveness analysis we’ve got is any decent guide at all to how we should be spending our money? That’s the worry that I call ‘cluelessness’. We might feel clueless about how to spend money even after reading GiveWell’s website.
Just to be clear I also think that we can tractably influence the far future in expectation (e.g. by taking steps to reduce x-risk). I’m not really sure how that resolves things.
If you think you can tractably impact the far future in expectation, AMF can impact the far future in expectation. At which point it’s reasonable to think that those far future impacts could be predictably negative on further investigation, since we weren’t really selecting for them to be positive. I do think trying to resolve the question of whether they are negative is probably a waste of time for reasons in my first comment, and it sounds like we agree on that, but at that point it’s reasonable to say that ‘AMF could be good or bad, I’m not really sure, because I’ve chosen to focus my limited time and attention elsewhere’. There’s no deep or fundamental uncertainty here, just a classic example of triage leading us to prioritise promising-looking paths over unpromising-looking ones.
For the same reason, I don’t see anything wrong with that quote from Greaves; coming from someone who thinks we can tractably impact the far future and that the far future is massively morally relevant, it makes a lot of sense. If it came from someone who thought it was impossible to tractably impact the future, I’d want to dig into it more.
On a slightly different note, I can understand why one might not think we can tractably impact the far future, but what about the medium-term future? For example it seems that mitigating climate change is a pretty surefire way to improve the medium-term future (in expectation). Would you agree with that?
If you accept that then you might also accept that we are clueless about giving to AMF based on its possible medium-term climate change impacts (e.g. maybe giving to AMF will increase populations in the near to medium term, and this will increase carbon emissions). What do you think about this line of reasoning?
Medium-term indirect impacts are certainly worth monitoring, but they have a tendency to be much smaller in magnitude than the primary impacts being measured, in which case they don't pose much of an issue; to the best of my current knowledge, carbon emissions from saving lives are a good example of this.
Of course, one could absolutely think that a dollar spent on climate mitigation is more valuable than a dollar spent saving the lives of the global poor. But that’s very different to the cluelessness line of attack; put harshly it’s the difference between choosing not to save a drowning child because there is another pond with even more drowning children and you have to make a hard trolley-problem-like choice, versus choosing to walk on by because who even really knows if saving that child would be good anyway. FWIW, I feel like many people effectively arguing the latter in the abstract would not actually walk on by if faced with that actual physical situation, which if I’m being honest is probably part of why I find it difficult to take these arguments seriously; we are in fact in that situation all the time, whether we realise it or not, and if we wouldn’t ignore the drowning child on our doorstep we shouldn’t entirely ignore the ones half a world away...unless we are unfortunately forced to do so by the need to save even greater numbers / prevent even greater suffering.
Medium-term indirect impacts are certainly worth monitoring, but they have a tendency to be much smaller in magnitude than the primary impacts being measured, in which case they don't pose much of an issue; to the best of my current knowledge, carbon emissions from saving lives are a good example of this.
Perhaps, although I wouldn’t say it’s a priori obvious, so I would have to read more to be convinced.
I didn't raise animal welfare concerns either, which I also think are relevant in the case of saving lives. In other words, I'm not sure you need to raise future effects for cluelessness worries to have bite, although I admit I'm less sure about this.
put harshly it’s the difference between choosing not to save a drowning child because there is another pond with even more drowning children and you have to make a hard trolley-problem-like choice, versus choosing to walk on by because who even really knows if saving that child would be good anyway. FWIW, I feel like many people effectively arguing the latter in the abstract would not actually walk on by if faced with that actual physical situation, which if I’m being honest is probably part of why I find it difficult to take these arguments seriously;
I certainly wouldn’t walk on by, but that’s mainly due to a mix of factoring in moral uncertainty (deontologists would think me the devil) and not wanting the guilt of having walked on by. Also I’m certainly not 100% sure about the cluelessness critique, so there’s that too. The cluelessness critique seems sufficient to me to want to search for other ways than AMF to do the most good, but not to literally walk past a drowning child.
I certainly wouldn’t walk on by, but that’s mainly due to a mix of factoring in moral uncertainty (deontologists would think me the devil) and not wanting the guilt of having walked on by.
This makes some sense, but to take a different example, I’ve followed a lot of the COVID debates in EA and EA-adjacent circles, and literally not once have I seen cluelessness brought up as a reason to be concerned that maybe saving lives via faster lockdowns or more testing or more vaccines or whatever is not actually a good thing to do. Yet it seems obvious that some level of complex cluelessness applies here if it applies anywhere, and this is a case where simply ignoring COVID efforts and getting on with your daily life (as best one can) is what most people have done, and certainly not something I would expect to leave people struggling with guilt or facing harsh critique from deontologists.
As Greaves herself notes, such situations are ubiquitous, and the fact that cluelessness worries are only being felt in a very small subset of the situations should lead to a certain degree of skepticism that they are in fact what is really going on. But I don't want to spend too much time throwing around speculation about intentions relative to focusing on the object-level arguments made, so will leave this train of thought here.
This makes some sense, but to take a different example, I’ve followed a lot of the COVID debates in EA and EA-adjacent circles, and literally not once have I seen cluelessness brought up as a reason to be concerned
To be fair I would say that taking the cluelessness critique seriously is still quite fringe even within EA (my poll on Facebook provided some indication of this).
With an EA hat on I want us to sort out COVID because I think COVID is restricting our ability to do certain things that may be robustly good. With a non-EA hat on I want us to sort out COVID because lockdown is utterly boring (although it actually got me into EA and this forum a bit more which is good) and I don’t want my friends and family (or myself!) to be at risk of dying from it.
and this is a case where simply ignoring COVID efforts and getting on with your daily life (as best one can) is what most people have done, and certainly not something I would expect to leave people struggling with guilt or facing harsh critique from deontologists.
Most people have decided to obey lockdowns and be careful in how they interact with others, in order to save lives. In terms of EAs not doing more (e.g. donating money) I think this comes down to the regular argument of COVID not being that neglected and that there are probably better ways to do good. In terms of saving lives, I think deontologists require you to save a drowning child in front of you, but I’m not actually sure how far that obligation extends temporally/spatially.
As Greaves herself notes, such situations are ubiquitous, and the fact that cluelessness worries are only being felt in a very small subset of the situations should lead to a certain degree of skepticism that they are in fact what is really going on.
This is interesting and slightly difficult to think about. I think that when I encounter decisions in non-EA life that I am complexly clueless about, I let my personal gut feeling take over. This doesn't feel acceptable in EA situations because, well, EA is all about not letting personal gut feelings take over. So I guess this is my tentative answer to Greaves' question.
For example it seems that mitigating climate change is a pretty surefire way to improve the medium-term future (in expectation). Would you agree with that?
I have complex cluelessness about the effects of climate change on wild animals, which could dominate the effects on humans and farmed animals.