if we wish to avoid actions with undefined expectations (but why?), we’re out of luck.
It’s not so much that we should avoid doing it full stop; it’s more that, if we’re looking to do the most good, we should probably avoid doing it because we don’t actually know whether it does good. If you don’t have your EA hat on, you can justify doing it for other reasons.
I have read the paper. I’m surprised you think it’s well-explained there, since it’s pretty dense.
I’ve only properly read it once, and that was a while back. I just remember it having quite an effect on me. Maybe I read it a few times to fully grasp it; I can’t quite remember, though I’d be quite surprised if it immediately clicked for me. I clearly don’t remember it that well, because I’d forgotten that Greaves had that interesting discussion about the psychology of cluelessness.
And of course Greaves has since said that she does think we can tractably influence the far future, which resolves the conflict I’m pointing to anyway. In other words, I’m not sure I actually disagree with Greaves-the-individual at all, just with (some of) the people who quote her work.
Just to be clear I also think that we can tractably influence the far future in expectation (e.g. by taking steps to reduce x-risk). I’m not really sure how that resolves things.
I’m surprised to hear you say you’re unsure you disagree with Greaves. Here’s another quote from her (from here). I’d imagine you disagree with this?
What do we get when we put all those three observations together? Well, what I get is a deep seated worry about the extent to which it really makes sense to be guided by cost-effectiveness analyses of the kinds that are provided by meta-charities like GiveWell. If what we have is a cost-effectiveness analysis that focuses on a tiny part of the thing we care about, and if we basically know that the real calculation—the one we actually care about—is going to be swamped by this further future stuff that hasn’t been included in the cost-effectiveness analysis; how confident should we be really that the cost-effectiveness analysis we’ve got is any decent guide at all to how we should be spending our money? That’s the worry that I call ‘cluelessness’. We might feel clueless about how to spend money even after reading GiveWell’s website.
Just to be clear I also think that we can tractably influence the far future in expectation (e.g. by taking steps to reduce x-risk). I’m not really sure how that resolves things.
If you think you can tractably impact the far future in expectation, AMF can impact the far future in expectation. At which point it’s reasonable to think that those far future impacts could be predictably negative on further investigation, since we weren’t really selecting for them to be positive. I do think trying to resolve the question of whether they are negative is probably a waste of time for reasons in my first comment, and it sounds like we agree on that, but at that point it’s reasonable to say that ‘AMF could be good or bad, I’m not really sure, because I’ve chosen to focus my limited time and attention elsewhere’. There’s no deep or fundamental uncertainty here, just a classic example of triage leading us to prioritise promising-looking paths over unpromising-looking ones.
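To put that schematically (this decomposition is just my own shorthand for the point, not notation from Greaves’s paper): the overall expected value splits into a measured near-term term and an unmodelled far-future term,

$$\mathbb{E}[V] \;=\; \underbrace{\mathbb{E}[V_{\text{near}}]}_{\text{what the cost-effectiveness analysis estimates}} \;+\; \underbrace{\mathbb{E}[V_{\text{far}}]}_{\text{not modelled}},$$

and if the second term can be large in magnitude, and we never selected the intervention for that term’s sign, then the sign of $\mathbb{E}[V]$ is an open question even when the first term is estimated well.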
For the same reason, I don’t see anything wrong with that quote from Greaves; coming from someone who thinks we can tractably impact the far future and that the far future is massively morally relevant, it makes a lot of sense. If it came from someone who thought it was impossible to tractably impact the future, I’d want to dig into it more.
On a slightly different note, I can understand why one might not think we can tractably impact the far future, but what about the medium-term future? For example it seems that mitigating climate change is a pretty surefire way to improve the medium-term future (in expectation). Would you agree with that?
If you accept that, then you might also accept that we are clueless about giving to AMF based on its possible medium-term climate change impacts (e.g. maybe giving to AMF will increase populations in the near to medium term, and this will increase carbon emissions). What do you think about this line of reasoning?
Medium-term indirect impacts are certainly worth monitoring, but they tend to be much smaller in magnitude than the primary impacts being measured, in which case they don’t pose much of an issue; to the best of my current knowledge, carbon emissions from saving lives are a good example of this.
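As a toy illustration of what I mean by the magnitudes (every number below is a made-up placeholder, purely to show the shape of the comparison, not a real estimate):

```python
# Back-of-envelope sketch: direct benefit of saving a life vs. the indirect
# climate cost of the extra emissions that life plausibly implies.
# ALL numbers are hypothetical placeholders, not real estimates.

direct_benefit_dalys = 30.0   # assumed DALYs averted per life saved

extra_tco2_per_year = 0.1     # assumed added tCO2/year per person (low-income setting)
emission_years = 40.0         # assumed number of years of added emissions
dalys_lost_per_tco2 = 1e-4    # assumed health/welfare burden per tonne of CO2

indirect_cost_dalys = extra_tco2_per_year * emission_years * dalys_lost_per_tco2

print(f"direct benefit: {direct_benefit_dalys} DALYs")
print(f"indirect cost:  {indirect_cost_dalys} DALYs")
print(f"direct/indirect ratio: {direct_benefit_dalys / indirect_cost_dalys:,.0f}x")
```

With placeholders like these the direct term dominates by orders of magnitude; the values are invented, but this is the kind of comparison I have in mind when I say the indirect impacts tend to be much smaller.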
Of course, one could absolutely think that a dollar spent on climate mitigation is more valuable than a dollar spent saving the lives of the global poor. But that’s very different to the cluelessness line of attack; put harshly, it’s the difference between choosing not to save a drowning child because there is another pond with even more drowning children and you have to make a hard trolley-problem-like choice, versus choosing to walk on by because who even really knows if saving that child would be good anyway. FWIW, I feel like many people effectively arguing the latter in the abstract would not actually walk on by if faced with that actual physical situation, which, if I’m being honest, is probably part of why I find it difficult to take these arguments seriously; we are in fact in that situation all the time, whether we realise it or not, and if we wouldn’t ignore the drowning child on our doorstep, we shouldn’t entirely ignore the ones half a world away...unless we are unfortunately forced to do so by the need to save even greater numbers / prevent even greater suffering.
Medium-term indirect impacts are certainly worth monitoring, but they tend to be much smaller in magnitude than the primary impacts being measured, in which case they don’t pose much of an issue; to the best of my current knowledge, carbon emissions from saving lives are a good example of this.
Perhaps, although I wouldn’t say it’s a priori obvious, so I would have to read more to be convinced.
I didn’t raise animal welfare concerns either, which I also think are relevant in the case of saving lives. In other words, I’m not sure you need to raise future effects for cluelessness worries to have bite, although I admit I’m less sure about this.
put harshly, it’s the difference between choosing not to save a drowning child because there is another pond with even more drowning children and you have to make a hard trolley-problem-like choice, versus choosing to walk on by because who even really knows if saving that child would be good anyway. FWIW, I feel like many people effectively arguing the latter in the abstract would not actually walk on by if faced with that actual physical situation, which, if I’m being honest, is probably part of why I find it difficult to take these arguments seriously;
I certainly wouldn’t walk on by, but that’s mainly due to a mix of factoring in moral uncertainty (deontologists would think me the devil) and not wanting the guilt of having walked on by. Also I’m certainly not 100% sure about the cluelessness critique, so there’s that too. The cluelessness critique seems sufficient to me to want to search for other ways than AMF to do the most good, but not to literally walk past a drowning child.
I certainly wouldn’t walk on by, but that’s mainly due to a mix of factoring in moral uncertainty (deontologists would think me the devil) and not wanting the guilt of having walked on by.
This makes some sense, but to take a different example, I’ve followed a lot of the COVID debates in EA and EA-adjacent circles, and literally not once have I seen cluelessness brought up as a reason to be concerned that maybe saving lives via faster lockdowns or more testing or more vaccines or whatever is not actually a good thing to do. Yet it seems obvious that some level of complex cluelessness applies here if it applies anywhere, and this is a case where simply ignoring COVID efforts and getting on with your daily life (as best one can) is what most people have done, and certainly not something I would expect to leave people struggling with guilt or facing harsh critique from deontologists.
As Greaves herself notes, such situations are ubiquitous, and the fact that cluelessness worries are only being felt in a very small subset of the situations should lead to a certain degree of skepticism that they are in fact what is really going on. But I don’t want to spend too much time throwing around speculation about intentions rather than focusing on the object-level arguments made, so I will leave this train of thought here.
This makes some sense, but to take a different example, I’ve followed a lot of the COVID debates in EA and EA-adjacent circles, and literally not once have I seen cluelessness brought up as a reason to be concerned
To be fair I would say that taking the cluelessness critique seriously is still quite fringe even within EA (my poll on Facebook provided some indication of this).
With an EA hat on I want us to sort out COVID because I think COVID is restricting our ability to do certain things that may be robustly good. With a non-EA hat on I want us to sort out COVID because lockdown is utterly boring (although it actually got me into EA and this forum a bit more, which is good) and I don’t want my friends and family (or myself!) to be at risk of dying from it.
and this is a case where simply ignoring COVID efforts and getting on with your daily life (as best one can) is what most people have done, and certainly not something I would expect to leave people struggling with guilt or facing harsh critique from deontologists.
Most people have decided to obey lockdowns and be careful in how they interact with others, in order to save lives. In terms of EAs not doing more (e.g. donating money), I think this comes down to the regular argument that COVID isn’t that neglected and that there are probably better ways to do good. In terms of saving lives, I think deontologists require you to save a drowning child in front of you, but I’m not actually sure how far that obligation extends temporally/spatially.
As Greaves herself notes, such situations are ubiquitous, and the fact that cluelessness worries are only being felt in a very small subset of the situations should lead to a certain degree of skepticism that they are in fact what is really going on.
This is interesting and slightly difficult to think about. I think that when I encounter decisions in non-EA life that I am complexly clueless about, I let my personal gut feeling take over. This doesn’t feel acceptable in EA situations because, well, EA is all about not letting personal gut feelings take over. So I guess this is my tentative answer to Greaves’ question.
For example it seems that mitigating climate change is a pretty surefire way to improve the medium-term future (in expectation). Would you agree with that?
I have complex cluelessness about the effects of climate change on wild animals, which could dominate the effects on humans and farmed animals.