People might be interested in this discussion of flow-through effects on the EA Facebook group. It has already prompted a lengthy comment thread there, but it’s worth flagging this comment from Holden of GiveWell (whose views are similar to my own):
I disagree with Eliezer’s comment, to what seems like a greater degree than other commenters. I’m not hoping to get into a long debate and not attempting to give a full defense, but here’s an outline of what I think. I focus on why I reject Eliezer’s first sentence; I’ve deliberately stayed away from the question of what Holden should be doing, what current factory-farming-focused people should be doing, etc. and instead focused on whether there are imaginable people who are rational to invoke flow-through effects as a reason for working on general short/medium-term empowerment even when they value future generations.
* I think if you tried to list all the people who, with at least 20 years of hindsight, seem to have done the most good for the people of the very far future (or more so, people of the year 2100), you would end up feeling that statements like “In general, when somebody who cares about Y designed their project X to mainly impact Y, it’s very unlikely that X is also the best way to accomplish some unrelated goal Z.” are not right.
* I think it is great that there are people trying to make their best guesses about how the next 100 years and beyond are likely to play out, and come up with the best possible interventions for future people based on those guesses. I count myself among such people. But I think we also need to bear in mind that such endeavors are historically not very successful, that making such predictions in a helpful way may just not be something we’re able to do, and that there does seem to be a systematic tendency for actions that increase human empowerment to have better results than anticipated at the time. Thus, I believe there is a very real case for “Solve problems and do good things that you have an opportunity to do well; don’t worry too much about where it’s all going; and certainly don’t feel that just because you have, say, a belief that you have a zero discount rate or a belief that pigs have nontrivial moral value, this is sufficient to say you’re blowing it if you’re not working directly on AI risk or factory farming related issues.” I believe that all of the work we are trying to do stands on the shoulders of a very large number of people who took something much closer to the latter attitude than to Eliezer’s. It’s certainly true that your impact on x-risk is very diffused if it comes through general empowerment, but I think there are plenty of people who shouldn’t rationally believe they can get a larger impact by aiming directly.
So I think I’m defending some version of the “bailey” here. I’m sure there are all sorts of ridiculous ways to take this line of reasoning too far, and I can see how—taken to the limit—it just comes down to “don’t try to do the most good, just do what you feel,” and I’m not defending that. I’m certainly not saying that antipoverty interventions have anything other than minuscule impacts on x-risk (though many people aren’t in a position to believe that more-than-minuscule is an option); I’m not endorsing anything like the symphony comments, and I don’t believe I’m on a slippery slope to doing so. But I think there are cases where tractability trumps importance … even when our best back-of-the-envelope calculations don’t seem to say so. I still think that if all far-future-focused donation options look terrible to person X, person X is being reasonable to support strong antipoverty orgs instead even assuming that person X cares about future people too. There are other contexts in which I could imagine invoking this argument as well, w/r/t e.g. career choice. I wouldn’t invoke it in the way Eliezer quotes it.
Stepping back to the wider significance of this debate. Certain strains/people in EA give the impression that the ~whole world’s attempts at doing good have expected value that amounts to a rounding error, when put alongside the good being done by a very small number of people (mostly in the community) working on particular highly specific paths to impact. This bothers me a lot; that’s partly because of how I think it makes EA look to others, but I’d be OK with that if I were intellectually on board with the belief. I’m not. I think Eliezer is, and that that’s where his comments are coming from. That’s fine—if I shared Eliezer’s views and confidence in those views re: the far future, I would agree with what he says here. But I think most of the people agreeing with him here shouldn’t be. (Again, I’m not defending the exact arguments he quotes—I’m disagreeing with his first sentence.)