Are GiveWell Top Charities Too Speculative?
Cross-posted to my blog.
The common claim: Unlike more speculative interventions, GiveWell top charities have really strong evidence that they do good.
The problem: Thanks to flow-through effects, GiveWell top charities could be much better than they look or they could be actively harmful, and we have no idea how big their actual impact is or if it’s even net positive.
Flow-Through Effects
Take the Against Malaria Foundation. It has the direct effect of preventing people from getting malaria, but it might have much larger flow-through effects. Here are some effects AMF might have:
- Increasing human population size by preventing deaths
- Decreasing human population size by accelerating the demographic transition
- Increasing people's economic welfare, which causes them to eat more animals
- Increasing people's economic welfare, which causes them to reduce wild animal populations
Increasing population might be good simply because there are more people alive with lives worth living. Accelerating the demographic transition (i.e. reducing population) might be good because it might make a country more stable, increasing international cooperation. This could be a very good thing. On the other hand, making a country more stable means there are more major players on the global stage, which could make cooperation harder.[1]
Some of these long-term effects will probably matter more than AMF’s immediate impact. We could say the same thing about GiveWell’s other top charities, although the long-term effects won’t be exactly the same.
Everything Is Uncertain
There’s pretty clear evidence that GiveWell top charities do a lot of direct good, but their flow-through effects are probably even bigger. If a charity like AMF has good direct effects but harmful flow-through effects, it’s probably harmful on balance. That means we can’t say with high confidence that AMF is net positive.
Among effects that are easy to document, yes, AMF is net positive (maybe). Maybe we could just ignore large long-term effects since we can’t really measure them, but I’m uncomfortable with that. If flow-through effects matter so much, is it really fair to assume that they cancel out in expectation?[2] We don’t know whether AMF has very good or very bad long-term effects. I tend to think the arguments are a little stronger for AMF having good effects, but I’m wary of optimism bias, especially for such speculative questions where biases can easily overwhelm logical reasoning; and I think a lot of people are too quick to trust speculative arguments about long-term effects.
So where does this leave us? Well, a lot of people use GiveWell top charities as a “fallback” position: “I’m not convinced by the evidence in favor of any intervention with potentially bigger effects, so I’m going to support AMF.” But if AMF might have negative flow-through effects, that fallback looks a lot weaker. Sure, you can argue that AMF has positive flow-through effects, but that’s a pretty speculative claim, so you’re not standing on any better ground than people who follow the fairly weak evidence that online ads can cost-effectively convince people to eat less meat, or people who support research on AI safety.
I don’t like speculative arguments. I much prefer dealing with questions where we have concrete evidence and understand the answer. In a lot of cases I prefer a well-established intervention over a speculative intervention with supposedly higher expected value. But it doesn’t look like we can escape speculative reasoning. For anything we do, there’s a good chance that unpredictable long-term effects have a bigger impact than any direct effects we can measure. Recently I contemplated the value of starting a happy rat farm as a way of doing good without having flow-through effects; but even a rat farm still requires buying a lot of food, which has a substantial effect on the environment that probably matters more than the rats’ direct happiness.
Nothing is certain. Everything is speculative. I have no idea what to do to make the world better. As always, more research is required.
Edited to clarify: I’m not trying to say that AMF is too speculative, and therefore we should give up and do nothing. I strongly encourage more people to donate to AMF. This is more meant as a response to the common claim that existential risk or factory farming interventions are too speculative, so we should support global poverty instead. In fact, everything is speculative, so trying to follow robust evidence only doesn’t get us that far. We have to make decisions in the face of high uncertainty.
Some discussion here.
Notes
[1] I recently heard Brian Tomasik make this last argument, and I had never heard it before. When factors this important can go unnoticed for so long, it makes me wary of paying too much attention to speculation about the far-future effects of present-day actions.
I broadly agree with this, but I’d put it a little differently.
If you think what most matters about your actions is their effect on the long-run future (due to Bostrom-style arguments), then GiveWell recommended charities aren’t especially “proven”, because we have little idea what their long-run effects are. And they weren’t even selected for having good long-run effects in the first place.
One response to this is to argue that the best proxy for having a good long-run impact is having a good short-run impact (e.g. via boosting economic growth).
Another response is to argue that we never have good information about long-run effects, so the best we can do is to focus on the things with the best short-run effects.
I also still think it’s fair to say GiveWell recommended charities are a “safe bet” in the sense that donating to them is very likely to do much more good than spending the money on your own consumption.
At the risk of sounding like a broken record, this is still a speculative claim, so if you make it, you can no longer say you’re following robust evidence only.
Yes I totally agree. I was just saying what the most common responses are, not agreeing with them.
cf. http://effective-altruism.com/ea/qx/two_observations_about_skeptical_vs_speculative/
I’ve heard this claim that “the best proxy for having a good long-run impact is having a good short-run impact” a couple of times now, but I haven’t seen anyone make any argument for it. Could someone provide a link or something? To me it’s not even clear why the economic impact of different charities like GiveDirectly and AMF should be proportional to their short-term impact.
It’s a controversial claim, and I don’t endorse it. One attempt is this: http://blog.givewell.org/2013/05/15/flow-through-effects/ which argues that general economic growth and human empowerment have lots of good long-run side effects, so that boosting these is a good thing to do. The main response to this is that that was true in the past, but if technological progress causes new x-risks, it’s not clear whether it’ll be true in the future.
Another strand of argument is to look at what rules of thumb people who had lots of impact in the past followed, and argue that something like “take really good opportunities to have a lot of short-run impact” seems like a better rule of thumb than “try to figure out what’s going to happen in the long-run future and how you can shape it.” I haven’t seen this argued for in writing though.
Also there have been arguments that the best way to shape the long-run future might be through “broad” interventions rather than “narrow” ones, and broad interventions are often things that involve doing short-term common sense good, like making people better educated. http://lesswrong.com/lw/hjb/a_proposed_adjustment_to_the_astronomical_waste/ http://effective-altruism.com/ea/r6/what_is_a_broad_intervention_and_what_is_a_narrow/
As I said on facebook, I think this mostly goes away (leaving a rather non-speculative case) if one puts even a little weight on special obligations to people in our generation:
I was just thinking about this again and I don’t believe it works.
Suppose we want to maximize expected value over multiple value systems. Let’s say there’s a 10% chance that we should only care about the current generation, and a 90% chance that generational status isn’t morally relevant (obviously this is a simplification but I believe the result generalizes). Then the expected utility of AMF is roughly

0.1 × (direct benefit to the current generation) + 0.9 × (direct benefit + far-future effects)

Far future effects still dominate.
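To make this concrete, here is a minimal sketch with made-up numbers (the 1 and 1000 are purely illustrative assumptions, not estimates from the post):

```python
# Made-up illustrative numbers: even with only 90% credence in a
# future-inclusive value system, the far-future term dominates.
p_current_only = 0.1        # credence: only the current generation matters
p_all_generations = 0.9     # credence: generational status is irrelevant

direct_benefit = 1.0        # AMF's direct benefit (normalized units)
far_future_effect = 1000.0  # hypothetical far-future effect, much larger

expected_utility = (p_current_only * direct_benefit
                    + p_all_generations * (direct_benefit + far_future_effect))
print(expected_utility)  # 901.0 -- dominated by the far-future term
```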
You could say it’s wrong to maximize expected utility across multiple value systems, but I don’t see how you can make reasonable decisions at all if you’re not trying to maximize expected utility. If you’re trying to “diversify” across multiple value systems then you’re doing something that’s explicitly bad according to a linear consequentialist value system, and you’d need some justification for why diversifying across value systems is better than maximizing expected value over value systems.
The scaling factors there are arbitrary. I can throw in theories that claim things are infinitely important.
This view is closer to ‘say that views you care about got resources in proportion to your attachment to/credence in them, then engage in moral trade from that point.’
Hi Carl,
I am not familiar with the moral uncertainty literature, but in my mind it would make sense to define the utility scale of each welfare theory such that the difference in utility between the best and worst possible state is always the same. For example, always assigning 1 to the best possible state, and −1 to the worst possible state. In this case, the weights of each welfare theory would represent their respective strength/plausibility, and therefore not be arbitrary?
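A minimal sketch of this normalization, with placeholder theories and numbers (nothing here is from the comment beyond the [-1, 1] rescaling idea):

```python
# Rescale each theory's utilities so its worst possible state maps to -1
# and its best possible state maps to +1, then weight by credence.

def normalize(u, u_worst, u_best):
    # Linear rescaling of u from [u_worst, u_best] onto [-1, 1].
    return -1 + 2 * (u - u_worst) / (u_best - u_worst)

# (credence, utility of the act, theory's worst state, theory's best state)
theories = [
    (0.6, 40.0, -100.0, 100.0),  # placeholder theory A
    (0.4, -2.0, -10.0, 10.0),    # placeholder theory B
]

score = sum(credence * normalize(u, lo, hi)
            for credence, u, lo, hi in theories)
print(score)  # 0.6 * 0.4 + 0.4 * (-0.2) = 0.16
```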
Okay, can you tell me if I’m understanding this correctly?
Say my ethical probability distribution is 10% prior existence utilitarianism and 90% total utilitarianism. Then the prior existence segment (call it P) gets $1 and the total existence segment (call it T) gets $9. P wants me to donate everything to AMF and T wants me to donate everything to MIRI, so I should donate $1 to AMF and $9 to MIRI. So that means people are justified in donating some portion of their budget to AMF, but not all unless they believe AMF also is the best charity for helping future generations.*
This is a nice idea but I worry it won’t work.
Even with healthy moral uncertainty, I think we should attach very little weight to moral theories that give future people’s utility negligible moral weight. The kinds of reasons that suggest we can give such people less weight don’t go any way toward suggesting that we can ignore them. To do this they’d have to show that future people’s moral weight was (more than!) inversely proportional to their temporal distance from us. But the reasons they give tend to show that we have special obligations to people in our generation, and say nothing about our obligations to people living in the year 3000AD vs people living in the year 30,000AD. [Maybe I’m missing an argument here?!] Thus any plausible moral theory will be such that the calculation is dominated by very long-term effects, and long-term effects will dominate our decision-making process.
Why would we put more weight on current generations, though? I’ve never seen a good argument for that. Surely there’s no meaningful moral difference between faraway, distant, unknown people alive today and faraway, distant, unknown people alive tomorrow. I can’t think of any arguments for charitable distribution which would fall apart in the case of people living in a different generation, or any arguments for agent relative moral value which depend specifically on someone living at the same time as you, or anything of the sort. Even if you believe that moral uncertainty is a meaningful issue, you still need reasons to favor one possibility over countervailing possibilities that cut in opposite directions.
If we assign value to future people, then AMF could very well be an exceptional way to make the long run worse. We don’t even have to give future people equal value; we just have to let future people’s value have equal potential to aggregate, and we get the same result.
Morality only provides judgements of one act or person over another. Morality doesn’t provide any appeal to a third, independent “value scale”, so it doesn’t make sense to try to cross-optimize across multiple moral systems. I don’t think there is any rhyme or reason to saying that it’s okay to have 1 unit of special obligation moral value at the expense of 10 units of time-egalitarian moral value, or 20 units, or anything of the sort.
So you’re saying that basically “this action is really good according to moral system A, and only a little bit bad according to moral system B, so in this case moral system A dominates.” But these descriptors of something being very good or slightly bad only mean anything in reference to other moral outcomes within that moral system. It’s like saying “this car is faster than that car is loud”.
Carl’s point, though not fully clarified above, is that you can just pick a different intervention that does well on moral system B and is only a little bit bad according to A, pair it off with AMF, and now you have a portfolio that is great according to both systems. For this not to work, AMF would have to be particularly bad according to B (bad enough that we can’t find something to cancel it out), rather than just a little bit bad, which a priori is rather unlikely.
AMF being a little bad for x risk could mean an EV of thousands or millions of people not living. The problem is that your assumptions of something being “a little bit bad” or “very bad” are only meaningful in reference to that moral system.
My point is that it’s not coherent to try to optimize multiple moral systems because there is no third scale of meta morality to compare things to. If you want you can assign greater weight to existing people to account for your uncertainty about their moral value, but in no case do you maximize moral value by splitting into multiple causes. If AMF maximizes moral value then it wouldn’t make sense to maximize anything else, whereas if AMF doesn’t maximize moral value then you shouldn’t give it any money at all. So yes it will work but it won’t be the morally optimal thing to do.
Nope, ‘little bit bad’ is just relative to other interventions designed to work through that moral framework. No judgement about which system is better or more important is necessary.
Sure, but once you choose to act within a single moral framework, does pairing charities off into portfolios make any sense at all? Nope.
My donations are joint with my partner. We have different moral frameworks. EA at large has a wide variety of moral frameworks. And my moral framework now is likely different to my moral framework 10 years down the line which will in turn be different to my framework 20 years down the line.
Once you’re looking at any set of donations which you cannot entirely control (which in fact includes your own donations, accounting for different beliefs at different times), thinking in terms of portfolios, trade-offs, and balancing acts makes sense.
For a concrete example, I assign non-trivial probability to coming round to the view that animal suffering is really, really important within the next 10 years. So out of deference to my future self (who, all else being equal, is probably smarter and better-informed than I am) I’d like to avoid interventions that are very bad for animals, in Carl’s sense of ‘very bad’. But his argument highlights why I shouldn’t worry so much about AMF being just a little bit bad on that front, relative to interventions designed to work in that field, because in the event that I do come round to that point of view I’ll be able to overwhelm that badness with relatively small donations to animal charities that my future self will presumably want to make anyway. And, most importantly, this will very likely continue to be true regardless of whether AMF turns out to be net positive or net negative for overall suffering according to my future self.
That’s one actual real-world example of why I think in these terms. I could come up with many others if so desired; the framework is powerful.
Yes, and this is a special case of people with different goals trying to fit together. My point was about individual agents’ goals.
I don’t think so. If you can’t control certain donations then they’re irrelevant to your decision.
This doesn’t seem right—if you got terminal cancer, presumably you wouldn’t consider that a good reason to suddenly ignore animals. Rather, you are uncertain about animals’ moral value. So what you should do is give your best-guess, most-informed estimate about animal value and rely on that. If you expect a high chance that you will find reasons to care about animals more, but a low chance that you will find reasons to care about animals less, then your current estimate is too low, and you should start caring more about animals right now until you have an unbiased estimator where the chances of being wrong are the same in either direction.
In such a case, you should donate to whichever charity maximizes value under this framework, and it isn’t reasonable to expect to be likely to change beliefs in any particular direction.
Sure, please do.
An old write-up of a component of this argument: http://robertwiblin.com/2012/04/14/flow-on-effects-can-be-key-to-charity/
I alluded to this concern here:
“I believe in the overwhelming importance of shaping the long term future. In my view most causal chains that could actually matter are likely to be very long by normal standards. But they might at least have many paths to impact, or be robust (i.e. have few weak steps).

People who say they are working on broad, robust or short chains usually ignore the major uncertainties about whether the farther out regions of the chain they are a part of are positive, neutral or negative in value. I think this is dangerous and makes these plans less reliable than they superficially appear to be.

If any single step in a chain produces an output of zero, or negative expected value (e.g. your plan has many paths to increasing our forecasting ability, but it turns out that doing so is harmful), then the whole rest of that chain isn’t desirable.”
http://effective-altruism.com/ea/r6/what_is_a_broad_intervention_and_what_is_a_narrow/
“There’s pretty clear evidence that GiveWell top charities do a lot of direct good, but their flow-through effects are probably even bigger.”
You don’t make an argument for why this would be true, do you?
I haven’t put much thought into this, so I might easily be wrong (and I’m happy to be convinced otherwise), but it doesn’t look that way to me.
Let’s look at it from the perspective of one child not dying from malaria due to AMF. One child being alive has an extremely positive impact on the child and its family. It seems very implausible to me that this one child will on average contribute to making the world a worse place so much that it comes even close to outweighing the benefit of the child continuing to live. I’d expect the life of the child to be far more positive than any negative outcomes.
(Same holds for positive flow through effects.)
I’d suspect that “so-and-so many thousand children get to live” just doesn’t sound that great due to scope insensitivity, and this is why, in a comparison, the sheer magnitude of the good it has caused doesn’t come across that well.
I didn’t argue that AMF’s flow-through effects exceed its direct effects because (a) it’s widely (although not universally) accepted and (b) it’s hard to argue for. But this is probably worth addressing, so I’ll try and give a brief explanation of why I expect this to be true. Thanks for bringing it up. Disclaimer: these arguments are probably not the best since I haven’t thought about this much.
Small changes to global civilization have large and potentially long-lasting effects. If, for example, preventing someone from getting malaria slightly speeds up scientific progress, that could improve people’s lives for potentially millions of years into the future; or if we colonize other planets, it could affect trillions or quadrillions of people per generation.
If you believe non-human animals have substantial moral value (which I think you should), then it’s pretty clear that anything you do to affect humans has an even larger effect on non-human animals. Preventing someone from dying means they will go on to eat a lot of factory-farmed animals (although more so in emerging economies like China than poorer countries like Ghana), and the animals they eat will likely experience more suffering than they themselves would in their entire lives. Plus any effect a human has on the environment will change wild animal populations; it’s pretty unclear what sorts of effects are positive or negative here, but they’re definitely large.
Now, even if you don’t believe AMF has large flow-through effects, how robust is the evidence for this belief? My basic argument still applies here: the claim that AMF has small flow-through effects is a pretty speculative claim, so we still can’t say with high confidence how big AMF’s impact is or whether it’s even net positive.
When you say it’s widely accepted, whom do you mean?
I should have mentioned this in the original comment, but I was mostly concerned with effects on humans. I find the claim that there are big flow-through effects on animals (see the poor meat-eater problem) much more plausible.
(I also didn’t mean that I think that it’s implausible that AMF has high flow through effects, but that claiming that with high confidence seems quite off to me.)
That’s why I was arguing from the individual child’s perspective: the effect on the child and their family is extremely positive, while it’s very unlikely that this child will make important scientific discoveries. With the latter part of the quote you’re echoing roughly the same thing I said, which is that the main effect of AMF comes from the continued existence of the child (and here, their descendants).
You were the one saying that the flow through effects are probably bigger than the direct impact.
I hear a lot of EAs claim that flow-through effects dominate direct effects, and I know a lot of people in person who believe this.
Let’s assume for now that flow-through effects are probably smaller than the direct impact. This is still a fairly speculative claim since we don’t have strong evidence that this is true. That means there’s a possibility that AMF has large negative flow-through effects that overwhelm its benefits. Even if we don’t think this possibility is very likely, it still means we can’t claim there’s a robust case for AMF having a clear net positive impact. I’m not saying AMF is very likely to be net harmful, I’m just saying it’s not astronomically unlikely, which means we can’t claim with high confidence that AMF is net beneficial. Does that make sense?
What’s ‘high confidence’?
Let’s say the direct effect of a donation to AMF is +100 utilons, and the flow-through effects are normally distributed around 0 with st. dev. 50 utilons. That would seem to meet your criteria that it’s not astronomically unlikely that AMF is net harmful, but I can still claim with ~97.7% confidence that it’s net beneficial. Which personally I would call ‘high confidence’.
That all seems fairly obvious to me, which makes me think I’m probably not understanding your position correctly here. I also assume that your position isn’t that ‘we can never be 100% certain about anything and this is annoying’. But I’m not sure what ‘middle ground’ you are aiming for with the claim ‘everything is uncertain’.
I would also describe that scenario as “high confidence.” My best guess on the actual numbers is more like, the direct effect of a donation to AMF is +1 utilon, and flow-through effects are normally distributed around 100 with standard deviation 500. So it’s net positive in expectation but still has a high probability (~42% for the numbers given) of being net negative.
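For anyone who wants to check both figures, here is a quick sketch using only the distributions stated in this thread:

```python
from statistics import NormalDist

# Larks's numbers: direct effect +100 utilons, flow-through ~ N(0, 50),
# so the net effect is ~ N(100, 50).
p_negative = NormalDist(mu=100, sigma=50).cdf(0)
print(p_negative)  # ~0.023, i.e. ~97.7% confidence of net benefit

# My numbers: direct effect +1 utilon, flow-through ~ N(100, 500),
# so the net effect is ~ N(101, 500).
p_negative = NormalDist(mu=101, sigma=500).cdf(0)
print(p_negative)  # ~0.420, i.e. ~42% chance of being net negative
```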
I appreciate that those would be your numbers; I’m just pointing out that you do actually need that high standard deviation (i.e. you need to believe that the flow-through effects will likely be larger than the direct effects) in order to justify that claim. Which is what Denise was saying in the first place.
You appeared to think you could get away with allowing that flow-through effects are probably smaller than the direct impact and then make the much weaker claim ‘there’s a possibility that AMF has large negative flow-through effects that overwhelm its benefits’ to get to your conclusion that you can’t have high confidence in AMF being good. But I don’t think you can actually allow that; I’m fully capable of accepting that argument and still considering AMF ‘high confidence’ good.
Of course, you might just have a good argument for focusing on the flow-through effects, which would render this discussion moot.
Those seem really high flow through effects to me! £2000 saves one life, but you could easily see it doing as much good as saving 600!
How are you arriving at the figure? The argument that “if you value all times equally, the flow through effects are 99.99...% of the impact” would actually seem to show that they dominated the immediate effects much more than this. (I’m hoping there’s a reason why this observation is very misleading.) So what informal argument are you using?
I more or less made up the numbers on the spot. I expect flow-through effects to dominate direct effects, but I don’t know if I should assume that they will be astronomically bigger. The argument I’m making here is really more qualitative. In practice, I assume that AMF takes $3000 to save a life, but I don’t put much credence in the certainty of this number.
Denise, if you value all time periods equally, then the flow through effects are 99%+ of the total impact.
The flow-through effects then only have to be very slightly negative to outweigh the immediate benefit.
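A toy illustration of this point (all numbers are invented for the example, not estimates):

```python
# Invented numbers: if all time periods count equally, a per-period
# flow-through effect that is tiny relative to the direct benefit
# still dominates once summed over enough periods.
direct_benefit = 1.0        # immediate benefit of saving a life (normalized)
per_period_effect = -0.001  # very slightly negative flow-through per period
future_periods = 10_000     # future periods valued equally with the present

total = direct_benefit + per_period_effect * future_periods
print(total)  # -9.0: the tiny negative effect outweighs the direct benefit
```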
Would you similarly doubt that, on expectation, someone murdering someone else had bad consequences overall? Someone slapping you very hard in the face?
This kind of reasoning seems to bring about a universal scepticism about whether we’re doing Good. Even if you think you can pin down the long term effects, you have no idea about the very long term effects (and everything else is negligible compared to very long term effects).
For what it’s worth, I definitely don’t think we should throw our hands up and say that everything is too uncertain, so we should do nothing. Instead we have to accept that we’re going to have high levels of uncertainty, and make decisions based on that. I’m not sure it’s reasonable to say that GiveWell top charities are a “safe bet”, which means they don’t have a clear advantage over far future interventions. You could argue that we should favor GW top charities because they have better feedback loops—I discuss this here.
I think the effects of murdering someone are more robustly bad than the effects of reducing poverty are good (the latter are also probably positive, but less obviously so).
Why? What are the very long term effects of a murder?
Murdering also decreases world population and consumption, which decreases problems like global warming, overfishing, etc. and probably reduces some existential risks.
Increasing violence and expectation of violence seems to lead to worse values and a more cruel/selfish world.
Of course it’s also among the worst things you can do under all non-consequentialist ethics.
A previous post on this topic:
On Progress and Prosperity http://effective-altruism.com/ea/9f/on_progress_and_prosperity/
For a bit more on footnote 1, see also: https://www.reddit.com/r/IRstudies/comments/3jk0ks/is_the_economic_development_of_the_global_south/
It seems that the best approach to this sort of uncertainty is probabilistic thinking outlined by Max Harms here.
Rather than looking for certainty of evidence, we should look for sufficiency of evidence to act. Thus, we should not ask the question “will this do the most good” before acting, but rather “do I have sufficient evidence that this action will likely lead to the most good”? Otherwise, we risk falling into “analysis paralysis” and information bias, the thinking error of asking for too much information before acting.
Why is it better to look for sufficient evidence rather than maximizing expected value (keeping in mind that we can’t take expected value estimates literally)? Or are you just saying the same thing in a different way?
Because the question of sufficient evidence enables us to avoid information bias/analysis paralysis. There are high opportunity costs to not acting, and that is a very dangerous trap to fall into. The longer we deliberate, the more time slips by while we are gathering evidence. This causes us to fall into the status quo bias.
I don’t see how information bias would go away if we were only worried about sufficient evidence, and analysis paralysis doesn’t seem to be a problem with our current community. People like me and Michael might be really unsure about these things, but it doesn’t really inhibit our lives (afaik). I at least don’t spend too much time thinking about these things, but what time I do spend seems to lead towards robustly better coherence and understanding of the issues.
We might be miscommunicating about information bias. Here is a specific description of information bias: “information bias is believing that the more information that can be acquired to make a decision, the better, even if that extra information is irrelevant for the decision.”
In other words, if we have sufficient evidence to make a decision, then we shouldn’t worry about acquiring additional evidence, since that evidence is irrelevant for making decisions. This was in response to Michael’s earlier points about nothing being certain and the concerns about acting when nothing is certain.
Now, this doesn’t mean we can’t think about these issues, and try to gain robustly better coherence and understanding of the issues, as you say. It only speaks to the difference between thinking and actions. If we spend too much time thinking and gathering information, we don’t spend that time acting to advance human flourishing. Thinking is resource-intensive, and we need to understand that as an opportunity cost. It might be a very worthwhile activity, but it’s a trade-off against other worthwhile activities. That’s my whole point.
One extra flow-through effect you should mention is AMF and GiveDirectly’s effect on global consumption equality. GD’s is positive in all periods. AMF is initially negative (a family has to split their income over more children temporarily), and then eventually positive through development and fertility effects.
What’s good and bad here? I have no idea.
What sort of effects would increasing equality have that just increasing economic welfare wouldn’t?
I have to disagree with the “small effects” crowd.
Putting aside any notions of justice (which I suppose is implied), this depends on what you mean by “economic welfare.” A perfectly tuned definition of economic welfare could encompass the economy’s ability to satisfy the needs and desires of all of its people, but we tend to use simpler measures, such as GDP.
If you mean something like GDP, the manner in which such a value increases has an enormous impact on resultant human welfare, particularly on who the benefits flow to. A huge element of this is the marginal utility of goods: if you already make $50,000 a year, an additional $1,000 will not have nearly the same impact on your well-being as it would for someone who usually subsists on $500. This is one of the central (usually implicit) premises of GiveDirectly, and perhaps all charity.
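To illustrate the marginal-utility point numerically (log utility is just one standard modeling assumption, not something the comment commits to):

```python
import math

def log_utility_gain(income, transfer):
    # Gain in log-utility from adding `transfer` to `income`.
    return math.log(income + transfer) - math.log(income)

print(log_utility_gain(500, 1_000))     # ~1.10: tripling a $500/year income
print(log_utility_gain(50_000, 1_000))  # ~0.02: a 2% bump to $50,000/year
```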
The disproportionate reduction of suffering, increase of satisfaction, and avoidance of alienation brought about by a more equal distribution can only be described as “small effects” by those whose material needs were long since fully served, and have little empathy for the less fortunate.
If your goal is merely to increase total economic capacity with no regard for who or what that flows to, your best bet is likely to fund first-world business ventures and research institutions. But maybe you should drop the “altruist” label at that point.
I’ve never been more worried about this movement.
I think you’re missing the important part of the question here. Of course I agree that giving $1000 to someone who makes $500/year is worth a lot more than giving it to someone who makes $50,000/year. The question is, why is increasing equality better than just giving poor people more money? Like, suppose we have two choices:
1. Give $1000 to someone making $500/year.
2. Give $1000 to someone making $500/year and also give $1000 to someone making $50,000.
It seems to me that (2) is a little better than (1), even though (1) reduces inequality more.
Less spending on security, economies of scale as more consumers buy the same goods. Again, just small effects.
Changes in e.g. social resentment.
Also more potential for e.g. inter-state competition if you don’t have a single clear global leader country, or most countries are too poor to hope to compete.
Probably small effects.
One thing that hasn’t been pointed out as far as I can see is that long-run effects can’t be easily studied in a controlled fashion the way the primary effects of GiveWell’s top interventions have been studied. Hence there are several factors that can diminish them: we may develop effective ways to counter the effects, for good or ill; the effects may become irrelevant, e.g., due to the extinction of the species that care about them; or, in the counterfactual case, the effects may have been caused anyway a little later. With every year, the probability that any of these occurs or would have occurred increases, something that researchers try to account for by applying different kinds of exponential discount rates. The significant uncertainty about flow-through effects today should probably further increase our uncertainty about them in the future. Hence, insofar as positive or negative flow-through effects only become significant in the far future, we should not expect them to dominate the primary effects.
This does not apply to more near-term flow-through effects. Here many more efforts at quantifying small aspects of them, like Kyle’s, will be valuable.
One flow-through long-term meta-effect that AMF and other GiveWell-recommended charities have is to cause people to be more oriented toward effective giving and using research-based evidence to support their giving choices. This effect seems to me to be a net positive in the vast majority of cases.
Again, this is an issue where it looks net positive but it’s not robustly net positive. Promoting effective giving could be harmful if effective global poverty charities are actually net negative, and promoting effective giving only causes people to cause harm more effectively. (Brian Tomasik once raised the concern that reducing existential risk could lead to astronomical future suffering, and spreading effective altruism might be bad if it causes more people to support existential risk reduction.) I believe this is probably false, but I’m not that confident that it’s false, so I can’t say with conviction that promoting effective giving is net positive.
I see your point. Ok, let’s narrow down. Would you say that encouraging people to use evidence to make their decisions, in any area including giving, is robustly net positive?
Encouraging people to use evidence still has similar concerns, e.g. they might become more effective at doing harmful things.
I do not know of a single intervention that’s robustly net positive.
Could it not plausibly be the case that supporting rigorous research explicitly into how best to reduce wild-animal suffering is robustly net positive? I say this because whenever I’m making cause-prioritization considerations, the concern that always dominates seems to be wild-animal suffering and the effect that intervention x (whether it’s global poverty or domesticated animal welfare) will have on it.
General promotion of anti-speciesism, with equal emphasis put on wild-animals, would also seem to be robustly net positive, although this general promotion would be difficult to do and may have a low success rate, so it would probably be outweighed in an expected-utility calculation by more speculative interventions such as vegan advocacy which have an unclear sign when it comes to wild-animal suffering.
Suppose we invest more into researching wild animal suffering. We might become somewhat confident that an intervention is valuable and then implement it, but this intervention turns out to be extremely harmful. WAS is sufficiently muddy that interventions might often have the opposite of the desired effect. Or perhaps research leads us to conclude that we need to halt space exploration to prevent people from spreading WAS throughout the galaxy, but in fact it would be beneficial to have more wild animals, or we would terraform new planets in a way that doesn’t cause WAS. Or, more likely, the research will just accomplish nothing.
I think the value of higher quality and more information in terms of wild animal suffering will still be a net positive, meaning that funding research in WAS could be highly valuable. I say ‘could’ only because something else might still be more valuable. But if, on expected value, it seems like the best thing to do, the uncertainties shouldn’t put us off too much, if at all.
Yes, I agree that WAS research has a high expected value. My point was that it has a non-trivial probability (say, >10%) of being harmful.