One more example of a cause which may function like a “public good” within the EA community: promoting international cooperation. Many important causes are global public goods (that is, they benefit the whole world, so any one nation has an incentive to free-ride on other nations’ contributions), including global poverty, climate change, x-risk reduction, and animal welfare. I know that FHI already has some research on building international cooperation. I would guess that some EAs who primarily give to global poverty would be willing to shift funding towards building international cooperation if some EAs who normally give to AI safety do the same.
I agree with your intuition about what a “cooperative” cause prioritization might look like, although I do think a lot more work would need to be done to formalize it. I also think it may not always make sense to use cooperative cause prioritization: if everyone else always acts non-cooperatively, you should too.
I’m actually pretty skeptical of the idea that EA tends to fund causes which are widely valued by people as a whole. It could be true, but it would be a very convenient coincidence. EA seems to be made up of people with fairly unusual value systems (which, I’d expect, is partly what leads EAs to view some causes as orders of magnitude more important than the causes other people choose to fund). It would be surprising if optimizing independently for the average EA value system led to the same funding choices as optimizing for some combination of the value systems in the general population. While I agree that global poverty work seems to be pretty broadly valued (many governments and international organizations are devoted to it), I’m unsure about things like x-risk reduction. Have you seen any evidence that it is broadly popular? Does the UN have an initiative on x-risk?
I would imagine that work which improves institutions is one cause area which would look significantly more important in the cooperative framework. As I mention in the post, governments are one of the main ways that groups of people solve collective action problems, so improving their functioning would probably benefit most value systems. This would involve improving both formal institutions (e.g. constitutions) and informal institutions (e.g. civic social norms). In the cooperative equilibrium, we could all be made better off because people of all different value systems would put a significant amount of resources towards building and maintaining strong institutions.
A (tentative) response to your second to last paragraph: the preferences of animals and future generations would probably not be directly considered when constructing the cooperative world portfolio. Gains from cooperation come from people who have control over resources working together so that they’re better off than in the case where they independently spend their resources. Animals do not control any resources, so there are no gains from cooperating with them. Just like in the non-cooperative case, the preferences of animals will only be reflected indirectly due to people who care about animals (just to be clear: I do think that we should care about animals and future people). I expect this is mostly true of future generations as well, but maybe there is some room for inter-temporal cooperation.
Thanks a lot for the comment. Here are a few points:
1. You’re right that the simple climate change example won’t always be a prisoner’s dilemma. However, I think that’s mainly because I assumed constant returns to scale for all three causes. At the bottom of this write-up I have an example with three causes that all have log returns. As long as both funders value the causes positively and don’t have identical valuations, a Pareto improvement is possible through cooperation (unless I’m making a mistake in the proof, which is possible); there’s a small numerical sketch of this at the end of this comment. So I think the existence of collective action problems is more general than the climate change example would make it seem.
2. It’s a very nice point that the gains from cooperation may be small in magnitude, even if they’re positive. That is definitely possible. But I’m a little skeptical that large valuation differences between the 4 ‘schools’ of EA donors mean that the gains from cooperation are likely to be small. I think even within those schools there are significant disagreements about causes. For example, within the long-termist school, disagreements on whether we’re living in an extremely influential time, or on how to value population increases, can lead to very large disagreements in the valuation of causes. Also, when people have very large differences in their valuations of direct causes, the opportunity for conflict on the advocacy front seems to increase (see Phil Trammell’s post here).
I agree that it would be useful to get more of an idea of when the prisoner’s dilemma is likely to be severe. Right now I don’t think I have much more to add on that.
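As a rough illustration of the point in 1 above, here is a small numerical sketch. It is not the example from the linked write-up: the weights, the log(1 + funding) returns, and the budgets are assumptions I am making up purely for illustration. Two funders each have a “pet” cause the other doesn’t value at all, plus a shared cause both value; acting independently they free-ride on the shared cause, and a cooperative agreement to shift funding toward it leaves both strictly better off even though every cause has diminishing returns.

```python
import numpy as np

# Two funders, each with a budget of 1, and three causes: funder A's pet cause,
# funder B's pet cause, and a shared cause that both value with weight w.
# Each cause has log returns: impact = log(1 + total funding).
# All numbers are illustrative assumptions, not the ones from the write-up.
w = 0.8

def funder_utility(own_pet_spend, shared_total):
    """Utility of a funder who spends own_pet_spend on their pet cause,
    given total (both funders') spending on the shared cause."""
    return np.log(1 + own_pet_spend) + w * np.log(1 + shared_total)

def best_response_shared(other_shared):
    """A funder's optimal spending on the shared cause, taking the other funder's
    shared-cause spending as given (from the first-order condition
    1/(1 + pet) = w/(1 + shared_total), with pet = 1 - own_shared)."""
    g = (2 * w - 1 - other_shared) / (1 + w)
    return float(np.clip(g, 0.0, 1.0))

# Non-cooperative outcome: iterate best responses to the (symmetric) Nash equilibrium.
g_nash = 0.5
for _ in range(200):
    g_nash = best_response_shared(g_nash)
u_nash = funder_utility(1 - g_nash, 2 * g_nash)

# Cooperative outcome: both funders commit to the same split, chosen to maximize
# their common utility (a grid search is enough for this illustration).
splits = np.linspace(0, 1, 10001)
u_coop = max(funder_utility(1 - s, 2 * s) for s in splits)

print(f"shared-cause spending per funder at Nash: {g_nash:.3f}")  # ~0.21
print(f"utility per funder, non-cooperative:      {u_nash:.3f}")  # ~0.87
print(f"utility per funder, cooperative:          {u_coop:.3f}")  # ~0.97
# Both funders are strictly better off under the cooperative split (a Pareto
# improvement), even though returns to every cause are diminishing.
```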
Thanks for the clarification. I apologize for making it sound as if 80k specifically endorsed not cooperating.
Thanks for the comment. First, I’d like to point out that I think there’s a good chance that the collective action problem within EA isn’t so bad because, as I mentioned in the post, there has been a fairly large emphasis on cooperating with others within EA. It’s when interacting with people outside of EA that I think we’re acting non-cooperatively.
However, it’s still worth discussing whether there are major unsolved collective action problems within EA. I’ll give some possible examples here, but note that I’m very unsure about many of them. First, here are some causes which I think benefit EAs of many different value systems and thus would be underfunded if people were acting non-cooperatively:
1. General infrastructure, including the EA Forum, EA Funds, and EA Global. This also includes the mechanisms for cooperation which I mentioned in the post. All of these things are like public goods in that they probably benefit nearly every value system within EA. If true, this also means that the “EA meta fund” may be the most public-good-like of the four EA funds.
2. The development of informal norms within the community (like being nice, not overstating claims or making misleading arguments, and cooperating with others). The development and maintenance of these norms also seems to be a public good which benefits all value systems.
3. (This is the most speculative one.) More long-term-oriented approaches to near-term EA cause areas. An example is approaches to global development which involve building better and lasting political institutions (see this forum post). This may represent a kind of compromise between some long-termist EAs (who may normally donate to AI safety) and global development EAs (who would normally donate to short-term development initiatives like AMF).
And here are some causes which I think are viewed as harmful by some value systems and thus would be overfunded if people acted non-cooperatively:
1. Advocacy efforts to convince people to convert from other EA cause areas to your own. As I mentioned in the post, these can be valued negatively by other value systems.
2. Causes which increase (or decrease) the population. People disagree on whether creating more lives is on average good or bad (for example, some suffering-focused EAs may think that creating more human lives is on average bad; conversely, some people may think that creating more farm animal lives is on average good). This means that causes which increase (decrease) the population will be viewed as harmful by those who view population increases (decreases) as bad. Brian Tomasik’s example at the end of this post is along those lines.
So, in general, I don’t think I agree that the EA community is inherently unlikely to have major collective action problems. It seems more likely that EA has solved most of its internal collective action problems by emphasizing cooperation.
Thanks for that reference! I hadn’t come across that before. I think the main difference is that for most of my post I’m considering public goods problems among people who are completely unselfish but have different moral values. But problems also exist when people have identical moral values and some level of selfishness. Paul Christiano’s post does a nice job of explaining that case. Milton Friedman also wrote about that problem (specifically, he talked about how poverty alleviation is a public good).
Thanks for the post!
For people especially interested in this topic, it might be useful to know that there’s a very similar literature within academic economics called “Directed Technical Change”. See Acemoglu (2002) for a well-cited reference.
Although that literature has mostly focused on how different technological developments will impact wage inequality, the models used can be applied (I think) to a lot of the topics mentioned in your post.
Thanks for the comment. I agree that R&D costs are very important and can lead to increasing marginal returns. The HIV example is a good one, I think.
I agree that moving to explicit cost-effectiveness modeling is ideal in many situations. However, the arguments that I gave in the post also apply to the use of neglectedness for initial scoping. If neglectedness is a poor predictor of marginal impact, then it will not be useful for initial scoping.
Thanks for the response. I agree that social norms and politics are areas where increasing returns seem likely.
Thanks for this comment! The links were helpful. I have a few comments on your points:
“Empirically, we do see systematic diminishing returns to R&D inputs across fields of scientific and technological innovation”
After reading the introduction of the article you linked, I’m not sure that it has found evidence of diminishing returns to research, or at least not the kind of diminishing returns that we would care about. They find that the number of researchers required to double GDP (or any other measure of output) has increased over time, but that doesn’t mean that the number of researchers required to increase GDP by a fixed amount has increased. In fact, if you take their Moore’s law example, the number of transistors added to a computer chip per researcher per year is about 58,000 times larger than it was in the early 70s (it takes 18 times more researchers to double the number of transistors, but that number of transistors is about a million times larger than it was in the 70s). When it comes to research on how to do the most good, I think we would care about research output in levels, rather than in percentage terms (so I only care how many lives a health intervention would save at time t, rather than how many lives it would save as a percentage of the total number of lives at time t).
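To make the arithmetic behind that explicit (using the rounded figures above, which are my approximations rather than the paper’s exact numbers):

```python
# Back-of-the-envelope version of the Moore's law point above. The "about a
# million times" and "18 times" figures are the rounded numbers quoted in the
# text, so the result is only approximate.
level_growth = 1e6        # transistor counts today vs. the early 1970s
researcher_growth = 18    # researchers needed to double the count, today vs. then

# With a roughly constant doubling time, transistors *added* per year scale with
# the current level, so output per researcher scales as level / researchers:
output_per_researcher_ratio = level_growth / researcher_growth
print(f"{output_per_researcher_ratio:,.0f}")  # ~56,000: same order as the 58,000 figure
```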
“In politics and public policy the literatures on lobbying and campaign finance suggest diminishing returns”
I’m struggling to see how those articles you linked are finding diminishing returns. Is there something I’m missing? The lobbying article says that the effectiveness of lobbying is larger when an issue does not receive much public attention, but that doesn’t mean that, for the same issue, the effectiveness of lobbying spending will drop with each dollar spent. Similarly, the campaign finance article mentions studies that find no causal connection between ad-spending and winning an election for general elections and others which show a causal connection for primary and local elections. I don’t see how this means that my second dollar donated to a campaign will have less expected value than my first dollar.
As antonin_broi mentioned in another comment, political causes seem to have increasing returns built into them. You need a majority to get a law passed or to get someone elected, so under complete certainty there would be zero marginal value to convincing people to vote your way until you reach the pivotal (median) voter; after that point, there will once again be zero marginal value to buying additional votes.
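A toy way to see that threshold structure (the numbers are made up purely for illustration):

```python
# Toy illustration of the threshold structure described above: under complete
# certainty, persuaded votes are worth nothing at the margin until you cross
# the majority threshold, where the entire value of winning arrives at once.
def value_of_votes(votes_persuaded: int, votes_needed: int = 1_000,
                   value_of_winning: float = 1.0) -> float:
    return value_of_winning if votes_persuaded >= votes_needed else 0.0

for v in [0, 500, 999, 1_000, 1_500]:
    print(v, value_of_votes(v))
# 0, 500 and 999 votes are worth 0.0; 1,000 and 1,500 are worth 1.0.
# All of the value is concentrated at the threshold -- very different from the
# smooth diminishing returns that the neglectedness heuristic assumes.
```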
“In growing new movements, there is an element of compounding returns, as new participants carry forward work (including further growth), and so influencing; this topic has been the subject of a fair amount of EA attention”
I agree that this is important for growing new movements, and I have seen EA articles discuss a sort of “multiplier effect” (if you convince one person to join a group, they will then convince other people). But none of the articles I have seen, including the one you linked, mention the possibility of increasing returns to scale. Increasing returns would arise if the cost of convincing an additional person to join decreases with the number of people already involved, for example because of changing social norms or increased name recognition.
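To make the distinction concrete, here is a sketch with made-up functional forms: a multiplier effect means each recruit later brings in others, while increasing returns to scale means the cost of recruiting the next person falls as the movement grows.

```python
# Sketch of the distinction drawn above, with made-up numbers and cost curves.
def cost_of_next_recruit_constant(members: int) -> float:
    # Constant marginal recruitment cost: compatible with a multiplier effect
    # (recruits may later recruit others), but no increasing returns to scale.
    return 100.0

def cost_of_next_recruit_increasing_returns(members: int) -> float:
    # Recruitment gets cheaper as the movement grows (name recognition,
    # shifting social norms): increasing returns to scale.
    return 100.0 / (1 + 0.01 * members)

def recruits_from_budget(budget: float, cost_fn, start_members: int = 0) -> int:
    """How many members a fixed outreach budget buys, one recruit at a time."""
    members, spent = start_members, 0.0
    while spent + cost_fn(members) <= budget:
        spent += cost_fn(members)
        members += 1
    return members - start_members

print(recruits_from_budget(10_000, cost_of_next_recruit_constant))            # 100
print(recruits_from_budget(10_000, cost_of_next_recruit_increasing_returns))  # noticeably more
# Under increasing returns the same budget goes further the larger the movement
# already is, which is the scale effect the comment is pointing at.
```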
“historically the greatest successes of philanthropy, reductions in poverty, and increased prosperity have stemmed from innovation, and many EA priorities involve research and development”
This brings up one potentially important point: in addition to the scaling effects you mentioned, another common source of increasing returns is high research and development requirements. High R&D requirements mean that the first units of output are very expensive (because in addition to the costs of production you also have to learn how to produce them) compared with subsequent units. To apply this to an EA topic: if GiveWell didn’t exist, then to do a unit of good in global health we would either have to fund less cost-effective charities (because we wouldn’t know which one was best) or pay to create GiveWell before donating to its top-recommended charities. In the second scenario, the cost of producing a unit of good within global health is very high for the first unit and significantly lower for later ones. The fact that innovation seems to be one of the more effective forms of philanthropy increases the possibility that we are in a world where increasing returns to scale are relevant to doing good. However, I’m not completely sure of my reasoning here; I may be missing something.
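A stripped-down version of that GiveWell example, with made-up numbers, just to show how a one-time research cost produces increasing returns:

```python
# One-time "research" cost (building a GiveWell-like evaluator) plus a constant
# cost per unit of good delivered through its recommendations. Numbers are
# purely illustrative.
FIXED_RESEARCH_COST = 1_000_000   # one-time cost of creating the evaluator
COST_PER_UNIT_OF_GOOD = 100       # marginal cost per unit of good once it exists

def average_cost_per_unit(units_of_good: int) -> float:
    return (FIXED_RESEARCH_COST + COST_PER_UNIT_OF_GOOD * units_of_good) / units_of_good

for units in [1, 100, 10_000, 1_000_000]:
    print(units, round(average_cost_per_unit(units), 2))
# 1 -> 1,000,100.0 ; 100 -> 10,100.0 ; 10,000 -> 200.0 ; 1,000,000 -> 101.0
# The average cost per unit of good keeps falling toward the marginal cost:
# increasing returns to scale driven entirely by the fixed R&D cost.
```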
“Experience with successes using neglectedness (which in prioritization practice does involve looking at the reasons for neglect) thus far, at least on dimensions for which feedback has yet arrived”
I think this would be a very important piece of evidence. Can you give me some detail about the successes so far?
Yes, PIT_i + u_i is supposed to be the real importance and tractability. If we knew PIT_i + u_i, then we would know a cause area’s marginal impact exactly. But instead we only know PIT_i.
Thanks for the comment. I agree that considering the marginal value of information is important. This may be another source of diminishing marginal total value (where total value = direct impact + value of information). It seems, though, that this is also subject to the same criticism I outline in the post. If other funders also know that neglected causes give more valuable information at the margin, then the link between neglectedness and marginal value will be weakened. The important step, then, is to determine whether other funders are considering the value of information when making decisions. This may vary by context.
Also, could you give me some more justification for why we would expect the value of information to be higher for neglected causes? That doesn’t seem obvious to me. I realize that you might learn more by trying new things, but it seems that what you learn would be more valuable if there were a lot of other funders that could act on the new information (so the information would be more valuable in crowded cause areas like climate change).
On your second point, I agree that when you’re deciding between causes and you’re confident both that other funders of those causes have no significant information you lack and that there are diminishing returns, then we would expect neglectedness to be a good signal of marginal impact. Maybe this is a common situation to be in for EA-type causes, but I’m not so sure. A lot of the causes on 80,000 Hours’ page are fairly mainstream (climate change, global development, nuclear security), so a lot of other smart people have thought about them. Alternatively, in cases where we can be confident that other funders are poorly informed or irrational, there’s the worry about increasing returns to scale.
For now, I don’t think any major changes in decisions should be made based on this. We don’t know enough about how difficult it would be to build cooperation or how large the gains from cooperation would be. I guess the only concrete recommendation may be to more strongly emphasize the “not being a jerk” part of effective altruism (especially because that can often be in major conflict with the “maximize impact” part). I would also argue that there’s a chance cooperation could be very important, so it’s worth researching more.