Do EAs underestimate opportunities to create many small benefits?
Disclaimer: I originally wrote this for the EA Facebook group since it’s a quick and unpolished thought, but I’m posting it here instead because it got long, I think the discussion is better here, and I like having an easy way to link to the post afterwards. Let me know if you think it would be better suited to the EA Facebook group.
-
Right now, EAs focus primarily on global poverty, animal welfare, reducing existential risk, and meta activities to improve the first three. However, some EAs are interested in looking for more. What other important causes might we be overlooking?
One idea I’ve been thinking about recently is that opportunities to produce many small benefits may be very neglected, and may offer a “fifth focus area”.
-
Many Small Benefits vs. One Large Benefit
I’m curious whether people think EAs systematically underestimate the impact of creating a tiny benefit for a very large number of people rather than a large benefit for a few.
An example of creating a large benefit for a relatively small group of people is to give a $1K cash transfer to a single Kenyan. An example of creating a small benefit for a large group of people would be to improve the speed of the internet across the entire United States or to spend time improving an important open source project like Ruby on Rails. It’s possible that reducing existential risk by a tiny amount could be considered making a small benefit for many people, though I don’t really think of it as such.
Of course, it’s also certainly possible to create large benefits for a large group of people, such as by ending factory farming.
-
Why might it be underestimated?
When you’re at a crosswalk and a bus approaches, you can either wait for the bus to pass and then cross, or cross and make the bus wait for you. If you wait, you save one minute of time. If you make the bus wait, each person on that bus waits for one minute of time. If the bus has sixty people in it, that’s an hour that was just spent waiting. That sounds like a lot when framed in those terms, but it’s not something we ever think about when crossing the street. Is making the bus wait for you really that selfish?
Perhaps not. Perhaps there are still good reasons to let the bus pass instead of making it wait for you; mainly, that’s usually what the bus driver expects. However, the example gets at the most difficult part of utilitarianism: understanding that giving +1 unit of utility to each of 10,000 people is more important than giving 9,000 units of utility to one person (see “Torture vs. Dust Specks”). The “many small benefits” approach is very unintuitive. Even EAs have biases and find this unintuitive, so I would expect many people (including myself) to underestimate opportunities to create many small benefits.
Furthermore, impacting many people by a small amount doesn’t fit well into the standard non-profit framework of delivering clear, understandable value to a clearly defined and well-understood population.
It is difficult to measure the impact of many small benefits. How much benefit does an hour of time spent developing Ruby on Rails provide to the Ruby on Rails project? How much value does Ruby on Rails provide programmers and people making technology companies? How much value do those technology companies provide people around the world? How does the sum of all that value compare to giving $100 (an hour of a typical programmer’s salary) as a cash transfer to a Kenyan?
Lastly, the “many small benefits” approach seems to be the justification for a large number of non-EA activities, such as the claim that we should improve “arts and culture”. EAs may pattern-match these bad arguments for non-EA activities onto people arguing for “many small benefits” from an EA perspective, and it could be hard to figure out which “many small benefits” approaches count among the best opportunities to help the world.
Measurement difficulty and the poor fit with a traditional charity framework may be genuine reasons to favor the “large benefits” approach, but I don’t think they’re enough to make the “many small benefits” approach not worth thinking about. We should aim to study whether some “many small benefits” approaches might impact the world more positively overall than our current EA activities.
-
What are some good examples of “many small benefits” that EAs could pursue?
Some examples I could think of that seem worth researching, say as part of the Open Philanthropy Project:
* Contributing to good open source projects could make it much easier to create and improve companies and/or save hundreds of thousands of hours of developer time. The entire security of the internet could be at stake, and I’m not the only one who thinks open source may be uniquely neglected in our current for-profit and non-profit funding environment.
* Reducing the amount of traffic through better transportation infrastructure could save hundreds of thousands of hours in lost commuting time.
* Some changes to how VC funding works could improve how accurately venture capital is allocated, improving the allocation of trillions of dollars. Other high-leverage changes may be possible in other areas of finance as well.
...Of course, some of these may already receive significant funding, and many of them may not, on further reflection, actually be worth it compared to our existing focus areas. However, I think some of these are worth thinking about more, since they could be important, tractable, and neglected.
I think this is great. I actually wanted to write about how the fifth cause area should be working on providing Global Public Goods.
I also posted this before, but I think it fits here: one idea I had a while ago is doing research into optimal reading. I did a quick literature review some time ago trying to find the ideal font size for fast reading, but couldn’t find any definitive data. Most of what’s written on speed reading seems to be completely unscientific (e.g. flashing words one by one on the screen).
An app could measure how far away you are from the screen with a webcam and then collect data on how fast you’re reading. This app could then automatically adjust the font size etc.
The idea here is not so much the app itself, but that so many people read for multiple hours every day. Making everyone read even 0.1% faster (i.e., more effectively) would add up to a lot of benefit.
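To make the mechanism concrete, here is a minimal sketch of how such an app might estimate viewer distance from a webcam frame and scale the font size accordingly. It assumes OpenCV’s bundled Haar cascade face detector, and the calibration constants (reference face width, reference distance, base font size) are placeholder assumptions rather than measured values:

```python
# Minimal sketch: estimate viewer distance from apparent face width and
# suggest a font size. The calibration constants below are assumptions.
import cv2

REFERENCE_FACE_WIDTH_PX = 200   # face width in pixels at the reference distance (assumed)
REFERENCE_DISTANCE_CM = 50      # distance at which that width was observed (assumed)
BASE_FONT_PT = 12               # comfortable font size at the reference distance (assumed)

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def suggest_font_size(frame):
    """Return a font size scaled to the estimated viewer distance, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Treat the widest detected face as the viewer.
    _, _, width, _ = max(faces, key=lambda f: f[2])
    # Pinhole-camera approximation: apparent size is inversely proportional to distance.
    estimated_distance_cm = REFERENCE_DISTANCE_CM * REFERENCE_FACE_WIDTH_PX / width
    return BASE_FONT_PT * estimated_distance_cm / REFERENCE_DISTANCE_CM

# Example usage: grab one webcam frame and print the suggested font size.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    print("Suggested font size:", suggest_font_size(frame))
```

Whether this actually improves reading speed is exactly the empirical question the research would need to answer; the sketch only shows that the measurement loop itself is technically easy to build.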
FYI one of the reasons global catastrophic risk reduction may be undersupplied is that it’s a ‘global public good’.
One reason we might have this bias is that it seems harder to have strong evidence of small effects, because they’re harder to measure.
Is this actually counter-intuitive? Most people seem to have quite a strong intuition that very very broad things like ‘speeding up technological progress/discoveries’, ‘general medical research’, ‘good policy’ and so on are very good things to work on or contribute to, and they mostly work by the many small benefits dynamic. In fact, I think if you asked non-EAs with an amenable mindset what the best ways of improving the world are, they’d come up with many more broad and relatively un-targeted things than narrow and relatively targeted things.
I think this is actually a good instinct, and the description of ‘global public goods’ sums up why I think it is a good instinct. It only happens to be wrong, in my view, because the multipliers on human welfare you get just by transferring from rich countries to poor countries are currently so high; an investment in (e.g.) American internet infrastructure that delivers 50x returns basically straight up loses to GiveDirectly in my book*, and I think GiveDirectly is some way from the most effective intervention if you’re willing to be risk-neutral.
Also, a trivial observation:
“When you’re at a crosswalk and a bus approaches, you can either wait for the bus to pass and then cross, or cross and make the bus wait for you. If you wait, you save one minute of time. If you make the bus wait, each person on that bus waits for one minute of time. If the bus has sixty people in it, that’s an hour that was just spent waiting. That sounds like a lot when framed in those terms, but it’s not something we ever think about when crossing the street.”
I actually routinely do think about this and act on it, mostly by almost never pressing the traffic light button unless not doing so would cause me to wait indefinitely because the road is too busy. It’s normally just not worth holding up the whole queue of traffic if a gap will present itself shortly, and I’ve considered this pretty obvious for as long as I can remember.
*I’m over-simplifying for effect here, but I have seen cases within EA where this point seems to just get missed or underrated. 100x payoffs are really hard to find. They should be hard to find. You should be very surprised if you stumble across one.
Proposal: kill the TSA. It makes people miserable, eats time, reduces travel, and trains people to accept government intrusion. It’s relatively low-hanging fruit because the Baptist part of the Baptist-and-bootlegger coalition (anti-terrorism) is so weak and the bootleggers are so cartoonishly evil. It is a matter of financial competition with the TSA union and equipment manufacturers, neither of which has that much money.
Benefits of success include the object-level ones, building up lobbying expertise for something harder, and a visible reversal of the reduction of liberty inside the US.
Devil’s advocacy for this proposal:
Increasing air travel means increasing carbon emissions.
It’s good for people to accept government intrusion, because a light global surveillance state (PRISM type stuff) will be desirable in the face of x-risks brought by nanotechnology, biotechnology, or other future developments. (We may as well leverage terrorist threats to get this light global surveillance state implemented, since the nanotech/biotech justifications are too weird to ever be accepted by the mainstream.)
If a successful terrorist attack involving planes is later conducted, the EA brand will suffer mightily (people are really irrational when it comes to terrorist attacks). This is related to my theory for why Obama is so drone-happy: if there is a major terrorist attack on US soil, power will go to the Republicans in the next election, so Democrats have a stronger incentive to actually prevent terrorist attacks. Obama also has less of a disincentive to take harsh measures against potential terrorists since liberals will go easy on him; see also Nixon goes to China. (I’m curious how far this argument can be generalized—does one always want to elect the politician who states the opposite of the foreign policy one wants to see implemented?) (Note: I don’t think protecting our brand should be paramount over all other considerations, but I do think we should risk reputational capital wisely.)
Aligning the EA movement politically runs the risk of alienating potential EAs who are polarized against whoever we align ourselves with.
Hmm... deworming sort of counts as a “small* benefit for a large number of people.” I am definitely sympathetic to the idea that small benefits to a large number of people can accumulate. The proper analogy in epidemiology is the Rose theorem:
https://en.wikipedia.org/wiki/Geoffrey_Rose_(epidemiologist)#.E2.80.98Population_strategy.E2.80.99
*For a given value of small, of course.
I suspect that it will be hard to find neglected opportunities in this space, but we should look for situations that fall into some Tragedy of the Commons–like category where a large group of people suffers slightly but not enough to cause them to organize:
If you want to maximize your effect by spreading your resources as thinly as possible over the largest possible number of people, then it is intuitive to focus on those for whom small improvements have the greatest marginal effect, e.g. (or especially) beings in great poverty. But this space is already subject to research. Human poverty surely is, and we’re already aware of neglected areas like wild animal suffering (WAS), so that category is not neglected within EA. The reason that in human poverty “grants” (of all forms) take certain sizes is that research indicates (I’m drawing on Poor Economics here) that there are sometimes poverty traps, which can be overcome with certain amounts, earning us leverage effects. So spreading those grants more thinly would often not be a good idea.
If we want to avoid this space because we don’t consider it neglected or because it is much harder to reach millions of the poorest of the poor than millions of somewhat less poor people on the Internet, then we could focus, for example, on improving online resources, open source software, etc. But here it is much harder to find any neglected opportunities, because if something really improves someone’s quality of life in this space, then that person is able to pay for it, so countless entrepreneurs are already looking for these opportunities, and even free and open source software and things like HTTP/2 (or rather SPDY) are being developed by for-profits or their employees on company time.
What’s left (but I may be forgetting more categories here) are said Tragedy of the Commons–like situations. Groups like the EFF are already working on many of those, but there may be more. (This addon falls into that category for me. ^^)
The most common way of helping many people by a small amount is through business. Whether it’s one you’ve started or one you work for, in a large business you can make many, many people’s lives slightly (at least) better by improving an existing service or product, or by inventing a new, superior one. Businesses can scale up far more than charities. And if you create a social-impact-focused business, you can also significantly improve each of your clients’ lives. Businesses have improved the world far more than charities have, through economic growth and technological progress.
Great summary of why I hate it when people walk across the road instead of running, or when people space themselves out instead of clustering so that no cars can get by.
Peter, this is a good idea!
This relates to something I’ve been thinking about for a while, namely bringing about positive small behavior changes. Imagine the positive impact we could have if we nudge people into better behaviors.
The bias you describe points toward valuing more highly the impact of Development Media International as a direct-action charity.
It also suggests that, in our meta-activities as EAs, we undervalue the benefits of spreading small positive behaviors within the movement itself as a capacity-building strategy, and of spreading small positive behavior changes within society more broadly. I will think more about this, and it updates me toward getting Intentional Insights more oriented toward spreading small positive behavior changes in promoting effective giving. Thanks!