Hits- or misses-based giving

I hope that this brief piece causes a very significant course adjustment in EA.

Currently, EA predominantly[1] engages in hits-based giving:

One of our core values is our tolerance for philanthropic “risk.” … In particular, if we pursue a “hits-based” approach, we will sometimes bet on ideas that contradict conventional wisdom, contradict some expert opinion, and have little in the way of clear evidential support.

Prima facie, this makes sense. Maybe 100 entities try 100 projects that differ from conventional bednets and albendazole. 3 or 4 succeed.[2] The other 96 or 97 fail.[3] The 3 or 4 successes make up for the limited or nonexistent impact of the others.
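
As a purely illustrative sketch (the multipliers below are assumptions made for the sake of arithmetic, not figures from any funder): if each project costs 1 unit of funding, a hit returns 100 units, and a miss returns roughly 0, the portfolio still pays off:

$$EV \approx \frac{3 \times 100 + 97 \times 0}{100} = 3 \text{ units returned per unit spent.}$$

That is the prima facie case; the rest of this piece questions the assumption that a miss returns roughly 0.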

It’s okay if you think your project will probably fail, as long as the upside is big enough if you do succeed. … We’ll try to seed lots of promising new projects, and then help the best ones scale quickly.

The best projects will be supported, while those that turn out just okay will not be given further funding. In this way, many will be encouraged to initially shoot for the moon. Those who fall short will enjoy seed funding, learning experience, and the freedom to decide on their future course of action. Those that reach the moon bring extensive benefits.

Doerr, a venture capitalist, took a very different approach. He put his cash into incredibly risky startups like Google, Amazon, and Twitter. All those ventures had a very high likelihood of failure (the vast majority of startups fail, after all), and a tiny chance of becoming massive successes — which is, of course, what happened.

When tech start-ups fail, nothing happens except that investors may lose their money. When start-ups massively scale up, their early supporters reap substantial gains.

“Hits-based giving” may thus be a safe variant of “hits- or misses-based giving.”

But it’s not safe.

Imagine that one year from now, we fund 50 promising biosecurity projects, substantially reducing our funding overhang without lowering the cost-effectiveness bar! As expected, about 2 are huge hits, while 48 finish at the seed stage or slightly beyond. We manage to:

  1. Get fancy PPE for under $5 (less than a bednet!) to everyone, using complementary financing pools built on the momentum of COVID-19.

  2. Achieve almost universal signing of, and more-than-majority ratification of, the biosecurity non-proliferation treaty.

As well as:

  1. Emphasize the US public’s biosecurity fears to a busy leader of a nation that previously possessed tons of bioweapons and has recently engaged in a war.

  2. Inform Jesse Lotierr[4] that nothing prevents him from ordering a customized pathogen, available online, to his garage in Scottsdale for what he has saved from his driver-management job.

  3. Create fear of biothreats among the US public.

  4. Motivate the neighbor of a neighbor of a friend of an EA Pakistan[5] member interested in animal welfare, who liked the ea_biosecurity Insta, to research how to use his spare drums to make bioweapons. That neighbor soon joins a 500-strong terrorist organization, which has so far killed at least 27 people.

  5. Upskill 167 longtermists in biorisk preparedness, 72% of whom consider this a top career choice.

That’s unsafe, because the $5 suits do not work for anthrax and its derivatives, due to their size. Further, increased biorisk fear, in conjunction with the Russian threats, lowers US wellbeing significantly. This causes additional domestic issues: the public boycotts new government regulations mandating significantly increased washing; farmers take free-range animals back into barns, because reducing exposure to contaminated soil is expensive; and preventive quarantine indirectly causes elevated blood pressure, income inequality, and rises in unemployment. The US economy plummets. The crime rate increases, and restorative justice becomes a topic no effective decisionmaker engages with. Furthermore, the terrorist organization was missing from the treaty signing table and makes a pandemic threat while claiming that it has a vaccine.

Misses must be accounted for.

I am unsure if the expected value calculation (based on Effective Institutions Project research) should look like

$$EV = \sum_{i=1}^{n} \left( WBYg_i - WBYl_i \right)$$

where

WBYg_i denotes the WELLBYs gained in area of impact i,

WBYl_i is the WELLBYs lost in area of impact i, and

n is the number of areas of impact,[6]

or whether the second term of the sum should have a coefficient or an exponent, because decreasing risk is more or less important than gaining benefits.[7]
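
As an illustrative sketch (the coefficient c and the exponent k are placeholders I am introducing, not symbols from the calculation above), the two variants would be

$$EV = \sum_{i=1}^{n} \left( WBYg_i - c \cdot WBYl_i \right) \quad \text{or} \quad EV = \sum_{i=1}^{n} \left( WBYg_i - WBYl_i^{\,k} \right),$$

with c > 1 (or k > 1) if avoiding losses matters more than gaining benefits, and c < 1 (or k < 1) if it matters less.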

Currently, we are neglecting low-probability but enormous risk.

It should not be neglected unless we are perfectly risk-seeking.

One could thus suggest giving based on “hits, no misses,” where neutral, slightly negative, or positive impact does not constitute a “miss.”

[Image of James Bond with his martini. Top: “hits” bottom: “no misses”]

Funders deemphasize their team’s capacity to influence the impact of their grantees. For instance, it could be sufficient to fund only 3 or 4 biosecurity projects, which would then enjoy substantial attention from the grantors. If grant managers are not experts either, advising needs could be deferred to the EA community. Since the community can only do so much pro bono work, funding should normally[8] be available to senior experts, who would have the capacity for advising because less of their work is focused on their own specialized project.

[Image of James Bond with a cigarette. Top: “Focus” bottom: “What’s focus”]

Focus is the ability to independently organize one’s time for maximal impact, while being cognizant of the different expected funding amounts associated with different endeavors.

So, instead of hits or (no) misses-based giving, we could pursue focused giving.

To prevent enthusiasm for executing projects, rather than advising on them, from decreasing, we could hire executives commercially. Groups of people could be paid to specialize in thinking about diverse combinations of (seemingly) complementary and supplementary issues and their solutions. Inter-group communication would be necessary.

If no one wants to execute projects when they can think and advise instead, we would follow “Think and outsource.”

[Image of a basketball player looking as if struck by inspiration. Top: “Think” bottom: “and outsource”]

The combination of many learning human brains can effectively prevent and address risks while continuously optimizing for the most positive outcome.[9]

Because the term “outsourcing” could discourage community members from executing projects that others thought of, we should instead talk about “referring.” Referring should be associated with something great, such as finances, an impact certificate,[10] or actually outsourcing the referee’s own work to someone else so that they can relax. So, “referring” would mean offering a community member the chance to execute or advise on a project into which hundreds of hours of focused thinking have gone. “Outsourcing” would be paying people to do work that a large number of relatively untrained individuals could perform.

Refer carefully optimized ideas; outsource busywork.

We should always minimize the cost of a comparable outcome, so we should outsource wherever that is cheapest. For example, outsourcing ea_biosecurity graphic design to Jesse Lotierr or his housemate may not be comparable to outsourcing it to an EA community member in Pakistan who does not have weird housemates. But how do we know who is who?

We know who is risky by hearing about their project ideas, understanding their motivations, and continuously assessing their capacity to cause inadvertent or intentional negative impact outside of EA. For example, if someone is motivated by being able to present ideas in monologue and suggests a project to increase biosecurity awareness, then even if they never intend to cause disasters, they may gain non-EA, impulse-motivated funding to inform possibly risky actors of the unique opportunities associated with bioweapons before a robust risk-mitigation infrastructure is developed. Even if this organization is a hit in the sense that it manages to become independent of EA funding, seeding it is a miss.

Getting to know people could reduce the risks associated with thinking about, referring, and outsourcing projects.

This suggests that we should pursue the model of “Chats and hubs,” where people would be actively getting to know each other and exchanging perspectives.

[Image of a cheetah with a cub. Cheetah: “Which model do you prefer?” Cub, seen from a distance: “the one you do” Cub, close up: “chats and hubs”]

[11]If we identify a risky actor, we can either give them a less risky project, exclude them, or do nothing. We can also support them in their risky endeavors. Currently, we support people who engage in personally offensive behavior, such as weird comments, in improving. It is neither general knowledge nor group organizers’ responsibility to suggest alternatives to risky community members or to refer them to anyone, in situations other than interpersonal conflicts.

One can argue that, so far, there have been no catastrophic issues, such as a terrorist group made obsessed with bioweapons, or an AI executive realizing that they can actually gain a lot of money with much less work by making everyone slightly suffer rather than by advocating for improved algorithmic security, which is met with many bureaucrats’ non-technical questions. Such a person would not exactly face exclusion, because the Repugnant and Sadistic Conclusions are always up for debate.

[Image of a multiple-choice question contestant. Question: “What do we do with a risky EA” A: Exclude, B: Nothing, C: Fund, D: Other job]

With a risky EA, we do nothing if they are risky if and only if they are funded by EA to pursue a specific endeavor. We offer them another job, or fund their non-risky venture, if doing nothing is risky because of their willingness and competency to pursue their risky project outside of EA. We exclude them if including them would constitute a risk to the community and would not bring a greater expected benefit. We never fund a project of theirs with expected positive net risk.

Should we have a guideline on (refraining from) attracting the attention of risky persons?

For example, suppose there is an essay contest and someone posts on the EA Forum the name of the 500-strong terrorist organization, which is connected to a larger organization that might have a bot looking for mentions of the group’s name online. Or perhaps bots also follow links. We support individuals in making a high positive impact. We may also support non-EA-branded groups in advancing their objectives, for instance if they mean to effectively care for a large number of individuals. Participating in an EA Virtual Program is the best way for an individual to see which resources may be a great fit for them.

We should attract any individual who is not risky if and only if they are in EA.

[Image of a person with “No! No! No!” and “Risky individuals” next to them, and, below, the same person with “Yes! Yes! Yes! Yes!” and “Risk reduction” next to them]

We cannot have risk reduction without connecting with individuals who would otherwise be risky. If there are multiple individuals who can reduce equivalent risk, we should connect with the least risky. That is the “Least risk” model. We do not have the capacity to create connections with every actor who is non-risky only in EA, so we should go in order of the lowest risk and the greatest learning opportunity, as well as the greatest impact. Ideally, we would first run scenarios. Persons who cause direct harm but would not benefit from inclusion in EA should be supported by other entities, such as Disarmament, Demobilization, and Reintegration (DDR) programs.

It is ok if, currently, there are no individuals who are non-risky if and only if they are in EA.

Should we have any special channels and guidelines for discussing risky topics?

For example, is it possible to access messages from the Biosecurity Slack? How should we talk about possibly risky AI decisionmakers? Are e-mails ok?

We can also train people. If we need to develop the capacity to advance projects that fulfill society’s unfulfilled potential, we can cover the cost of training. Ideally, we would have widely shareable materials that develop as the community’s capacity and the needs expected at the end of training develop. Currently, we have Virtual Programs with about a 4:1 participant-to-facilitator ratio that introduce and somewhat elaborate on some EA-related ideas and select cause areas. We also have paid fellowships for relatively few, highly privileged individuals, which can be interpreted as a careful suggestion to focus on high impact. It is not the case that everyone is enthusiastically exchanging ideas on solutions, picking up from where others left off. There should be the ability to do so, so that anyone who is already interested in high impact does not have to go through the convincing-bednets-work-and-the-future-is-big journey, which can take years, just to be where others were decades, if not millennia, ago.

Training people would reduce the miss rate.

[An image of a person with “I thought that 23 years of bednets make the most impact,” subtitled: “Someone did not receive optimal training and thought that bednets were the best he could do, for 23 years. After that, he started learning.”]

This would be the “Training model.”

[Image of a person. Top: “trained for 2 years” bottom: “can mitigate risk and address beneficiary priorities” subtitled: “Cassie trained for 2 years in the Berlin Hub. Her exchanges were in Singapore, Lagos, and Toronto.”]

Best if we combine all these approaches:

  • Hits, but no misses

  • Think, refer, outsource

  • Chats and hubs

  • Least risk connections

  • Risk practices guidelines

  • Training

That is the OTTERR approach.

Seriously, you won’t even think of coming back to hits-based once you try otterr.

Also, if you do not try otterr and continue to approach risk in the hits-based way, you will get both kinds of misses: failing to notice an important risk and increasing a risk by your own actions. The same outcome will be obtained in the business-as-usual scenario.

[Shooting computer game with “two hits” text, subtitled “A player hit an opponent while she was hit once.”]

Or, no one loves otterr, and we all prefer hits-based?

  1. ^

    Personal estimate based on perceptions about funding allocation. Considers funding amounts, not the number of donations.

  2. ^

    Who would have bet that writing a few articles about the vape lobbies would kick-start a virtuous cycle of regulation and adaptation and thus decrease tobacco harm? These stones-and-sticks packages with 33 games for >3 children also had a minuscule chance of sustainably improving early childhood development in Nepal and beyond. Thousands, if not millions, of organizations are for “Improving Science and Increasing Trust.” This team enjoyed a 0.001% probability of substantially elevating scientific standards.

  3. ^

    For instance, program participants do not score significantly better in the social relationship domain than non-participants. An apiculture boost in India fails to deliver the 7% agricultural GDP growth. LGBTQ+ rights in developing countries are not influenced by the activity of this organization. Target managers’ productivity remains unchanged despite this intervention.

  4. ^

    fictional character

  5. ^

    I chose this nation by intersecting the top of the Global Terrorism Index country list with the list of nations with established EA groups. I did not use the example of Nigeria, because I had already discussed this topic with the national EA group’s organizer and was assured of safety. Thus, I have not engaged in type 1 reasoning to make this example. Further, this example should be read as a coincidence based on individuals, as elaborated further in the text (the US person turns out to be risky while the Pakistani community member does not). Thus, I am not alluding to a stereotype.

  6. ^

    For example, a protective suits project can impact the extinction risk and agriculture→manufacturing transition areas.

  7. ^

    For instance, suppose we expect that increased public biosecurity awareness in Pakistan will bring 1 million WELLBYs with 1 percent probability, due to the prevention of an epidemic spread that would reduce the population by 10%, but will at the same time increase by 1% the risk of an India-Pakistan war, which would cause a reduction of 1 million WELLBYs, mostly due to wounds to the present generation. Is the expected value zero?
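
    Under the simple linear sum sketched above, and taking the stated probabilities and magnitudes at face value, the two terms cancel:

    $$EV = 0.01 \times 1{,}000{,}000 - 0.01 \times 1{,}000{,}000 = 0 \text{ WELLBYs},$$

    which is exactly why a coefficient or exponent on the loss term would change the answer.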

  8. ^

    The EAIF expressed interest in funding

    [t]eaching buy-outs, scholarships, and top-up funding for students and PhD candidates in relevant areas to free up more time for research

    but, based on a brief review of some of the Fund’s reports, few, if any, community members were supported in freeing up some of their time to advise others.

  9. ^

    One can argue that this is already happening, but informally: more knowledgeable community members with a greater background in thinking about relevant issues advise more junior members, who enjoy executing projects and learning alongside them. However, it can also be that senior members focus on executing their own projects and junior people merely hear about it.

  10. ^

    Thinking really carefully, in a coordinated way, about projects that should be referred to impact certificates would prevent the risks associated with the uncontrolled emergence of different persons’ programs, as well as the use of certificates for popularized but underperforming projects, such as PlayPumps.

  11. ^

    No one should be objectified by default; however, higher impact and the absence of comparable alternatives may justify it.
