I’ve had a sense for a while that EA is too risk-averse, and should focus more on a broader class of projects, most of which it expects to fail. As part of that, I’ve been trying to collect existing arguments on either side of this debate (broadly, but especially within the EA community), both to update my own views and to make sure I address any important arguments on either side.
I would appreciate it if people could link me to other important sources. I’m especially interested in arguments for more experimentation, as I’ve mostly found the opposite.
1. 80k’s piece on accidental harm: https://80000hours.org/articles/accidental-harm/#you-take-on-a-challenging-project-and-make-a-mistake-through-lack-of-experience-or-poor-judgment
2. How to avoid accidentally having a negative impact with your project, by Max Dalton and Jonas Vollmer: https://www.youtube.com/watch?v=RU168E9fLIM&t=519s
3. Steelmanning the case against unquantifiable interventions, By David Manheim: https://forum.effectivealtruism.org/posts/cyj8f5mWbF3hqGKjd/steelmanning-the-case-against-unquantifiable-interventions
4. EA is Vetting Constrained: https://forum.effectivealtruism.org/posts/G2Pfpkcwv3bJNF8o9/ea-is-vetting-constrained
5. How X-Risk Projects are different from Startups by Jan Kulveit:
Kelsey Piper’s “On ‘Fringe’ Ideas” makes a pro-risk argument in a certain sense (that we should be kind and tolerant to people whose ideas seem strange and wasteful).
I’m not sure if this is written up anywhere, but one simple argument you can make is that many current EA projects were risky when they were started. GiveWell featured two co-founders with no formal experience in global health evaluating global health charities, and nearly collapsed in scandal within its first year. 80,000 Hours took on an impossibly broad task with a small staff (I don’t know whether any of them had formal career advisement experience). And yet, despite various setbacks, both projects wound up prospering without doing permanent damage to the EA brand (maybe a few scrapes in the case of 80K x Earning to Give, but that seems more about where the media’s attention was directed than about what 80K really believed).
Was thinking a bit about how to make it real for people that the quarantine’s depression of the economy kills people just as the coronavirus does.
Was thinking about finding a simple, good-enough correlation between economic depression and death, then creating a “flattening the curve” graphic that shows how many deaths we would avert by stopping the economic freefall at different points. Combining this with clear narratives about recession could be quite effective.
On the other hand, I think it’s quite plausible that this particular problem will take care of itself. When people begin to experience the effects of an economic depression, will the young people who are the economic engine of the country really continue to stay home and quarantine themselves? It seems quite likely that we’ll simply become stratified for a while: young, healthy people break quarantine, while the older and immunocompromised stay home.
But getting the timing of this right is everything. Striking the right balance between “deaths from economic freefall” and “deaths from an overloaded medical system” is a balancing act; going too far in either direction results in hundreds of thousands of unnecessary deaths.
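The tradeoff above can be sketched as a toy model. To be clear, every number and functional form here is an invented placeholder, not an epidemiological or economic estimate; the point is only to show what the “two curves” framing looks like quantitatively:

```python
# Toy model of total deaths as a function of lockdown stringency s in [0, 1].
# All numbers and curve shapes are hypothetical placeholders.

def virus_deaths(s, baseline=500_000):
    # Hypothetical: virus deaths fall as stringency rises
    # (an overloaded medical system is avoided).
    return baseline * (1 - s) ** 2

def recession_deaths(s, scale=300_000):
    # Hypothetical: deaths from economic freefall rise with stringency.
    return scale * s ** 2

def total_deaths(s):
    return virus_deaths(s) + recession_deaths(s)

# Grid search for the stringency level that minimizes total deaths.
grid = [i / 100 for i in range(101)]
best = min(grid, key=total_deaths)
```

Under these made-up curves, the minimum sits strictly between “no lockdown” and “maximum lockdown,” which is the core claim: both extremes cost lives, and the argument is about where the interior optimum lies.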
Then I got to thinking about the effect of a depressed economy on x-risks from AI. Because the funding for AI safety is
1. Mostly in non-profits
2. Orders of magnitude smaller than funding for AI capabilities
It’s quite likely that funding for AI safety is more inelastic in depressions than funding for AI capabilities. This may answer the puzzle of why more EAs and rationalists aren’t speaking cogently about the tradeoffs between a depression and lives saved from Corona: they have gone through this same train of thought, and decided that talk of preventing a depression is an information hazard.
It’s been pointed out to me on LessWrong that depressions actually save lives, which makes the “two curves” narrative much harder to make.
Maybe also talk of preventing a depression is an information hazard only at the current stage of the pandemic, when all-out lockdown is the biggest priority for most of the richest countries. In a few weeks, when the epidemics in the US and Western Europe are under control and lockdown can be eased with massive testing, tracing, and isolation of cases, it will make more sense to talk freely about boosting the economy again (in the meantime, we should be calling for governments to take up the slack with stimulus packages, which they seem to be doing already).
This argument has the same problem as recommending that people not wear masks, though: if you go from “save lives, save lives, don’t worry about economic impacts” to “worry about economic impacts, it’s as important as quarantine,” you lose credibility.
You have to find a way to make nuance emotional and sticky enough to land, rather than forgoing nuance as an information hazard; otherwise you lose the ability to influence at all.
This was the source of my “two curves” narrative, and I assume it would be the approach others would take if that was the reason for their reticence to discuss it.
Is there much EA work into tail risk from GMOs ruining crops or ecosystems?
If not, why not?
It’s not on the 80k list of “other global issues”, and doesn’t come up on a quick search of Google or this forum, so I’d guess not. One reason might be that the scale isn’t large enough—it seems much harder to get existential risk from GMOs than from, say, engineered pandemics.
Yeah, I’d expect it to be a global catastrophic risk rather than existential risk.
Some of the agricultural catastrophes that the Alliance to Feed the Earth in Disasters (ALLFED) is working on solutions for include super crop disease, a bacterium that outcompetes beneficial bacteria, and super crop pests (animals), all of which could be related to genetic modification.
Something else in the vein of “things EAs and rationalists should be paying attention to in regards to Corona.”
There’s a common failure mode in large human systems: one outlier causes us to create a rule that produces a worse equilibrium. In The Personal MBA, Josh Kaufman tells a story of someone taking advantage of a company’s “buy any book you want” perk, after which the company makes it so that no one can get free books anymore.
This same pattern has happened before in the US after 9/11: we created a whole bunch of security theater that caused more suffering for everyone, and gave the government far more power and far less oversight than is safe, because we over-reacted to prevent one bad event without considering the counterfactual, invisible things we would be losing.
This will happen again with Corona: measures will be put in place that are maybe good at preventing pandemics (or worse, at making people think they’re safe from pandemics), but that create a million trivial inconveniences every day that add up to more strife than they’re worth.
These types of rules are very hard to repeal after the fact because of absence blindness. Someone needs to do the work of calculating the cost/benefit ratio BEFORE they get implemented, then build a convincing enough narrative to push back against what seem like obvious, common-sense measures given the climate of devastation.
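The kind of back-of-the-envelope cost/benefit calculation described above might look like this. Every number below is a hypothetical placeholder I made up for illustration, not a real estimate:

```python
# Hypothetical back-of-the-envelope comparison of a permanent "pandemic
# security theater" measure. Every number below is an invented placeholder.

population = 330_000_000       # people affected (roughly the US)
minutes_lost_per_day = 2       # trivial daily inconvenience per person
years_in_place = 20            # how long the rule persists

# Total person-hours lost to the measure over its lifetime.
hours_lost = population * minutes_lost_per_day / 60 * 365 * years_in_place

# Suppose the measure averts an expected 1% chance of a pandemic costing
# 10 billion person-hours of life and disruption.
expected_hours_saved = 0.01 * 10_000_000_000
```

Even with these generous made-up numbers, the diffuse daily cost dwarfs the expected benefit, which is exactly the comparison that absence blindness prevents anyone from making once the rule is in place.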