Here’s an analysis by 80k. https://80000hours.org/problem-profiles/improving-institutional-decision-making/
Was thinking a bit about how to make it real for people that the quarantine depressing the economy kills people, just as the coronavirus does.
Was thinking about finding a simple, good-enough correlation between economic depression and death, then creating a “flattening the curve” graphic that shows how many deaths we would save by stopping the economic freefall at different points. Combining this with clear narratives about recession could be quite effective.
On the other hand, I think it’s quite plausible that this particular problem will take care of itself. When people begin to experience the economic depression, will the young people who are the economic engine of the country really continue to stay home and quarantine themselves? It seems quite likely that we’ll simply become stratified for a while: young, healthy people will break quarantine, while the older and immuno-compromised stay home.
But getting the timing of this right is everything. Balancing “deaths from economic freefall” against “deaths from an overloaded medical system” is delicate; going too far in either direction results in hundreds of thousands of unnecessary deaths.
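A minimal sketch of the kind of graphic I have in mind, assuming entirely made-up numbers and made-up curve shapes (medical-system deaths falling and economic-freefall deaths rising with quarantine length), purely to illustrate the tradeoff rather than to estimate it:

```python
import numpy as np
import matplotlib.pyplot as plt

# Entirely hypothetical numbers and curve shapes, for illustration only.
weeks = np.linspace(0, 26, 200)

# Assumption: deaths from an overloaded medical system fall as quarantine lengthens.
medical_deaths = 500_000 * np.exp(-weeks / 6)

# Assumption: deaths attributable to economic freefall rise as quarantine lengthens.
economic_deaths = 400_000 * (1 - np.exp(-weeks / 10))

total = medical_deaths + economic_deaths
best = weeks[np.argmin(total)]  # quarantine length minimizing total (hypothetical) deaths

plt.plot(weeks, medical_deaths, label="Overloaded medical system")
plt.plot(weeks, economic_deaths, label="Economic freefall")
plt.plot(weeks, total, "--", label="Total")
plt.axvline(best, color="gray", linestyle=":", label=f"Minimum total (~week {best:.0f})")
plt.xlabel("Weeks of strict quarantine")
plt.ylabel("Hypothetical deaths")
plt.legend()
plt.show()
```

The point of the graphic would just be that the total-deaths curve has an interior minimum, so both “end quarantine now” and “quarantine indefinitely” lose lives relative to getting the timing right.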
Then I got to thinking about the effect of a depressed economy on x-risks from AI. Because the funding for AI safety is
1. Mostly in non-profits
and
2. Orders of magnitude smaller than funding for AI capabilities
It’s quite likely that the funding for AI safety is more inelastic in depressions than the funding for AI capabilities. This may answer the puzzle of why more EAs and rationalists aren’t speaking cogently about the tradeoffs between a depression and lives saved from the coronavirus: they have gone through this same train of thought and decided that preventing a depression is an information hazard.
I think this is actually quite a complex question. I think it’s clear that there’s always a chance of value drift, so you can never put the chance of “giving up” at 0. If the chance is high enough, it may in fact be prudent to front-load your donations, so that you can get as much out of yourself with your current values as possible.
If we take the data from here with zero grains of salt, you’re actually less likely to have value drift at 50% of income (~43.75% chance of value drift) than at 10% (~63.64% chance of value drift). There are many reasons this might be, such as consistency and justification effects, but the point is that the object-level question is complicated :).
I’ve had a sense for a while that EA is too risk averse, and should be focused more on a broader class of projects most of which it expects to fail. As part of that, I’ve been trying to collect existing arguments related to either side of this debate (in a broader sense, but especially within the EA community), to both update my own views as well as make sure I address any important arguments on either side.
I would appreciate if people could link me to other sources that are important. I’m especially interested in people making arguments for more experimentation, as I mostly found the opposite.
1. 80k’s piece on accidental harm: https://80000hours.org/articles/accidental-harm/#you-take-on-a-challenging-project-and-make-a-mistake-through-lack-of-experience-or-poor-judgment
2. How to avoid accidentally having a negative impact with your project, by Max Dalton and Jonas Vollmer: https://www.youtube.com/watch?v=RU168E9fLIM&t=519s
3. Steelmanning the case against unquantifiable interventions, by David Manheim: https://forum.effectivealtruism.org/posts/cyj8f5mWbF3hqGKjd/steelmanning-the-case-against-unquantifiable-interventions
4. EA is Vetting Constrained: https://forum.effectivealtruism.org/posts/G2Pfpkcwv3bJNF8o9/ea-is-vetting-constrained
5. How X-Risk Projects are different from Startups by Jan Kulveit:
Halffull’s Quick takes
I think catch-up growth in developing countries, based on adopting existing technologies, would have positive effects on climate change, AI risk, etc.
I’m curious about the intuitions behind this. I think developing countries with fast growth have historically had quite high pollution and carbon output. I also think that more countries joining the “developed” category could quite possibly make coordination around technological risks harder.
I think what you’re saying is plausible but I don’t know of the arguments for that case.
I’m quite excited to see an impassioned case for more of a focus on systemic change in EA.
I used to be quite excited about interventions targeting growth or innovation, but I’ve recently been more worried about accelerating technological risks. Specific things that I expect accelerated growth to affect negatively include:
Climate Change
AGI Risk
Nuclear and Biological Weapons Research
Cheaper weapons in general
Curious about your thoughts on the potential harm that could come if the growth interventions are indeed successful.
This work is excellent and highly important.
I would love to see this same setup experimented with for Grant giving.
Found elsewhere on the thread, a list of weird beliefs that Buck holds: http://shlegeris.com/2018/10/23/weirdest
I’d be curious about your own view on unquantifiable interventions, rather than just the Steelman of this particular view.
I think there’s a clear issue here with measurability bias. The fact of the matter is that the most promising opportunities will be the hardest to measure (see, for instance, investing in a startup vs. buying stock in an established business). The very fact that an opportunity is easy to measure and obvious makes it less likely to be neglected.
The proper way to evaluate new and emerging projects is to understand the landscape and do a systems-level analysis of the product, process, and team to see whether you think the ROI will be high compared to other hard-to-measure projects. This is what I attempted to do with the EA Hotel here: https://www.lesswrong.com/posts/tCHsm5ZyAca8HfJSG/the-case-for-the-ea-hotel
Tobacco taxes are Pigouvian under state-sponsored healthcare.
Hmm that’s odd, I tested both in incognito mode and they seemed to work.
You shouldn’t; it’s an Evernote public sharing link that doesn’t require sign-in. Note also that I tried to embed the image directly in my comment, but apparently the markdown for images doesn’t work in comments?
I timeboxed 30 minutes to manually transfer this to yEd. I’m fairly certain there are one or two missing edges, but here’s what I got:
Here’s the yEd file, if anyone wants to try their hand at other layout algorithms:
Small suggestion for future projects like this: I used to use Graphviz for diagramming, but have since found yEd and never looked back. Its edge-routing and placement algorithms are much better, and they can be tweaked with WYSIWYG editing after the fact.
I tend to think this is also true of any analysis that includes only one-way interactions or one-way causal mechanisms and ignores feedback loops and complex-systems analysis. This is true even if each of the parameters is estimated using probability distributions.
I upvote if I think the post is contributing to the current conversation, and strong upvote if I think the post will contribute to future and ongoing conversations (i.e., it’s a comment or post that people should see when browsing the site, aka stock vs. flow).
Occasionally, I’ll strong upvote/downvote strategically to bring a comment’s score more in line with what I think it “deserves”, trying to correct a perceived bias in other votes.
I’m sad because I really enjoyed EAGx Nordics :). In my view the main benefits of conferences are the networks and idea-sex that come out of them, and I think it did a great job at both of those. I’m curious whether you think the conference “made back its money” in terms of value to participants, which is separate from the question of counterfactual value you pose here.
This argument has the same problem as recommending that people not wear masks, though: if you go from “save lives, save lives, don’t worry about economic impacts” to “worry about economic impacts, it’s as important as quarantine”, you lose credibility.
You have to find a way to make nuance emotional and sticky enough to hit, rather than forgoing nuance as an information hazard; otherwise you lose the ability to influence at all.
This was the source of my “two curves” narrative, and I assume it would be the approach others would take if that were the reason for their reticence to discuss it.