We truly do live in interesting times
Thanks, fixed. No, that’s not the post I’m thinking of.
Neat paper. One reservation I have (aside from whether x-risk depends on aggregate consumption or on tech/innovation, which has already been brought up) is the assumption of the world allocating resources optimally (if impatiently). I don’t know if mere underinvestment in safety would overturn the basic takeaways here, but my worry is more that a world with competing nation-states or other actors could have competitive dynamics that really change things.
Thanks! Great find. Having read through it, I gather that positive economic shocks increase X-risk but indefinite increases in the rate of economic growth decrease it. I’m not sure I trust the model, though.
Assume that a social transition is expected in 40 years and that the post-transition society has 4x as much welfare as the pre-transition society. Also assume that society will last for 1,000 more years.
Increasing the rate of economic growth by a few percent might increase our welfare pre-transition by 5% and move up the transition by 2 years.
Then the welfare gain of the economic acceleration is (0.05*38)+(3*2) ≈ 8: a 5% boost over the ~38 remaining pre-transition years, plus (4−1)=3 extra welfare units per year for the 2 years gained.
Future welfare without the acceleration is 40+(4*1000)=4040, so a gain of 8 is equivalent to reducing existential risk by about 0.2%.
Obviously the numbers are almost arbitrary but you should see the concepts at play.
If you think about a longer-run future, the tradeoff becomes very different, with existential risk being far more important.
If society lasts for 1 million more years, then the same gain of 8 is equivalent to only a 0.0002% reduction in X-risk.
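The arithmetic above can be written out as a small script (the structure and numbers come from the toy model in this comment; the function name and parameterization are just my own framing):

```python
# Toy model from the comment above: the transition arrives in 40 years,
# post-transition welfare is 4x pre-transition welfare (1 unit/year -> 4),
# and society lasts another 1,000 years in the baseline case.
def acceleration_gain(transition_year=40, post_multiplier=4,
                      horizon=1000, welfare_boost=0.05, years_moved_up=2):
    # Gain from higher pre-transition welfare, over the shortened
    # pre-transition period (38 years instead of 40)
    pre_gain = welfare_boost * (transition_year - years_moved_up)
    # Gain from reaching post-transition welfare earlier: each year moved up
    # is worth (post_multiplier - 1) extra welfare units
    early_gain = (post_multiplier - 1) * years_moved_up
    total_baseline = transition_year + post_multiplier * horizon
    gain = pre_gain + early_gain
    return gain, gain / total_baseline

gain, xrisk_equiv = acceleration_gain()
print(gain)                   # ~7.9 welfare units, i.e. roughly 8
print(f"{xrisk_equiv:.2%}")   # ~0.20% x-risk equivalent
_, xrisk_long = acceleration_gain(horizon=1_000_000)
print(f"{xrisk_long:.6%}")    # ~0.0002% over a million-year horizon
```

The long-horizon case makes the comment’s point concrete: the same welfare gain becomes a far smaller x-risk equivalent as the future grows.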
Hm. (Sorry for delay, I wasn’t checking the EA forum) the link seems to work and I haven’t moved it.
Do you not have a Microsoft account? Maybe if you’re not logged in, you won’t be able to use OneDrive. I can email a copy to you if you wish.
Hmm, I don’t think you can read the tea leaves of Open Phil’s donations like that. They donate to fill funding gaps; a large donation doesn’t mean that ADDITIONAL money will be more or less valuable to that organization. And how recently they donated might be due to how recently the organization was discovered, or some other unimportant consideration. (But if an org hasn’t received Open Phil money in many years, perhaps it is no longer effective or funding-constrained.)
Out of all the Open Phil grantees, just try to pick the recent one that seems most important or most neglected.
For criminal justice, I think this is straightforward. These causes are getting a lot of attention from liberals and Black Lives Matter, especially given the current surge in interest. So a charity which is a little less appealing to these people will probably be more neglected these days. At a glance, the American Conservative Union’s Center for Criminal Justice Reform seems like one that will be more neglected—liberals and BLM won’t want to donate to a conservative foundation. I’m not saying this is necessarily the right choice, but it’s an example of how I would think about the matter. Yes, it is very hard to fully estimate the cost-effectiveness of an organization, but if you have a good suspicion that other donors are biased in a certain way, you can go in the opposite direction to find the more neglected charities.
If you have no idea which charities might be best, you can always just pick at random, or split your donation, or donate to whichever one you like best for small reasons (e.g. you personally appreciate their research or something like that).
Shouldn’t we collect a sort of encyclopedia or manual of organizational best practices, to help EA organizations? A combination of research, and things we have learned?
It’s pretty straightforward: donate to wherever your money can do the most good at the moment. If this month it’s Org A then you donate to Org A, and if next month it’s Org B then you should switch. Cost-effectiveness rankings can change. This is not about ecosystems in particular. Sometimes we gain new information about charity effectiveness, sometimes a charity fills its funding needs and no longer needs more money.
Glancing at that Open Phil page, it looks like they are saying that they don’t only look at how much good an organization is directly doing, but they also look at how effective they are when considering the more general needs of their sector of the nonprofit industry.
I don’t know if it’s common that Open Phil or anyone correctly identifies an ecosystem consideration that substantially changes the cost-effectiveness of a particular charity, but if you have identified such a consideration, of course you shouldn’t simply omit it from your analysis. If it means the charity does more or less good, of course you should pay attention to it.
Here’s my cost-benefit analysis. (I also posted it to my shortform, but I don’t see a way to link directly to a shortform post.)
I just noticed this post and the ensuing discussion. I want to share a model I recently made which seeks to answer the question: are these protests beneficial or harmful?
The expected deaths caused by COVID spread outnumber the expected lives saved from reducing police brutality by a factor of 16.
If we adjust for QALYs (COVID mainly kills older folks), the COVID mortality is still worse than the reduction in police killings, though only by a factor of 5.
When I estimate a general positive impact of these protests upon America’s political system—specifically, that they’ll increase Democratic voteshare this November—it seems that the protests are neutral as far as American citizens are concerned, but (more importantly of course) positive when we include foreigners and animals.
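In rough numbers, the QALY adjustment can be sketched like this (the per-death QALY figures here are my own illustrative assumptions for exposition, not the values from the model):

```python
# Why a 16:1 ratio in deaths can shrink to ~5:1 in QALYs: COVID deaths skew
# old (fewer life-years lost per death) while police-killing victims skew
# young (more life-years lost per death). The per-death QALY numbers below
# are assumed for illustration only.
deaths_ratio = 16            # COVID deaths per averted police killing
qalys_per_covid_death = 10   # assumed: older victims
qalys_per_police_death = 32  # assumed: younger victims

qaly_ratio = deaths_ratio * qalys_per_covid_death / qalys_per_police_death
print(qaly_ratio)  # 5.0
```

Any pair of per-death QALY assumptions in a roughly 1:3 ratio produces the same compression from a factor of 16 to a factor of ~5.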
I want to say upfront that this doesn’t mean I endorse the protests; I still feel a bit negative about them due to the prisoner’s dilemma at play (the same point made in Larks’ highly upvoted comment in the other thread, which I also arrived at independently).
Separately, I think someone should think deeply about how EAs should react to the possibility of social change – moments when we are more likely to reach a tipping point leading to a very impactful event (or, more pessimistically, when things could escalate into catastrophe).
In my head I am playing with the idea of a network/organization that could loosely, informally represent the general EA community and make some kind of public statement, like an open letter or petition, on our general behalf. It would be newsworthy and send a strong signal to policymakers, organizations etc.
Of course it would have to be carried out to high epistemic standards and with caution that we don’t go making political statements willy nilly or against the views of significant numbers of EAs. But it could be very valuable if used responsibly.
(C) The social cost of carbon is usually computed from an IAM, a practice which has been described as follows:
“IAMs can be misleading – and are inappropriate – as guides for policy, and yet they have been used by the government to estimate the social cost of carbon (SCC) and evaluate tax and abatement policies.” [Pindyck, 2017, The Use and Misuse of Models for Climate Policy]
You can also use economists’ subjective estimates ( https://policyintegrity.org/files/publications/ExpertConsensusReport.pdf ) or model cross-validation ( https://www.rff.org/publications/working-papers/the-gdp-temperature-relationship-implications-for-climate-change-damages/ ) and the results are not dissimilar to the IAMs by Nordhaus and Howard & Sterner. (It’s 2-10% of GWP for about three degrees of warming regardless.)
In any case I think that picking a threshold (based on what exactly??) and doing whatever it takes to get there will have more problems than IAMs do.
I see that you use GWWC’s estimate of tonnes of CO2 per life saved. I critiqued GWWC’s approach in this previous post.
Nice, that looks like a noteworthy post. I will look at it in more detail (that would take a while). Until then I’m revising from 258,000 tonnes down to 40,000 (roughly the geometric mean of their estimate and your 15,620, biased a little towards yours).
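For transparency, here is the revision arithmetic (the weighting scheme is just one way to formalize “biased a little towards you”; the 2/3 weight is my own choice):

```python
import math

# Combining GWWC's 258,000 tCO2-per-life-saved estimate with the critique's
# 15,620, as described above.
gwwc = 258_000
critique = 15_620

# Plain geometric mean of the two estimates
geo_mean = math.sqrt(gwwc * critique)
print(round(geo_mean))  # ~63,500 tCO2 per life saved

# A weighted geometric mean with 2/3 of the weight on the critique's figure
# lands near the 40,000 used above
weighted = gwwc ** (1 / 3) * critique ** (2 / 3)
print(round(weighted))  # ~39,800, close to 40,000
```

Geometric rather than arithmetic averaging is the natural choice here because the two estimates differ by more than an order of magnitude.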
“40% of Earth’s population lives in the tropics, with 50% projected by 2050 (State of the Tropics 2014) so we estimate 6 billion people affected (climate impacts will last for multiple generations).”—The world population is expected to be ~10 billion by 2050, so 50% would be 5 billion. How are you accounting for multiple generations?
I figured many people will be wealthy and industrialized enough to generally avoid serious direct impacts, so it wasn’t an estimate of how many people will live in warming tropical conditions. But looking at it now, I think that’s the wrong way to estimate it because of the ambiguity that you raise. I’m switching to all people potentially affected (12 billion), with a lower average QALY loss.
“We discount this to 2 billion to account for the meat eater problem”—What is the meat eater problem?
Described in the “short-run, robust welfare” section of “issue weight metrics”: it’s the concern that increases in wealth for middle-income consumers may be net neutral or harmful in the short run because they increase meat consumption.
“If each of them suffers −1 QALY over their lifetime from climate change on average”—why did you choose −1 QALY?
Subjective guess. Do you think it is too high or too low? Severely too high, severely too low?
Why did you choose to multiply 550 by ~3.9?
Arbitrary guess based on the quoted factors. Do you feel that is too low or too high?
I agree that this is a plausible possibility, but not one which I’d like to have to rely on.
I’m not saying to rely on it. I’m saying your estimates of climate damages cannot rely on geoengineering not happening. The chance that we see “full” geoengineering by 2100 (restoring the globe to optimal or preindustrial temperature levels) is, hmm 25%? Higher probability for less ambitious measures.
If we were in the 1980s, it would be improper to write a model which assumed that cheap renewable energy would never be developed.
Based on these changes I’ve increased the weight of air pollution from 15.2 to 16. (It’s not much because most of the weight comes from the long run damage, not the short run robust impacts. I’ve increased short run impact from 2.15 million QALYs to 3 million.)
I already did that: “Review of Climate Cost-Effectiveness Analyses”. I would love to get your feedback on that post.
Yes I will look into that and update things accordingly.
I find this whole genre of post tedious and not very useful. If you think climate change is a good cause area, just write an actual cause prioritization analysis directly comparing it to other cause areas, and show how it’s better! If that’s beyond your reach, you can take an existing one and tweak it. This reads like academic turf warring, a demand that your cause area should get more prestige, instead of a serious attempt to help us decide which cause areas are actually most important.
1) There is a lack of evidence for the more severe impacts of climate change, rather than evidence that the impacts will not be severe.
OK, but I don’t know if anyone here was previously assuming that the impacts will definitely not be severe. The EA community has long recognized the risks of more severe impact. So this doesn’t seem like a point that challenges what we currently believe.
One of the central ideas in effective altruism is that some interventions are orders of magnitude more effective than others. There remain huge uncertainties and unknowns which make any attempt to compute the cost effectiveness of climate change extremely challenging. However, the estimates which have been completed so far don’t make a compelling case that mitigating climate change is actually order(s) of magnitude less effective compared to global health interventions, with many of the remaining uncertainties making it very plausible that climate change interventions are indeed much more effective.
I haven’t read those previous posts you’ve written, but the burden of argument is on showing that a cause is effective, not proving that it’s ineffective. We have many causes to choose from, and the Optimizer’s Curse means we must focus on ones where we have pretty reliable arguments. Merely speculating “what if climate change is worse than the best evidence suggests???” does nothing to show that we’ve neglected it. It just shows that further cause prioritization analysis could be warranted.
The EA importance, tractability, neglectedness (ITN) framework discounts climate change because it is not deemed to be neglected (e.g. scoring 2/12 on 80K Hours). I have previously disagreed with this position because it ignores whether the current level of action on climate change is anywhere close to what is actually required to solve the problem (it’s not).
This criticism doesn’t make sense to me. The mere fact that a problem will be unsolved doesn’t mean it’s more important for us to work on it. What matters is how much we can actually accomplish by trying to solve it.
The 80K Hours problem profile makes no mention of the concept of a carbon budget—the amount of carbon which we can emit before we are committed to a particular level of warming.
That’s fine. Marginal/social cost of carbon is the superior way to think about the problem.
4) EA often ignores or downplays the impact of mainstream climate change, focusing on the tail risk instead
I’ve seen EAs talk about ‘mainstream’ costs many times. GWWC’s early analysis on climate change did this in detail. In any case, my estimate of the long-term economic costs of climate change (detailed writeup in Candidate Scoring System: http://bit.ly/ea-css ) aggregates over the various scenarios.
5) EA appears to dismiss climate change because it is not an x-risk
This phrasing suggests to me that you didn’t read, or perhaps don’t care, what is actually in many of the links that you’re citing. We do not believe that climate change is irrelevant because it’s not an x-risk. We do, however, believe that the arguments in favor of mitigating x-risks do not apply to climate change. So that provides one reason to prioritize x-risks over climate change. This is clearly a correct conclusion and you haven’t provided arguments to the contrary.
6) EA is in danger of making itself a niche cause by loudly focusing on topics like x-risk
If you think that people will like EA more when they see us addressing climate change, why don’t you highlight all the examples of EAs actually addressing climate change (there are many examples) instead of writing (yet another, we’ve had many) post making the accusation that we neglect it?
7) EA tries to quantify problems using simple models, leading to undervaluing of action on climate change
Other problems have complex, far-reaching negative consequences too, so it’s not obvious that simplistic modeling leads to an under-prioritization of climate change. It is very easy to think of analogous secondary effects for things like poverty.
In any case, estimating the damages of climate change upon the human economy has already been addressed by multiple economic meta-analyses. Estimating the short- and medium-term deaths has been done by GWWC. Estimating the impacts on wildlife is generally sidelined because we have no idea if they are net positive or net negative for wild animal welfare.
Global health interventions have a climate footprint, which I’ve never seen accounted for in EA cost effectiveness calculations.
I briefly addressed it in Candidate Scoring System, and determined that it was very small. If you look at CO2 emissions per person and compare it to the social cost of carbon, you can see that it’s not much for a person in the United States, let alone for people in (much-lower-emissions) developing countries.
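The comparison can be done on the back of an envelope (the per-capita emissions and the $50/tCO2 social cost below are rough illustrative figures I am assuming, not numbers from Candidate Scoring System):

```python
# Rough version of the claim above: a person's annual climate externality is
# (per-capita emissions) x (social cost of carbon). All inputs are assumed
# round numbers for illustration.
scc_per_tonne = 50          # USD per tCO2, a commonly cited central estimate
us_emissions = 16           # tCO2 per person per year, rough US figure
low_income_emissions = 0.3  # tCO2 per person per year, rough low-income-country figure

print(us_emissions * scc_per_tonne)          # ~$800/yr for a US resident
print(low_income_emissions * scc_per_tonne)  # ~$15/yr for a low-income-country resident
# Against costs per life saved on the order of thousands of dollars for top
# global health charities, a ~$15/yr climate externality is a small adjustment.
```

The point survives even if the social cost of carbon is several times higher, since the per-capita emissions of aid recipients are so low.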
Climate change is a problem which is getting worse with time and is expected to persist for centuries. Limiting warming to a certain level gets harder with every year that action is not taken. Many of the causes compared by EA don’t have the same property. For example, if we fail to treat malaria for another ten years, that won’t commit humanity to live with malaria for centuries to come. However, within less than a decade, limiting warming to 1.5C will become impossible.
Climate change being expected to persist for centuries is conditional upon the absence of major geoengineering. But we could quite plausibly see that in the later 21st century or anytime in the 22nd century.
Failing to limit warming to a certain level is a poor way of defining the problem. If we can’t stay under 1.5C, we might stay under 2.0C, which is not that much worse. The right way to frame the problem is to estimate how much accumulated damage will be caused by some additional GHGs hanging around the atmosphere for, probably, a century or more. That is indeed a long term cost.
But other cause areas also have major long-run impacts. There is plenty of evidence and arguments for long-run benefits of poverty relief, health improvements and economic growth.
10) Case study: Climate is visibly absent or downplayed within some key EA publications and initiatives
Pick another cause area that’s currently highlighted, compare it to climate change, and show how climate change is a more effective cause area.
1. But WHY do you believe that the costs outweigh benefits? Again—the paper looking at Ethiopia estimated that benefits of lower prices outweighed costs on average. This seems intuitively sensible, too—if we sell subsidized low-priced goods, it should increase their wealth in the short run at least.
2. It could be—and there are also many other ways to address vulnerability to spikes in global commodity prices, as described in the last paper I linked. Of course, none of these solutions is perfect and simple, otherwise the problems would not exist anymore. I think we should look at the likely consequences within current regimes rather than assuming that countries/societies will get much better at responding to problems.
3. But you see how it’s a tradeoff, right? People can specialize in farming or they can specialize in other trades, not both. There can be different people doing different jobs, but every person who becomes a farmer is neglecting the possibility of specializing in something else. If a country has an industrial policy it will have to make a tough choice of what industries it wants to specialize in.
I am adding these considerations to Candidate Scoring System, which is more of an encyclopedia with all kinds of policy issues, but for the Civic Handbook I think I will leave the matter out as it does not have the kind of clear argumentative support necessary to build an Effective Altruist consensus.
Regarding food aid, you showed a couple of papers discussing negative impacts from ‘food dumping’ (subsidized agricultural exports from wealthy countries to poor ones), a topic that you have studied in detail.
I did not read all of the text, but they mainly say: the foreign impact is that it displaces farmers. We send cheap exports, which are in fact cheaper than what a free market would produce, for a combination of reasons but mainly because of our agricultural subsidies. This puts farmers in the aid-receiving country out of work because they cannot compete.
My immediate objection is, why believe that the costs to farmers outweigh the benefits to consumers? If food is lower-priced then that should help many people. I found this paper arguing that the consumer benefits outweighed the hit to farming, on average, for households at all income levels in Ethiopia. It was not cited by either of the papers listed above.
The 1st article also says that dependence on food imports creates vulnerability to price spikes, citing this paper. But local food sources are volatile too, no? Local weather patterns, political instability, plant diseases, etc can create local price spikes. I imagine this would be worse than volatility in global commodity prices. Now, you can have imports step up to cover local price spikes, but you can also have local production step up to cover global price spikes. The former may be easier, but overall I just don’t see good reason to believe that dumping increases price volatility.
There is then the long-run question of whether a country should develop its agricultural sector vs other sectors. The 1st paper touches on this. I will have to think/read more on this, or maybe you can better answer it.