> I’m curious to know what you think the difference is. Both problems require greenhouse gas emissions to be halted.
I agree that both mainline and extreme scenarios are helped by reducing greenhouse gas emissions, but there are other things one can do about climate change, and the most effective actions might turn out to be things which are specific to either mainline or extreme risks. To take examples from that link:
- Developing drought-resistant crops could mitigate some of the worst effects of mainline scenarios, but might help little in extreme scenarios.
- Attempting to artificially reverse climate change may be a last resort for extreme scenarios, but may be too risky to be worthwhile for mainline scenarios.
For the avoidance of doubt, I think that my point about mainline and extreme risks appealing to different worldviews is sufficient reason to separate the analyses even if the interventions ended up looking similar.
> if you have two problems which require $100 or $200 of total funding to solve completely, if they both have $50 of funding today, they are not equally neglected
Yep, you could use the word ‘neglected’ that way, but I stand by my comment that if you do that without also modifying your definition of ‘scale’ or ‘solvability’, the three factors no longer combine into a cost-effectiveness heuristic. That is, if you formalise what you mean by neglectedness and insert it into the formula here without changing anything else, the formula will no longer cancel out to ‘good done / extra person or $’.
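For concreteness, here is the cancellation I have in mind, writing the three factors roughly as the linked framework defines them (my paraphrase, so the exact wording may differ):

```latex
% Scale x solvability x neglectedness, with the units written out:
\underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{scale}}
\times
\underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{solvability}}
\times
\underbrace{\frac{\text{\% increase in resources}}{\text{extra person or \$}}}_{\text{neglectedness}}
=
\frac{\text{good done}}{\text{extra person or \$}}
```

If neglectedness is instead measured against a problem’s total funding gap, its units no longer end in ‘% increase in resources’, and the middle terms stop cancelling.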
Thanks!
I wanted to say thanks for spelling that out; it seems that this implicitly underlies some important disagreements. By contrast, I think this addition is somewhat counterproductive: the idea of a plateau brings to mind images of sub-linear growth, but all that is required is sub-exponential growth, a much weaker claim. I think this will cause confusion.
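To illustrate the gap between the two claims with a toy example of my own (not from the piece): polynomial growth is sub-exponential yet never levels off.

```latex
% u(t) = t^2 never plateaus:
\lim_{t \to \infty} t^2 = \infty
% ...yet it is sub-exponential: for any rate r > 0,
\lim_{t \to \infty} \frac{t^2}{e^{r t}} = 0
```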
I also appreciated that the piece is consistently accurate. As I wrote this comment, there were several times when I considered writing some response, then saw that the piece already had a caveat for exactly the problem I was going to point out, or a footnote explaining what I was confused about.
A particular kind of accuracy is representing the views of others well. I don’t think the piece is always as charitable as it could be, but details like footnote 15 make it much easier to understand what exactly other people’s views are. Also, the simple absence of gross mischaracterisations of other people’s views made this piece much more useful to me than many critiques.
Here are a few thoughts on how the model or framing could be more useful:
‘Growth rate’
The concept of a ‘growth rate’ seems useful in many contexts. However, applying the concept to a long-run process locks the model of the process into the framework of an exponential curve, because only exponential curves have a meaningful long-run growth rate (as defined in this piece). The position that utility will grow like an exponential is just one of many possibilities. As such, it seems preferable to simply talk directly in terms of the shape of long-run utility.
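For instance, taking the long-run growth rate to be the limiting proportional rate of change (my reading of the definition; the piece may formalise it differently), only exponential curves get a finite nonzero value:

```latex
g \;=\; \lim_{t \to \infty} \frac{u'(t)}{u(t)}
% Exponential: u(t) = e^{ct}  =>  u'/u = c,   so g = c.
% Polynomial:  u(t) = t^k     =>  u'/u = k/t, so g = 0.
% Faster:      u(t) = e^{t^2} =>  u'/u = 2t,  so g diverges.
```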
Model decomposition
When discussing the shape of long-run utility, it might be easier to decompose total utility into population size and utility per capita. In particular, the ‘utility = log(GDP)’ model is actually ‘in a perfectly equal world, utility per capita = log(GDP per capita)’. That is, in a perfectly equal world, utility = population size × log(GDP per capita).[1]
For example, this resolves the objection about duplication: the proposed duplication doubles population size while keeping utility per capita fixed, so it is a doubling[2] of utility in a model of this form, as expected.
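Concretely, writing N for population size and y for GDP per capita (my notation): the decomposed model counts the duplication as a genuine doubling, whereas a bare log(GDP) model would count it as almost nothing.

```latex
% Decomposed model (perfectly equal world): U = N \log y.
% Duplication: N -> 2N with y unchanged, so
U' = 2N \log y = 2U
% Bare model: U = \log(\text{GDP}) = \log(N y). Duplication doubles GDP, so
U' = \log(2 N y) = U + \log 2
```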
More broadly, I suspect that two questions might be debated quite separately: the feasibility of ways to gain at-least-exponentially greater resources over time (analogous to population size, e.g. baby-universes, reversible computation[3]), and the feasibility of ways to use those resources at-least-exponentially better (analogous to utility per capita; no known proposals?).
How things relate to utility
Where I disagreed or thought the piece was less clear, it was usually because something seemed at risk of being confused for utility. For example, explosive growth in ‘the space of possible patterns of matter we can potentially explore’ is used as an argument for possible greater-than-exponential growth in utility, but the connection between these two things seems tenuous. Sharpening the argument there could make it more convincing.
More broadly, any concrete proposal for how utility per capita might be able to rise exponentially over very long timescales would do much to make the idea worth taking seriously. For example, if the Christiano reversible computation piece Max Daniel links to turns out to be accurate, that naively seems more compelling.
Switching costs
My take is that these parts don’t get at the heart of any disagreements.
It already seems fairly common that, when faced with two approaches which look optimal under different answers to intractable questions, Effective Altruism-related teams and communities take both approaches simultaneously. For example, this is ongoing at the level of cause prioritisation and in how the AI alignment community works on multiple agendas simultaneously. It seems that the true disagreements are mostly around whether or not growth interventions are sufficiently plausible to add to the portfolio, rather than whether diversification can be valuable.
The piece also ties some concerns about community health to switching costs. I particularly agree that we would not want to lose informed critics. However, similarly to the above, I don’t think this is a real point of disagreement. The piece discusses, side by side, the risk of being ‘surrounded by people who think that what I intend to do is of negligible importance’ and the risk of people ‘being reminded that their work is of negligible importance’. I think this conflates what people believe with whether they treat those around them with respect, which are largely independent problems. It seems fairly clear that we should attempt to form accurate beliefs about what is best, and simultaneously be kind and supportive to other people trying to help others using evidence and reason.
---
[1] The standard log model is surely wrong, but the point stands with any decomposition into population size multiplied by a function of GDP per capita.
[2] I think the part about creating identical copies is not the main point of the thought experiment and would be better separated out (by stipulating that a very similar but not identical population is created). However, if we are actually creating identical people, I guess we can handle how much extra moral relevance we think this creates through the population factor.
[3] I guess it might be worth making super clear that these are hypothetical examples rather than things for which I have views on whether they are real.