A Sequence Against Strong Longtermism
Seven months ago I posted A Case Against Strong Longtermism on the forum, and it caused a bit of a stir. I promised to respond to all the unaddressed comments, and as a result, have produced a four-part “sequence” of sorts.
The first and last posts, A Case Against Strong Longtermism and The Poverty of Longtermism, deal with longtermism specifically, while the middle two posts, Proving Too Much and The Credence Assumption, deal with Bayesian epistemology, the iceberg-like structure keeping longtermism afloat.
The subsections are listed below and don’t need to be read in any particular order. Special thanks to Max Daniel, Jack Malde, Elliott Thornley, Owen Cotton-Barratt, and Mauricio, without whose criticism this sequence would not exist.
Now time to move on to other subjects...
Wait, are you assuming that physics is continuous? If so, isn’t this a rejection of modern physics? If not, how do you respond to the objection that there is a limited number of possible configurations for the atoms in our controllable universe to be in?
I’m worried that this will come across as a bravery debate, but are you familiar with the phrase “anything is possible when you lie?”
I don’t find it particularly problematic that sufficient numerical fiddling (or verbal fiddling for that matter) can produce arbitrary conclusions.
Your critique reminds me of people who argue that consequentialism can be used to justify horrific conclusions, as if consequentialism had an unusually bad track record, or as if other common ethical systems had never justified any terrible actions.
I don’t think there’s a consensus on whether physics is continuous or discrete, but I expect that what matters ethically is describable in discrete terms. Things like wavefunctions (or the motions of physical objects) could depend continuously on time or space. I don’t think we know that there are finitely many configurations of a finite set of atoms, but maybe there are only finitely many functionally distinct ones, and the rest are effectively equivalent.
I think we’ve also probed scales smaller than the Planck length by observing gamma-ray bursts, but I might be misinterpreting those results, and they were specific claims about specific theories of quantum gravity.
Also, a good Bayesian should grant the hypothesis of continuity nonzero credence.
FWIW, though, I don’t think dealing with infinitely many possibilities is as much of a problem as it’s made out to be here. We can use (mixed-)continuous measures, and we can decide what resolutions are relevant and useful as a practical matter.
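To make the point about (mixed-)continuous measures concrete, here is a minimal sketch (the distribution and all numbers are hypothetical, chosen only for illustration): an expected value under a measure that mixes a point mass with a continuous density is still a perfectly ordinary, finite computation.

```python
# Toy illustration: expected value under a "mixed" measure --
# a point mass at 0 (probability 0.3) plus a continuous uniform
# density on [0, 10] (probability 0.7). All numbers hypothetical.

def expected_value(point_p, point_x, density, lo, hi, steps=100_000):
    """E[X] = p * x0 + (1 - p) * integral of x * density(x) dx,
    with the integral approximated by a midpoint Riemann sum."""
    dx = (hi - lo) / steps
    integral = sum(
        (lo + (i + 0.5) * dx) * density(lo + (i + 0.5) * dx) * dx
        for i in range(steps)
    )
    return point_p * point_x + (1 - point_p) * integral

uniform = lambda x: 1 / 10  # uniform density on [0, 10]
ev = expected_value(0.3, 0.0, uniform, 0.0, 10.0)
# Analytically: 0.3 * 0 + 0.7 * 5 = 3.5
```

The `steps` parameter is exactly the "choose a resolution as a practical matter" move: a coarser grid trades accuracy for speed without changing the underlying continuous model.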
I agree that a good Bayesian should grant the hypothesis of continuity nonzero credence, as well as other ways the universe could be infinite. I think the critique would be more compelling if it were framed as: “there’s a small chance the universe is infinite, Bayesian consequentialism by default will incorporate a small probability of infinity, and the decision theory can potentially blow up under those constraints.”
Then we see that this is a special unresolved case of infinity (which is likely an issue with many other decision theories) rather than a claim that the universe is by its very nature infinitely non-measurable and thus not subject to evaluation, which is quite an intuitively extreme stance!
(The specialness of this critique makes it clearer where the burden of proof is, akin to “our modest epistemology forces us to believe that the stars do not exist”.)
Fwiw, while I have some sympathy with the arguments you advance (and haven’t managed to form an opinion on much of it), I find the level of snark in your writing off-putting.
For what it’s worth, I have the opposite reaction: given the choice between the OP having higher-quality arguments and less snark, I would strongly prefer higher-quality arguments.
It’s not a trade-off!
You can imagine that the OP has limited opportunity, interest, or time to improve, and can only focus on one thing. In that case I’d strongly encourage focusing on higher-quality arguments over better style, as I usually find the lack of the former much more off-putting than the presence of the latter.
I find it hard to believe that leaving out snarky comments is a drain on anyone’s productivity, let alone that the movement should encourage norms where we assume our value is so high that the risks of snark-deprivation outweigh the benefits.
I don’t think the claim from Linch here is that not bothering to edit out snark has led to high value, rather that if a piece of work is flawed both in the level of snark and the poor quality of argument, the latter is more important to fix.
Yes, this is what I mean. I was unsure how to be diplomatic about it.
I think that some of your anti-expected-value beef can be addressed by considering stochastic dominance as a backup decision theory in cases where expected value fails.
For instance, maybe I think that a donation to ALLFED in expectation leads to more lives saved than a donation to a GiveWell charity. But you could point out that the expected value is undefined, because maybe the future contains infinite amounts of both flourishing and suffering. Then donating to ALLFED can still be the superior option if I think that it’s stochastically dominant.
There are probably also tweaks to make to stochastic dominance, e.g., if you have two “games”,
Game 1: Get X expected value in the next K years, then play game 3
Game 2: Get Y expected value in the next K years, then play game 3
Game 3: Some Pasadena-like game with undefined value
then one could also have a principle where Game 1 is preferable to Game 2 if X > Y, and this also sidesteps some more expected value problems.
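To make the stochastic dominance comparison above concrete, here is a minimal sketch. The two payoff distributions are entirely made up for illustration, not real estimates for ALLFED or GiveWell. It checks first-order stochastic dominance: option A dominates option B if A assigns at least as much probability as B to every outcome threshold, and strictly more to at least one.

```python
# Sketch of first-order stochastic dominance between two discrete
# payoff distributions (dicts mapping payoff -> probability).

def dominates(dist_a, dist_b):
    """True if dist_a first-order stochastically dominates dist_b:
    P(A >= t) >= P(B >= t) for all thresholds t, strictly for some t."""
    thresholds = sorted(set(dist_a) | set(dist_b))
    tail = lambda d, t: sum(p for x, p in d.items() if x >= t)
    at_least = all(tail(dist_a, t) >= tail(dist_b, t) for t in thresholds)
    strictly = any(tail(dist_a, t) > tail(dist_b, t) for t in thresholds)
    return at_least and strictly

# Hypothetical distributions over lives saved:
option_a = {0: 0.5, 100: 0.5}
option_b = {0: 0.6, 100: 0.4}
print(dominates(option_a, option_b))  # prints True
```

Note that this comparison never computes an expected value, which is why it can still rank options when expected values are undefined, as in the Pasadena-like Game 3 above.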
I like your comparisons with other historical cases when people thought they had inevitable theories about society, and it is a thing I think about.
I do have a pet peeve though about the following claim.
Let’s consider a very short argument for strong longtermism (and a tractable way to influence the distant future by reducing x-risk):
- There is a lot of future ahead of us.
- The universe is large.
- Humans are fragile/the universe is harsh (most planets are not inhabitable for us (yet); we don’t survive in most of space by default).
⇒ Therefore the expected outcomes of your actions on the near future become rounding errors compared to the expected outcomes of making sure humanity survives.
All three of these points (while more might be necessary for a convincing case for longtermism) are very much informed by physical theories, which in turn have been informed by data about the world we live in (observing through a telescope, going to the moon)!
To illustrate:
- Had I been born in a universe where physicists predicted with a high degree of certainty (through well-established theories, like thermodynamics in our world) that the universe (all of it already inhabited) faced an inevitable heat death 1000 years from now, I would think the arguments for longtermism were weak, since they would not apply to the universe we live in.
I am not convinced by your arguments around epistemology, and I don’t understand your fascination with Popper. Popper’s philosophy seems more like an informal way to make Bayesian updates, and you did not provide sufficient evidence to convince me otherwise. While I agree that rigid Bayesianism has flaws, my current best guess involves more subjectivism, not less.
Thanks for collating all of this here in one place. I should have read the later posts before I replied to the first one. Thank you too for your bold challenge. I feel like Kant waking from his ‘dogmatic slumber’. A few thoughts:
Humanity is an ‘interactive kind’ (to use Hacking’s term). Thinking about humanity can change humanity, and the human future.
Therefore, Ord’s ‘Long Reflection’ could lead to there being no future humans at all (if that were the course the Long Reflection concluded on).
This simple example shows that we cannot quantify over future humans, quadrillions or otherwise, or make long term assumptions about their value.
You’re right about trends, and in this context the outcomes are tied up with ‘human kinds’, as humans can respond to predictions and thereby invalidate them. Makes me think of Godfrey-Smith’s observation that natural selection has no inertia: change the selective environment and the observable ‘trend’ towards some adaptation vanishes.
Cluelessness seems to be some version of the Socratic Paradox (I know only that I know nothing).
RCTs don’t just falsify hypotheses, but also provide evidence for causal inference (in spite of hypotheses!)
Hmm, I think 3 does not follow from 2.
If I think there’s a 10% chance I will quit my job upon further reflection, and I do the reflection, and then quit my job, this does not mean that before the reflection I cannot make any quantified statements about the expected earnings from my job.
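As a worked version of this point (all figures are hypothetical, chosen only for illustration): the pre-reflection expected value is just the probability-weighted average over what I might do after reflecting, and it stays a well-defined number even though reflection might change my course.

```python
# Toy numbers, all hypothetical: expected earnings before reflection,
# given a 10% chance that reflection leads me to quit.
p_quit = 0.10
earnings_if_stay = 80_000  # assumed annual earnings if I keep the job
earnings_if_quit = 20_000  # assumed partial-year earnings if I quit

expected_earnings = (1 - p_quit) * earnings_if_stay + p_quit * earnings_if_quit
# (1 - 0.1) * 80,000 + 0.1 * 20,000 = 74,000
```

The possibility that reflection overturns the plan just shows up as one branch of the average; it doesn’t make the quantity unquantifiable.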