Notes on: A Sequence Against Strong Longtermism
Summary for myself. Note: Pretty stream-of-thought.
Proving too much
The set of all possible futures is infinite which somehow breaks some important assumptions longtermists are apparently making.
Somehow this fails to actually bother me
...the methodological error of equating made up numbers with real data
This seems like a cheap/unjustified shot. In the world where we can calculate the expected values, it would seem fine to compare (wide, uncertain) speculative interventions with hardcore GiveWell data (note that the next step would probably be to get more information, not to stop donating to GiveWell charities).
Sometimes, expected utility is undefined (Pasadena game)
The Pasadena game also fails to bother me, because the series hasn’t (yet) shown that longtermist bets are “Pasadena-like”.
(Also, note that you can use stochastic dominance to resolve many expected value paradoxes, e.g., to decide between two universes with infinite expected value, or with undefined expected value.)
...mention of E.T. Jaynes
Yeah, I’m also a fan of E.T. Jaynes, and I think that this is a cheap shot, not an argument.
Subject, Object, Instrument
This section seems confused/bad. In particular, there is a switch from “credences are subjective” to “we should somehow change our credences if this is useful”. No: if one’s best guess is that “the future is vast in size”, then observing that one could change one’s opinions to better attain one’s goals doesn’t make it stop being one’s best guess.
Overall: The core of this section seems to be that expected values are sometimes undefined. I agree, but this doesn’t deter me from trying to do the most good by seeking more speculative/longtermist interventions. I can use stochastic dominance when expected utility fails me.
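The stochastic-dominance move can be made concrete. Below is a toy sketch (my own construction, not from the sequence): an empirical check of first-order stochastic dominance between two samples of outcomes, where distribution A dominates B if, for every threshold, A gives at least as high a probability of exceeding it.

```python
import numpy as np

def first_order_dominates(a, b):
    """Return True if the empirical distribution of `a` first-order
    stochastically dominates that of `b`: at every threshold t,
    P(a > t) >= P(b > t), with strict inequality somewhere."""
    thresholds = np.union1d(a, b)
    surv_a = np.array([(a > t).mean() for t in thresholds])
    surv_b = np.array([(b > t).mean() for t in thresholds])
    return bool((surv_a >= surv_b).all() and (surv_a > surv_b).any())

rng = np.random.default_rng(0)
# Hypothetical outcome samples: a speculative intervention whose payoffs
# are, by construction, an upward shift of a "safe" intervention's payoffs.
safe = rng.lognormal(mean=0.0, sigma=1.0, size=2_000)
speculative = safe + 0.5  # shifted up, so it dominates

print(first_order_dominates(speculative, safe))  # True
print(first_order_dominates(safe, speculative))  # False
```

First-order dominance is a strong condition and will rarely hold between real intervention estimates; the point is only that the comparison stays well-defined even when expected values are not.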
The post also takes issue with the following paragraph from The Case For Strong Longtermism:

Then, using our figure of one quadrillion lives, the expected good done by Shivani contributing $10,000 to [preventing world domination by a repressive global political regime] would, by the lights of utilitarian axiology, be 100 lives. In contrast, funding for the Against Malaria Foundation, often regarded as the most cost-effective intervention in the area of short-term global health improvements, on average saves one life per $3500. (Nuño: italics and bold from the OP, not from the original article)

I agree that the paragraph just intuitively looks pretty bad, so I looked at the context:

Now, the argument we are making is ultimately a quantitative one: that the expected impact one can have on the long-run future is greater than the expected impact one can have on the short run. It’s not true, in general, that options that involve low probabilities of high stakes systematically lead to greater expected values than options that involve high probabilities of modest payoffs: everything depends on the numbers. (For instance, not all insurance contracts are worth buying.) So merely pointing out that one might be able to influence the long run, or that one can do so to a nonzero extent (in expectation), isn’t enough for our argument. But, we will claim, any reasonable set of credences would allow that for at least one of these pathways, the expected impact is greater for the long-run.

Suppose, for instance, Shivani thinks there’s a 1% probability of a transition to a world government in the next century, and that $1 billion of well-targeted grants — aimed (say) at decreasing the chance of great power war, and improving the state of knowledge on optimal institutional design — would increase the well-being in an average future life, under the world government, by 0.1%, with a 0.1% chance of that effect lasting until the end of civilisation, and that the impact of grants in this area is approximately linear with respect to the amount of spending. Then, using our figure of one quadrillion lives to come, the expected good done by Shivani contributing $10,000 to this goal would, by the lights of a utilitarian axiology, be 100 lives. In contrast, funding for Against Malaria Foundation, often regarded as the most cost-effective intervention in the area of short-term global health improvements, on average saves one life per $3500.

Yeah, this is in the context of a thought experiment. I’d still do this with distributions rather than with point estimates, but ok.
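For what it’s worth, the arithmetic in the quoted thought experiment checks out; the sketch below reproduces it with the paper’s point estimates, and then redoes it with distributions rather than point estimates, as suggested above (the lognormal spreads are my own made-up assumptions):

```python
import numpy as np

# Point estimates from the thought experiment in
# "The Case for Strong Longtermism":
p_world_govt   = 0.01          # 1% chance of a world-government transition
wellbeing_gain = 0.001         # 0.1% gain per average future life
p_persists     = 0.001         # 0.1% chance the effect lasts
future_lives   = 1e15          # one quadrillion lives to come
donation_share = 10_000 / 1e9  # $10k share of a $1bn grant programme

expected_lives = (p_world_govt * wellbeing_gain * p_persists
                  * future_lives * donation_share)
print(expected_lives)  # ≈ 100 "lives' worth" of expected well-being
print(10_000 / 3_500)  # ≈ 2.86 lives via AMF at $3,500 per life

# The same estimate with (made-up) lognormal uncertainty on each input,
# which yields a distribution over impact rather than a point estimate:
rng = np.random.default_rng(1)
n = 100_000
samples = (rng.lognormal(np.log(p_world_govt), 0.5, n)
           * rng.lognormal(np.log(wellbeing_gain), 1.0, n)
           * rng.lognormal(np.log(p_persists), 1.0, n)
           * future_lives * donation_share)
print(np.median(samples), samples.mean())  # median stays near 100; mean is higher
```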
The Credence Assumption
Ok, so the OP wants to argue that expected value theory breaks ⇒ the tool is not useful ⇒ we should abandon credences ⇒ longtermism somehow fails.
But I think that “My best guess is that I can do more good with more speculative interventions” is fairly robust to that line of criticism; it doesn’t stop being my best guess just because credences are subjective.
E.g., if my best guess is that ALLFED does “more good” (e.g., more lives saved in expectation) than GiveWell charities, pointing out that actually the expected value is undefined (maybe the future contains both infinite amounts of flourishing and suffering) doesn’t necessarily change my conclusion if I still think that donating to ALLFED is stochastically dominant.
Cox’s theorem requires that probabilities be real numbers
The OP doesn’t buy that. Sure, no piano is going to drop on his head for refusing numerical credences, but he might, e.g., make worse decisions on account of being overconfident, because he has not been keeping track of his (numerical) predictions and thus suffers from more hindsight bias than someone who kept track.
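Keeping track of numerical predictions is cheap to operationalize. Here is a minimal sketch (my own toy example, not from the sequence) of scoring a track record with the Brier score, which is the kind of feedback one forgoes by not writing numbers down:

```python
# Minimal sketch of tracking numerical predictions: store (probability,
# outcome) pairs and score them with the Brier score. Lower is better;
# always guessing 0.5 scores exactly 0.25.
def brier_score(forecasts):
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical track records: probabilities assigned vs. what happened
# (1 = the event occurred, 0 = it did not).
overconfident = [(0.95, 1), (0.95, 0), (0.95, 0), (0.95, 1)]
calibrated    = [(0.55, 1), (0.55, 0), (0.55, 0), (0.55, 1)]

print(brier_score(overconfident))  # 0.4525 — badly overconfident
print(brier_score(calibrated))     # 0.2525 — close to honest uncertainty
```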
But what alternative do we have?
One can use, e.g., upper and lower bounds on probabilities instead of real-valued numbers: Sure, I do that. Longtermism still doesn’t break.
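Bounds on probabilities also compose straightforwardly. A toy sketch (my own, with made-up numbers, assuming independent steps and nonnegative payoffs) of propagating lower and upper probability bounds through an expected-value estimate:

```python
# Toy interval arithmetic for expected values under probability bounds:
# represent each uncertain factor as a (low, high) interval and propagate
# the bounds through a product of nonnegative factors.
def mul_intervals(*intervals):
    lo, hi = 1.0, 1.0
    for a, b in intervals:
        lo *= a
        hi *= b
    return (lo, hi)

# Hypothetical speculative intervention: bounds on each step.
p_success   = (0.001, 0.05)  # chance the intervention works at all
p_matters   = (0.01, 0.2)    # chance it matters conditional on working
lives_saved = (1e6, 1e9)     # lives at stake if both hold

lo, hi = mul_intervals(p_success, p_matters, lives_saved)
print(lo, hi)  # bounds span roughly 10 to 10,000,000 expected lives
```

Even with very wide bounds like these, the comparison against a point-estimate benchmark stays well-defined, which is why interval-valued credences don’t by themselves break the longtermist comparison.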
Some thought experiment which looks like The Whispering Earring.
Instead of relying on explicit expected value calculations, we should rely on evolutionary approaches
The Poverty of Longtermism
“In 1957, Karl Popper proved it is impossible to predict the future of humanity, but scholars at the Future of Humanity Institute insist on trying anyway”
Come on
Yeah, this is just fairly bad
Lesson of the 20th Century
This is going to be an ad Hitlerum, isn’t it?
No, an ad failures-of-communism.
At this point, I stopped reading.