Overall though I think that longtermism is going to end up with practical advice which looks quite a lot like “it is the duty of each generation to do what it can to make the world a little bit better for its descendants.”
Goodness, I really hope so. As it stands, Greaves and MacAskill are telling people that they can “simply ignore all the effects [of their actions] contained in the first 100 (or even 1000) years”, which seems rather far from the practical advice both you and I hope they arrive at.
Anyway, I appreciate all your thoughtful feedback—it seems like we agree much more than we disagree, so I’m going to leave it here :)
Hey Owen—thanks for your feedback! Just to respond to a few points -
>Your argument against expected value is a direct rebuttal of the argument for, but in my eyes this is one of your weaker criticisms.
Would you be able to elaborate a bit on where the weaknesses are? I see in the thread you agree the argument is correct (and from googling your name I see you have a pure math background! Glad it passes your sniff test :) ). If we agree EVs are undefined over possible futures, then in the Shivani example, this is like comparing 3 lives to NaN. Does this not refute at least one of the two assumptions longtermism needs to ‘get off the ground’?
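To make the NaN analogy concrete, here's a minimal Python sketch. The numbers are purely illustrative (not from the Greaves–MacAskill paper): the point is just that once one side of a comparison is undefined, no ranking is possible.

```python
import math

# A near-term intervention with a well-defined expected value,
# versus a far-future one whose expectation is undefined (NaN as a stand-in).
near_term_ev = 3.0        # e.g. lives saved, estimated from data (illustrative)
far_future_ev = math.nan  # stand-in for an undefined expectation

# Under IEEE 754 semantics, every ordering comparison with NaN is False,
# so the two options simply cannot be ranked against each other.
print(far_future_ev > near_term_ev)    # False
print(far_future_ev < near_term_ev)    # False
print(far_future_ev == far_future_ev)  # False: NaN is not even equal to itself
```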
> Overall I feel like a lot of your critique is not engaging directly with the case for strong longtermism; rather you’re pointing out apparently unpalatable implications.
Just to comment here—yup I intentionally didn’t address the philosophical arguments in favor of longtermism, just because I felt that criticizing the incorrect use of expected values was a “deeper” critique and one which I hadn’t seen made on the forum before. What would the argument for strong longtermism look like without the expected value calculus? It’s my impression that EVs are central to the claim that we can and should concern ourselves with the future 1 billion years from now.
Also, my hope was that this would highlight a methodological error (equating made-up numbers with real data) that could be rectified, whether or not you buy my other arguments about longtermism. I’d be a lot more sympathetic to longtermism in general if its proponents were careful to adhere to the methodological rule of only ever comparing subjective probabilities with other subjective probabilities (and not subjective probabilities with objective ones derived from data).
> I would welcome more work on understanding the limits of this kind of reasoning, but I’m wary of throwing the baby out with the bathwater if we say we must throw our hands up rather than reason at all about things affecting the future.
Yup, totally. If you'll permit me a shameless self-plug, I wrote about an alternative way to reason here.
> As a minor point, I don’t think that discounting the future really saves you from undefined expectations, as you’re implying.
Oops, sorry, no, I wasn't implying that; they're two orthogonal arguments.
>I do think that if all people across time were united in working for the good
People are united across time working for the good! Each generation does what it can to make the world a little bit better for its descendants, and in this way we are all united.