Thanks for replying Ben, good stuff! A few thoughts.
I don’t think so. There’s no data on the problem, so there’s nothing to adjudicate between our disagreements. We can honestly try this if you want. What’s your credence?
I’ll concede that point!
Now, even if we could converge on some number, what’s the reason for thinking that number captures any aspect of reality?
I think a better response than the one I originally gave would be to point out that the case for strong longtermism relies on establishing a sensible lower(ish) bound for total future population. Greaves and MacAskill want to convince you that (say) at least a quadrillion lives could plausibly lie in the future. I’m curious if you have an issue with that weaker claim?
I think your point about space exploration is absolutely right, and more than a nitpick. I would say two things: one is that I can imagine a world in which we could be confident that we would never colonise the stars (e.g. if the earth were more massive and we had 5 decades before the sun scorched us or something). Second, voicing support for the ‘anything permitted by physics can become practically possible’ camp indirectly supports an expectation of a large number of future lives, no?
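For what it’s worth, here is the sort of back-of-envelope arithmetic I have in mind for that weaker claim. Every input below (people per century, how long we survive) is an illustrative assumption of my own, not a figure taken from Greaves and MacAskill:

```python
# Toy lower-bound sketch for total future population.
# All numbers are illustrative assumptions, not figures from the paper.
people_per_century = 10e9        # assume ~10 billion people alive per century
years_remaining = 10_000_000     # assume humanity survives another 10 million years
centuries_remaining = years_remaining / 100

future_lives = people_per_century * centuries_remaining
print(f"{future_lives:.0e} future lives")  # 1e+15, i.e. about a quadrillion
```

If you think either assumption is wildly generous, that’s exactly the kind of disagreement about the lower bound I’d want to hear.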
But then we need to be clear that these estimates aren’t saying “anything precise about the actual world.” They should be treated completely differently than estimates based on actual data. But they’re not. When Greaves and MacAskill compare how many lives are saved by donating to AI safety versus the AMF, they compare these numbers as if they were equally as reliable and equally as capable of capturing something about reality.
Hmm — by my lights Greaves and MacAskill are fairly clear about the differences between the two kinds of estimate. If your reply is that doing any kind of (toy) EV calculation with both estimates just implies that they’re somehow “equally as capable of capturing something about reality”, then it feels like you’re begging the question.
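Just to be concrete about the shape of the toy calculation in question, here is a minimal sketch with made-up placeholder numbers of my own (not the estimates from the paper). The arithmetic is trivial; the live question is whether the tiny probability on the long-term side deserves to be multiplied alongside the well-evidenced one at all:

```python
# Toy expected-value comparison. Every number is an illustrative placeholder,
# not an estimate from Greaves and MacAskill.

# Near-term intervention: well-evidenced, modest payoff.
p_works = 0.95              # assumed probability the intervention works
lives_if_works = 1_000      # assumed lives saved (per some fixed budget) if it does

# Long-term intervention: speculative probability, enormous payoff.
p_xrisk_averted = 1e-10     # assumed reduction in extinction risk (the contested number)
future_lives = 1e15         # assumed future lives at stake (see the sketch above)

ev_near = p_works * lives_if_works
ev_long = p_xrisk_averted * future_lives

print(f"EV near-term: {ev_near:,.0f} lives in expectation")   # 950
print(f"EV long-term: {ev_long:,.0f} lives in expectation")   # 100,000
```

The disagreement, as I understand it, is over whether the 1e-10 (a credence with no frequency data behind it) belongs in the same calculation as the 0.95, not over the multiplication.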
There should be no daylight. Whatever daylight there is would have to be a result of purely subjective beliefs, and we shouldn’t lend this any credibility. It doesn’t belong alongside an actual statistical estimate.
I don’t understand what you mean here, which is partly my fault for being unclear in my original comment. Here’s what I had in mind: suppose you’ve run a small-scale experiment and collected your data. You can generate a bunch of statistical scores indicating e.g. the effect size, plus the chance of getting the results you got assuming the null hypothesis was true (p-value). Crucially (and unsurprisingly) none of those scores directly give you the likelihood of an effect (or the ‘true’ anything else). If you have reason to expect a bias in the direction of positive results (e.g. publication bias), then your guess about how likely it is that you’ve picked up on a real effect may in fact be very different from any statistic, because it makes use of information from beyond those statistics (i.e. your prior). For instance, in certain social psych journals, you might pick a paper at random, see that p < 0.05, and nonetheless be fairly confident that you’re looking at a false positive. So subjective credences (incorporating info from beyond the raw stats) do seem useful here. My guess is that I’m misunderstanding you, yell at me if I am.
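To put a rough number on that false-positive worry, here’s the Bayes calculation I have in mind. The base rate of true hypotheses, the statistical power, and alpha are all assumptions I’m plugging in for illustration:

```python
# Rough Bayesian sketch of the 'p < 0.05 but probably a false positive' point.
# Assumed inputs: a literature where only 1 in 20 tested hypotheses are true,
# studies run at 30% power, and the usual 0.05 significance threshold.
prior_true = 0.05   # assumed base rate of real effects among tested hypotheses
power = 0.30        # assumed P(p < 0.05 | real effect)
alpha = 0.05        # P(p < 0.05 | no real effect)

# Bayes' rule: P(real effect | p < 0.05)
p_significant = power * prior_true + alpha * (1 - prior_true)
posterior_true = (power * prior_true) / p_significant

print(f"P(real effect | p < 0.05) = {posterior_true:.2f}")  # ~0.24
```

Under those assumptions most ‘significant’ results are false positives, which is exactly the sort of judgement the raw statistics can’t hand you on their own.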
Subjective credences aren’t applicable to short term situations. (Again, when I say “subjective” there’s an implied “and based on no data”).
By ‘subjective credence’ I just mean degree of belief. It feels important that everyone’s on the same terminological page here, and I’m not sure any card-carrying Bayesians imply “based on no data” by “subjective”! Can you point me towards someone who has argued that subjective credences in this broader sense aren’t applicable even to straightforward ‘short-term’ situations?
Fair point about strong longtermism plausibly recommending slowing certain kinds of progress. I’m also not convinced — David Deutsch was an influence here (as I’m guessing he was for you). But the ‘wisdom needs to keep pace with technological capacity’ thing still rings true to me.
I take the implication here to be that we should stop developing technology and wait for our wisdom to catch up.
There are two ways to close the gap, of course, and isn’t the obvious conclusion just to speed up the ‘wisdom’ side?
Which ties in to your last point. Correct me if I’m wrong, but I’m taking you as saying: to the extent that strong longtermism implies significant changes in global priorities, those changes are really worrying: the logic can justify almost any present sacrifices, there’s no closed feedback loop or error-correction mechanism, and it may imply a slowing down of technological progress in some cases. To the extent that strong longtermism doesn’t imply significant changes in global priorities, it hardly adds any new or compelling reasons for existing priorities. So it’s either dangerous or useless or somewhere between the two.
I won’t stick up for strong longtermism, because I’m unsure about it, but I will stick up for semi-skimmed longtermism. My tentative response is that there are some recommendations that (i) are more-or-less uniquely recommended by this kind of longtermism, and (ii) are not dangerous or silly in the ways you suggest. One example is establishing kinds of political representation for future generations. Or funding international bodies like the BWC, spreading long-term thinking through journalism, getting fair legislative frameworks in place for when transformative / general AI arrives, or indeed for space governance.
Anyway, a crossover podcast on this would be amazing! I’ll send you a message.