Thanks so much for writing this Ben! I think it’s great that strong longtermism is being properly scrutinised, and I loved your recent podcast episode on this (as well as Vaden’s piece).
I don’t have a view of my own yet; but I do have some questions about a few of your points, and I think I can guess at how a proponent of strong longtermism might respond to others.
For clarity, I’m understanding part of your argument as saying something like the following. First, “[E]xpected value calculations, Bayes theorem, and mathematical models” are tools — often useful, often totally inappropriate or inapplicable. Second, ‘Bayesian epistemology’ (BE) makes inviolable laws out of these tools, running into all kinds of paradoxes and failing to represent how scientific knowledge advances. This makes BE silly at best and downright ‘refuted’ at worst. Third, the case for strong longtermism relies essentially on BE, which is bad news for strong longtermism.
I can imagine that a fan of BE would object that Bayesianism in particular is just not a tool which can be swapped out for something else when it’s convenient. This feels like an important but tangential argument — this LW post might be relevant. Also, briefly, I’m not 100% convinced by Popper’s argument against Bayesianism which you’re indirectly referencing, and I haven’t read the paper Vaden wrote but it looks interesting. In any case: declaring that BE “has been refuted” seems unfairly rash.
You suggest at a few points that longtermists are just pulling numbers out of nowhere in order to take an expectation over, for instance, the number of people who will live in the long-run future. In other words, I’m reading you as saying that these numbers are totally arbitrary. You also mention that they’re problematically unfalsifiable.
On the first point, it feels more accurate to say that these numbers are highly uncertain rather than totally arbitrary. I can imagine someone saying “I wouldn’t be surprised if my estimate were off by several orders of magnitude”; but not “I have literally no reason to believe that this estimate is any better than a wildly different one”. That’s because it is possible to begin reasoning about these numbers. For instance, I was reminded of Nick Beckstead’s preliminary review of the feasibility of space colonisation. If it turned out that space colonisation was practically impossible, the ceiling would fall down on estimates for the size of humanity’s future. So there’s some information to go on — just very little.
You make the same point in the context of estimating existential risks:
My credence could be that working on AI safety will reduce existential risk by 5% and yours could be 10^-19%, and there’s no way to discriminate between them.
Really? If you’re a rationalist (in the broad Popperian sense and the internet-cult sense), and we share common knowledge of each other’s beliefs, then shouldn’t we be able to argue towards closer agreement? Not if our estimates were totally arbitrary — but clearly they’re not. Again, they’re just especially uncertain.
[I]t abolishes the means by which one can disagree with its conclusion, because it can always simply use bigger numbers.
You can use bigger numbers in the sense that you can type extra zeroes on your keyboard, but you can’t use bigger numbers if you care about making sure your numbers fall reasonably in line with the available facts, right? I could try turning “donating to Fin’s retirement fund” into an EA cause area by just lying about its impact, but there are norms of honesty and criticism (and common sense) which would prevent the plot succeeding. Because I don’t think you’re suggesting that proponents of strong longtermism are being dishonest in this way, I’m confused about what you are suggesting.
Plus, as James Aung mentioned, I don’t think it works to criticise subjective probabilities (and estimates derived from them) as too precise. The response is presumably: “sure, this guess is hugely uncertain. But better to give some number rather than none, and any number I pick is going to seem too precise to you. Crucially, I’m trying to represent something about my own beliefs — not that I know something precise about the actual world.”
On the falsifiability point, estimates about the size of humanity’s future clearly are falsifiable — it’s just going to take a long time to find out. But plenty of sensible scientific claims are like this — e.g. predictions about the future of stars including our Sun. So the criticism can’t be that predictions about the size of humanity’s future are somehow unscientific because not immediately falsifiable.
I think this paragraph is key:
Thus, subjective credences tend to be compared side-by-side with statistics derived from actual data, and treated as if they were equivalent. But prophecies about when AGI will take over the world — even when cloaked in advanced mathematics — are of an entirely different nature than, say, impact evaluations from randomized controlled trials. They should not be treated as equivalent.
My reaction is something like this: even if other interpretations of probability are available, it seems at least harmless to form subjective credences about the effectiveness of, say, global health interventions backed by a bunch of RCTs. Where there’s lots of empirical evidence, there should be little daylight between your subjective credences and the probabilities that fall straight out of the ‘actual data’. In fact, using subjective credences begins to look positively useful when you venture into otherwise comparable but more speculative interventions. That’s because whether you want to fund such an intervention is going to depend on your best guess about its likely effects and what you might learn from them, and that guess should be sensitive to all kinds of information — a job Bayesian methods were built for.

However, if you agree that subjective credences are applicable to innocuous ‘short-term’ situations with plenty of ‘data’, then you can imagine gradually pushing the time horizon (or some other source of uncertainty) all the way to questions about the very long-run future. At this extreme, you’ve said that there’s something qualitatively wrong with subjective credences about such murky questions. But I want to say: given that you can join up the two kinds of subject matter by a series of intermediate questions, that there wasn’t originally anything wrong with using credences, and that there’s no obvious qualitative step-change along the way, why think that the two ends of the scale end up being “of an entirely different nature”? I think this applies to Vaden’s point that the maths of taking an expectation over the long-run future is somehow literally unworkable, because you can’t have a measure over infinite possibilities (or something). Does that mean we can’t take an expectation over what happens next year? The next decade?
I hope that makes sense! Happy to say more.
My last worry is that you’re painting an unrealistically grim picture of what strong longtermism practically entails. For starters, you say “[l]ongtermism asks us to ignore problems now”, and Hilary and Will say we can “often” ignore short-term effects. Two points here: first, in situations where we can have a large effect on the present / immediate future without risking something comparably bad in the future, it’s presumably still just as good to do that thing. Second, it seems reasonable to expect considerable overlap between solving present problems and making the long-run future go best, for obvious reasons. For example, investing in renewables or clean meat R&D just seems robustly good from short-term and long-term perspectives.
I’m interested in the comparison to totalitarian regimes, and it reminded me of something Isaiah Berlin wrote:
[T]o make mankind just and happy and creative and harmonious forever—what could be too high a price to pay for that? To make such an omelette, there is surely no limit to the number of eggs that should be broken[.]
However, my guess is that there are too few similarities for the comparison to be instructive. I would want to say that the totalitarian regimes of the past failed so horrendously not because they used expected utility theory or Bayesian epistemology correctly but inappropriately, but because they were just wrong — wrong that revolutionary violence and totalitarianism make the world remotely better in the short or long term. Also, note that a vein of longtermist thinking discusses reducing the likelihood of a great power conflict, improving institutional decision-making, and spreading good (viz. liberal) political norms in general — in other words, how to secure an open society for our descendants.
Longtermism asks us to ignore problems now, and focus on what we believe will be the biggest problems many generations from now. Abiding by this logic would result in the stagnation of knowledge creation and progress.
Isn’t it the case that strong longtermism makes knowledge creation and accelerating progress seem more valuable, if anything? And would the world really generate less knowledge, or progress at a slower rate, if the EA community shifted priorities in a longtermist direction?
Finally, a minor point: my impression is that ‘longtermism’ is generally taken to mean something a little less controversial than ‘strong longtermism’. I appreciate you make the distinction early on, but using the ‘longtermism’ shorthand seems borderline misleading when some of your arguments only apply to a specific version.
For what it’s worth, I’m most convinced by the practical problems with strong longtermism. I especially liked your point about longtermism being less permeable to error correction, and generally I’m curious to know more about reasons for thinking that influencing the long-run future is really tractable. Thanks again for starting this conversation along with Vaden!
Really? If you’re a rationalist (in the broad Popperian sense and the internet-cult sense), and we share common knowledge of each other’s beliefs, then shouldn’t we be able to argue towards closer agreement? Not if our estimates were totally arbitrary — but clearly they’re not. Again, they’re just especially uncertain.
I think there is an important point here. One of the assumptions in Aumann’s theorem is that both people have the same prior, and I think this is rarely true in the real world.
I roughly think of Bayesian reasoning as starting with a prior, and then adjusting the prior based on observed evidence. If there’s a ton of evidence, and your prior isn’t dumb, the prior doesn’t really matter. But the more speculative the problem, and the less available evidence, the more the prior starts to matter. And your prior bakes in a lot of your assumptions about the world, and I think it’s tricky to resolve disagreements about what your prior should be, at least in ways that approach being objective.
I think you can make progress on this. E.g., ‘how likely is it that AI could get way better, really fast?’ is a difficult question to answer, and could be baked into a prior either way. And things like AI Impacts’ study of discontinuous progress in other technologies can be helpful for getting closer to consensus. But I think choosing a good prior is still a really hard and important problem, and near impossible to be objective about.
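To make the ‘priors matter when evidence is scarce’ point concrete, here is a toy sketch (entirely made-up numbers, just for illustration): two people start from opposite Beta priors about some probability and update on the same data. With a handful of observations their posteriors stay far apart; with lots of observations they converge.

```python
# Toy illustration: how far apart two Bayesians end up depends on how much
# data they share, because the prior dominates when evidence is scarce.

def posterior_mean(prior_a, prior_b, successes, trials):
    """Posterior mean of a Beta(prior_a, prior_b) prior after binomial data."""
    return (prior_a + successes) / (prior_a + prior_b + trials)

optimist = (8, 2)  # prior centred around 0.8
sceptic = (2, 8)   # prior centred around 0.2

for successes, trials in [(1, 2), (5, 10), (500, 1000)]:
    p_opt = posterior_mean(*optimist, successes, trials)
    p_sce = posterior_mean(*sceptic, successes, trials)
    print(f"{trials:4d} trials: optimist = {p_opt:.2f}, sceptic = {p_sce:.2f}")

# 2 trials:    0.75 vs 0.25 -- the priors dominate
# 1000 trials: 0.50 vs 0.50 -- the data dominates
```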
Hey Fin! Nice—lots here. I’ll respond to what I can. If I miss anything crucial just yell at me :) (BTW, also enjoying your podcast. Maybe we should have a podcast battle at some point … you can defend longtermism’s honour).
In any case: declaring that BE “has been refuted” seems unfairly rash.
Yep, this is fair. I’m imagining myself in the position of some random stranger outside of a fancy EA-gala, and trying to get people’s attention. So yes—the language might be a little strong (although I do really think Bayesianism doesn’t stand up to scrutiny if you drill down on it).
On the first point, it feels more accurate to say that these numbers are highly uncertain rather than totally arbitrary.
Sure, guessing that there will be between 1 billion and 1000 quadrillion people in the future is probably a better estimate than 1000 people. But it still leaves open a discomfortingly huge range. Greaves and MacAskill could easily have used half a quadrillion people, or 10 quadrillion people. Instead of trying to wrestle with this uncertainty, which is fruitless, we should just acknowledge that we can’t know and stop trying.
If it turned out that space colonisation was practically impossible, the ceiling would fall down on estimates for the size of humanity’s future. So there’s some information to go on — just very little.
Bit of a nitpick here, but space colonization isn’t prohibited by the laws of physics, so it can only be “practically impossible” based on our current knowledge. It’s just a problem to be solved. So this particular example couldn’t bring down the curtains on our expected value calculations.
Really? If you’re a rationalist (in the broad Popperian sense and the internet-cult sense), and we share common knowledge of each other’s beliefs, then shouldn’t we be able to argue towards closer agreement?
I don’t think so. There’s no data on the problem, so there’s nothing to adjudicate between our disagreements. We can honestly try this if you want. What’s your credence?
Now, even if we could converge on some number, what’s the reason for thinking that number captures any aspect of reality? Most academics were sympathetic to communism before it was tried; most physicists thought Einstein was wrong.
You can use bigger numbers in the sense that you can type extra zeroes on your keyboard, but you can’t use bigger numbers if you care about making sure your numbers fall reasonably in line with the available facts, right?
What are the available facts when it comes to the size of the future? There’s a reason these estimates are wildly different across papers: From 10^15 here, to 10^68 (or something) from Bostrom, and everything in between. I’m gonna add mine in: 10^124 + 3.
The response is presumably: “sure, this guess is hugely uncertain. But better to give some number rather than none, and any number I pick is going to seem too precise to you. Crucially, I’m trying to represent something about my own beliefs — not that I know something precise about the actual world.”
Agree that this is probably the response. But then we need to be clear that these estimates aren’t saying “anything precise about the actual world.” They should be treated completely differently than estimates based on actual data. But they’re not. When Greaves and MacAskill compare how many lives are saved by donating to AI safety versus the AMF, they compare these numbers as if they were equally as reliable and equally as capable of capturing something about reality.
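To illustrate what I mean, here is a toy sketch with entirely made-up placeholder numbers (nothing here is taken from Greaves and MacAskill): the outcome of the comparison is driven almost completely by whichever speculative future-population figure you plug in, not by the well-evidenced short-term number.

```python
# Toy sketch; every number below is a hypothetical placeholder.

lives_saved_short_term = 50         # per $1M -- the sort of figure an RCT-backed estimate might give
risk_reduction_per_million = 1e-14  # hypothetical subjective credence for the speculative intervention

for future_population in [1e15, 1e30, 1e45, 1e68]:
    ev_long_term = risk_reduction_per_million * future_population
    winner = "long-term" if ev_long_term > lives_saved_short_term else "short-term"
    print(f"assumed future population {future_population:.0e}: "
          f"long-term EV = {ev_long_term:.0e} lives -> {winner} wins")

# The ranking (and its margin) is set almost entirely by the one number
# with no data behind it.
```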
Where there’s lots of empirical evidence, there should be little daylight between your subjective credences and the probabilities that fall straight out of the ‘actual data’.
There should be no daylight. Whatever daylight there is would have to be a result of purely subjective beliefs, and we shouldn’t lend this any credibility. It doesn’t belong alongside an actual statistical estimate.
However, if you agree that subjective credences are applicable to innocuous ‘short-term’ situations with plenty of ‘data’, then you can imagine gradually pushing the time horizon (or some other source of uncertainty) all the way to questions about the very long-run future.
I think the above also answers this? Subjective credences aren’t applicable to short term situations. (Again, when I say “subjective” there’s an implied “and based on no data”).
Isn’t it the case that strong longtermism makes knowledge creation and accelerating progress seem more valuable, if anything? And would the world really generate less knowledge, or progress at a slower rate, if the EA community shifted priorities in a longtermist direction?
I’ve seen arguments to the contrary. Here for instance:
I spoke to one EA who made an argument against slowing down AGI development that I think is basically indefensible: that doing so would slow the development of machine learning-based technology that is likely to lead to massive benefits in the short/medium term. But by the own arguments of the AI-focused EAs, the far future effects of AGI dominate all other considerations by orders of magnitude. If that’s the case, then getting it right should be the absolute top priority, and virtually everyone agrees (I think) that the sooner AGI is developed, the higher the likelihood that we were ill prepared and that something will go horribly wrong. So, it seems clear that if we can take steps to effectively slow down AGI development we should.
There’s also the quote by Toby Ord (I think?) that goes something like: “We’ve grown technologically mature without acquiring the commensurate wisdom.” I take the implication here to be that we should stop developing technology and wait for our wisdom to catch up. But this misses how wisdom is generated in the first place: by solving problems.
When you believe the fate of an untold number of future people is on the line, then you can justify almost anything in the present. This is what I find so disturbing about longtermism. I find many of the responses to my critique say things like: “Look, longtermism doesn’t mean we should throw out concern for the present, or be focused on problem-solving and knowledge creation, or continue improving our ethics”. But you can get those things without appealing to longtermism. What does longtermism buy you that other philosophies don’t, except for headaches when trying to deal with insanely big numbers? I see a lot of downsides, and no benefits that aren’t there in other philosophies. (Okay, harsh words to end, I know—but if anyone is still reading at this point I’m surprised ;) )
When Greaves and MacAskill compare how many lives are saved by donating to AI safety versus the AMF, they compare these numbers as if they were equally as reliable and equally as capable of capturing something about reality.
I don’t think this is true. Whenever Greaves and MacAskill carry out a longtermist EV calculation in the paper it seems clear to me that their aim is to illustrate a point rather than calculate a reliable EV of a longtermist intervention. Their world government EV calculation starts with the words “suppose that...”. They also go on to say:
Of course, in either case one could debate these numbers. But, to repeat, all we need is that there be one course of action such that one ought to have a non-minuscule credence in that action’s having non-negligible long-lasting influence. Given the multitude of plausible ways by which one could have such influence, diverse points of view are likely to agree on this claim
This is the point they are trying to get across by doing the EV calculations.
Thanks for replying Ben, good stuff! Few thoughts.
I don’t think so. There’s no data on the problem, so there’s nothing to adjudicate between our disagreements. We can honestly try this if you want. What’s your credence?
I’ll concede that point!
Now, even if we could converge on some number, what’s the reason for thinking that number captures any aspect of reality?
I think a better response to the one I originally gave was to point out that the case for strong longtermism relies on establishing a sensible lower(ish) bound for total future population. Greaves and MacAskill want to convince you that (say) at least a quadrillion lives could plausibly lie in the future. I’m curious if you have an issue with that weaker claim?
I think your point about space exploration is absolutely right, and more than a nitpick. I would say two things: one is that I can imagine a world in which we could be confident that we would never colonise the stars (e.g. if the earth were more massive and we had 5 decades before the sun scorched us or something). Second, voicing support for the ‘anything permitted by physics can become practically possible’ camp indirectly supports an expectation of a large number of future lives, no?
But then we need to be clear that these estimates aren’t saying “anything precise about the actual world.” They should be treated completely differently than estimates based on actual data. But they’re not. When Greaves and MacAskill compare how many lives are saved by donating to AI safety versus the AMF, they compare these numbers as if they were equally as reliable and equally as capable of capturing something about reality.
Hmm — by my lights Greaves and MacAskill are fairly clear about the differences between the two kinds of estimate. If your reply is that doing any kind of (toy) EV calculation with both estimates just implies that they’re somehow “equally as capable of capturing something about reality”, then it feels like you’re begging the question.
There should be no daylight. Whatever daylight there is would have to be a result of purely subjective beliefs, and we shouldn’t lend this any credibility. It doesn’t belong alongside an actual statistical estimate.
I don’t understand what you mean here, which is partly my fault for being unclear in my original comment. Here’s what I had in mind: suppose you’ve run a small-scale experiment and collected your data. You can generate a bunch of statistical scores indicating e.g. the effect size, plus the chance of getting the results you got assuming the null hypothesis was true (p-value). Crucially (and unsurprisingly) none of those scores directly give you the likelihood of an effect (or the ‘true’ anything else). If you have reason to expect a bias in the direction of positive results (e.g. publication bias), then your guess about how likely it is that you’ve picked up on a real effect may in fact be very different from any statistic, because it makes use of information from beyond those statistics (i.e. your prior). For instance, in certain social psych journals, you might pick a paper at random, see that p < 0.05, and nonetheless be fairly confident that you’re looking at a false positive. So subjective credences (incorporating info from beyond the raw stats) do seem useful here. My guess is that I’m misunderstanding you, yell at me if I am.
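Here is a minimal sketch of the arithmetic I have in mind (the numbers are illustrative assumptions of mine, not estimates for any actual journal):

```python
# P(the effect is real | the result is significant), by Bayes' theorem.
# It depends on your prior about how often tested hypotheses are true and on
# typical statistical power -- information that is not in the reported p-value.

def prob_real_given_significant(prior_real, power, alpha=0.05):
    true_positives = power * prior_real
    false_positives = alpha * (1 - prior_real)
    return true_positives / (true_positives + false_positives)

# Field where few tested hypotheses are true and studies are underpowered:
print(prob_real_given_significant(prior_real=0.1, power=0.3))  # ~0.40
# Field with a better base rate and well-powered studies:
print(prob_real_given_significant(prior_real=0.5, power=0.9))  # ~0.95
```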
Subjective credences aren’t applicable to short term situations. (Again, when I say “subjective” there’s an implied “and based on no data”).
By ‘subjective credence’ I just mean degree of belief. It feels important that everyone’s on the same terminological page here, and I’m not sure any card-carrying Bayesians imply “based on no data” by “subjective”! Can you point me towards someone who has argued that subjective credences in this broader sense aren’t applicable even to straightforward ‘short-term’ situations?
Fair point about strong longtermism plausibly recommending slowing certain kinds of progress. I’m also not convinced — David Deutsch was an influence here (as I’m guessing he was for you). But the ‘wisdom outrunning technological capacity’ thing still rings true to me.
I take the implication here to be that we should stop developing technology and wait for our wisdom to catch up.
There’s two ways to close the gap, of course, and isn’t the obvious conclusion just to speed up the ‘wisdom’ side?
Which ties in to your last point. Correct me if I’m wrong, but I’m taking you as saying: to the extent that strong longtermism implies significant changes in global priorities, those changes are really worrying: the logic can justify almost any present sacrifices, there’s no closed feedback loop or error-correction mechanism, and it may imply a slowing down of technological progress in some cases. To the extent that strong longtermism doesn’t imply significant changes in global priorities, then it hardly adds any new or compelling reasons for existing priorities. So it’s either dangerous or useless or somewhere between the two.
I won’t stick up for strong longtermism, because I’m unsure about it, but I will stick up for semi-skimmed longtermism. My tentative response is that there are some recommendations that (i) are more-or-less uniquely recommended by this kind of longtermism, and (ii) are not dangerous or silly in the ways you suggest. One example is establishing kinds of political representation for future generations. Or funding international bodies like the BWC, spreading long-term thinking through journalism, getting fair legislative frameworks in place for when transformative / general AI arrives, or indeed for space governance.
Anyway, a crossover podcast on this would be amazing! I’ll send you a message.