Hey Fin! Nice—lots here. I’ll respond to what I can. If I miss anything crucial just yell at me :) (BTW, also enjoying your podcast. Maybe we should have a podcast battle at some point … you can defend longtermism’s honour).
In any case: declaring that BE “has been refuted” seems unfairly rash.
Yep, this is fair. I’m imagining myself in the position of some random stranger outside of a fancy EA-gala, and trying to get people’s attention. So yes—the language might be a little strong (although I do really think Bayesianism doesn’t stand up to scrutiny if you drill down on it).
On the first point, it feels more accurate to say that these numbers are highly uncertain rather than totally arbitrary.
Sure, guessing that there will be between 1 billion and 1000 quadrillion people in the future is probably a better estimate than 1000 people. But it still leaves open a discomfortingly huge range. Greaves and MacAskill could easily have used half a quadrillion people, or 10 quadrillion people. Instead of trying to wrestle with this uncertainty, which is fruitless, we should just acknowledge that we can’t know and stop trying.
If it turned out that space colonisation was practically impossible, the ceiling would fall down on estimates for the size of humanity’s future. So there’s some information to go on — just very little.
Bit of a nitpick here, but space colonization isn’t prohibited by the laws of physics, so it can only be “practically impossible” based on our current knowledge. It’s just a problem to be solved. So this particular example couldn’t bring down the curtains on our expected value calculations.
Really? If you’re a rationalist (in the broad Popperian sense and the internet-cult sense), and we share common knowledge of each other’s beliefs, then shouldn’t we be able to argue towards closer agreement?
I don’t think so. There’s no data on the problem, so there’s nothing to adjudicate between our disagreements. We can honestly try this if you want. What’s your credence?
Now, even if we could converge on some number, what’s the reason for thinking that number captures any aspect of reality? Most academics were sympathetic to communism before it was tried; most physicists thought Einstein was wrong.
You can use bigger numbers in the sense that you can type extra zeroes on your keyboard, but you can’t use bigger numbers if you care about making sure your numbers fall reasonably in line with the available facts, right?
What are the available facts when it comes to the size of the future? There’s a reason these estimates are wildly different across papers: from 10^15 here to 10^68 (or something) from Bostrom, and everything in between. I’m gonna add mine in: 10^124 + 3.
The response is presumably: “sure, this guess is hugely uncertain. But better to give some number rather than none, and any number I pick is going to seem too precise to you. Crucially, I’m trying to represent something about my own beliefs — not that I know something precise about the actual world.”
Agree that this is probably the response. But then we need to be clear that these estimates aren’t saying “anything precise about the actual world.” They should be treated completely differently from estimates based on actual data. But they’re not. When Greaves and MacAskill compare how many lives are saved by donating to AI safety versus the AMF, they compare these numbers as if they were equally reliable and equally capable of capturing something about reality.
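To make the worry concrete, here’s a toy calculation (every number below is invented for illustration; none come from the paper):

```python
# Toy contrast between a data-backed estimate and a speculative longtermist EV.
# All numbers are hypothetical, chosen only to show the shape of the problem.

# Near-term estimate (AMF-style): grounded in trial data, so the plausible
# range spans well under one order of magnitude.
amf_low, amf_high = 1 / 5500, 1 / 2500   # hypothetical lives saved per dollar

# Longtermist estimate: a subjective credence times an assumed number of
# future lives. Published population figures span roughly 10^15 to 10^68.
p_avert_per_dollar = 1e-14               # subjective guess, not data
for future_lives in (1e15, 1e24, 1e68):
    ev = p_avert_per_dollar * future_lives
    print(f"future lives = {future_lives:.0e} -> EV = {ev:.0e} lives per dollar")

# The data-backed estimate moves by ~2x across its range; the longtermist one
# moves by 53 orders of magnitude depending on which population figure you pick.
```

Treating the second kind of output as comparable to the first is exactly what I’m objecting to.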
Where there’s lots of empirical evidence, there should be little daylight between your subjective credences and the probabilities that fall straight out of the ‘actual data’.
There should be no daylight. Whatever daylight there is would have to be a result of purely subjective beliefs, and we shouldn’t lend this any credibility. It doesn’t belong alongside an actual statistical estimate.
However, if you agree that subjective credences are applicable to innocuous ‘short-term’ situations with plenty of ‘data’, then you can imagine gradually pushing the time horizon (or some other source of uncertainty) all the way to questions about the very long-run future.
I think the above also answers this? Subjective credences aren’t applicable to short term situations. (Again, when I say “subjective” there’s an implied “and based on no data”).
Isn’t it the case that strong longtermism makes knowledge creation and accelerating progress seem more valuable, if anything? And would the world really generate less knowledge, or progress at a slower rate, if the EA community shifted priorities in a longtermist direction?
I’ve seen arguments to the contrary. Here for instance:
I spoke to one EA who made an argument against slowing down AGI development that I think is basically indefensible: that doing so would slow the development of machine learning-based technology that is likely to lead to massive benefits in the short/medium term. But by the own arguments of the AI-focused EAs, the far future effects of AGI dominate all other considerations by orders of magnitude. If that’s the case, then getting it right should be the absolute top priority, and virtually everyone agrees (I think) that the sooner AGI is developed, the higher the likelihood that we were ill prepared and that something will go horribly wrong. So, it seems clear that if we can take steps to effectively slow down AGI development we should.
There’s also the quote by Toby Ord (I think?) that goes something like: “We’ve grown technologically mature without acquiring the commensurate wisdom.” I take the implication here to be that we should stop developing technology and wait for our wisdom to catch up. But this misses how wisdom is generated in the first place: by solving problems.
When you believe the fate of an untold number of future people is on the line, then you can justify almost anything in the present. This is what I find so disturbing about longtermism. Many of the responses to my critique say things like: “Look, longtermism doesn’t mean we should throw out concern for the present, or stop focusing on problem-solving and knowledge creation, or stop improving our ethics”. But you can get those things without appealing to longtermism. What does longtermism buy you that other philosophies don’t, except for headaches when trying to deal with insanely big numbers? I see a lot of downsides, and no benefits that aren’t there in other philosophies. (Okay, harsh words to end, I know—but if anyone is still reading at this point I’m surprised ;) )
When Greaves and MacAskill compare how many lives are saved by donating to AI safety versus the AMF, they compare these numbers as if they were equally reliable and equally capable of capturing something about reality.
I don’t think this is true. Whenever Greaves and MacAskill carry out a longtermist EV calculation in the paper, it seems clear to me that their aim is to illustrate a point rather than to calculate a reliable EV of a longtermist intervention. Their world government EV calculation starts with the words “suppose that...”. They also go on to say:
Of course, in either case one could debate these numbers. But, to repeat, all we need is that there be one course of action such that one ought to have a non-minuscule credence in that action’s having non-negligible long-lasting influence. Given the multitude of plausible ways by which one could have such influence, diverse points of view are likely to agree on this claim
This is the point they are trying to get across by doing the EV calculations.
Thanks for replying, Ben, good stuff! A few thoughts.
I don’t think so. There’s no data on the problem, so there’s nothing to adjudicate between our disagreements. We can honestly try this if you want. What’s your credence?
I’ll concede that point!
Now, even if we could converge on some number, what’s the reason for thinking that number captures any aspect of reality?
I think a better response than the one I originally gave is to point out that the case for strong longtermism relies on establishing a sensible lower(ish) bound for total future population. Greaves and MacAskill want to convince you that (say) at least a quadrillion lives could plausibly lie in the future. I’m curious if you have an issue with that weaker claim?
I think your point about space exploration is absolutely right, and more than a nitpick. I would say two things: one is that I can imagine a world in which we could be confident that we would never colonise the stars (e.g. if the earth were more massive and we had 5 decades before the sun scorched us or something). Second, voicing support for the ‘anything permitted by physics can become practically possible’ camp indirectly supports an expectation of a large number of future lives, no?
But then we need to be clear that these estimates aren’t saying “anything precise about the actual world.” They should be treated completely differently from estimates based on actual data. But they’re not. When Greaves and MacAskill compare how many lives are saved by donating to AI safety versus the AMF, they compare these numbers as if they were equally reliable and equally capable of capturing something about reality.
Hmm — by my lights Greaves and MacAskill are fairly clear about the differences between the two kinds of estimate. If your reply is that doing any kind of (toy) EV calculation with both estimates just implies that they’re somehow “equally capable of capturing something about reality”, then it feels like you’re begging the question.
There should be no daylight. Whatever daylight there is would have to be a result of purely subjective beliefs, and we shouldn’t lend this any credibility. It doesn’t belong alongside an actual statistical estimate.
I don’t understand what you mean here, which is partly my fault for being unclear in my original comment. Here’s what I had in mind: suppose you’ve run a small-scale experiment and collected your data. You can generate a bunch of statistical scores indicating e.g. the effect size, plus the chance of getting results at least as extreme as yours assuming the null hypothesis is true (the p-value). Crucially (and unsurprisingly) none of those scores directly gives you the probability that the effect is real (or the ‘true’ anything else). If you have reason to expect a bias in the direction of positive results (e.g. publication bias), then your guess about how likely it is that you’ve picked up on a real effect may in fact be very different from any statistic, because it makes use of information from beyond those statistics (i.e. your prior). For instance, in certain social psych journals, you might pick a paper at random, see that p < 0.05, and nonetheless be fairly confident that you’re looking at a false positive. So subjective credences (incorporating info from beyond the raw stats) do seem useful here. My guess is that I’m misunderstanding you; yell at me if I am.
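To put a number on that last example, here’s a minimal sketch of the Bayes calculation I have in mind (all the inputs are hypothetical):

```python
# Posterior probability that a 'significant' finding reflects a real effect.
# Hypothetical inputs; the point is that the answer needn't track the p-value.

alpha = 0.05        # false positive rate at the significance threshold
power = 0.5         # chance a real effect yields p < 0.05 (assumed)
prior_real = 0.1    # prior: 10% of hypotheses tested in this field are true

# Bayes' rule: P(real | p < 0.05)
p_significant = power * prior_real + alpha * (1 - prior_real)
posterior_real = power * prior_real / p_significant
print(f"P(real effect | p < 0.05) = {posterior_real:.2f}")   # ~0.53

# Drop the prior to 1% and the posterior falls to ~0.09: most 'significant'
# results would be false positives, whatever the reported p-value says.
```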
Subjective credences aren’t applicable to short term situations. (Again, when I say “subjective” there’s an implied “and based on no data”).
By ‘subjective credence’ I just mean degree of belief. It feels important that everyone’s on the same terminological page here, and I’m not sure any card-carrying Bayesians imply “based on no data” by “subjective”! Can you point me towards someone who has argued that subjective credences in this broader sense aren’t applicable even to straightforward ‘short-term’ situations?
Fair point about strong longtermism plausibly recommending slowing certain kinds of progress. I’m also not convinced — David Deutsch was an influence here (as I’m guessing he was for you). But the ‘technological capacity outrunning wisdom’ worry still rings true to me.
I take the implication here to be that we should stop developing technology and wait for our wisdom to catch up.
There are two ways to close the gap, of course, and isn’t the obvious conclusion just to speed up the ‘wisdom’ side?
Which ties in to your last point. Correct me if I’m wrong, but I’m taking you as saying: to the extent that strong longtermism implies significant changes in global priorities, those changes are really worrying: the logic can justify almost any present sacrifices, there’s no closed feedback loop or error-correction mechanism, and it may imply a slowing down of technological progress in some cases. To the extent that strong longtermism doesn’t imply significant changes in global priorities, it hardly adds any new or compelling reasons for existing priorities. So it’s either dangerous or useless or somewhere between the two.
I won’t stick up for strong longtermism, because I’m unsure about it, but I will stick up for semi-skimmed longtermism. My tentative response is that there are some recommendations that (i) are more-or-less uniquely recommended by this kind of longtermism, and (ii) are not dangerous or silly in the ways you suggest. One example is establishing kinds of political representation for future generations. Or funding international bodies like the BWC, spreading long-term thinking through journalism, getting fair legislative frameworks in place for when transformative / general AI arrives, or indeed for space governance.
Anyway, a crossover podcast on this would be amazing! I’ll send you a message.