The informality of that equation makes it hard for me to know how to reason about it. For example,
T, D and F seem heavily interdependent.
I’m just not sure how to parse ‘computational sent-years spent non-solipsishly simulating almost-space-colonizing ancestral planets’. What does it mean for a year of sentient life to be spent simulating something? Do you think he means what fraction of experienced years exist in ancestor simulations? I’m still confused by this after reading the last paragraph.
I’m not sure what the expression’s value represents. Are we supposed to multiply some further estimate we have of longtermist work by 10^7? (if so, what estimate is it that’s so low that 10^7 isn’t enough of a multiplier to make it still eclipse all short termist work?)
If you feel like you understand it, maybe you could give me a concrete example of how to apply this reasoning?
For what it’s worth, I have much more prosaic reasons for doubting the value of explicitly longtermist work both in practice (the stuff I’ve discussed with you before that makes me feel like it’s misprioritised) and in principle (my instinct is that in situations that reduce to a kind of Pascalian mugging, xP(x), where x is a counterfactual payoff increase and P(x) is the probability of that payoff increase, approaches 0 as x tends to infinity).
I’m just not sure how to parse ‘computational sent-years spent non-solipsishly simulating almost-space-colonizing ancestral planets’.
I agree.
I think F = “sent-years spent simulating the beings on almost-space-colonizing ancestral planets”/“all sent-years of the universe”. Brian defines sent-years as follows:
I’ll define 1 sent-year as the amount of complexity-weighted experience of one life-year of a typical biological human. That is, consider the sentience over time experienced in a year by the median biological human on Earth right now. Then, a computational process that has 46 times this much subjective experience has 46 sent-years of computation. Computations with a higher density of sentience may have more sents even if they have fewer FLOPS.
I said Brian concluded that L/S = T*D/F, but this was after simplifying L/S = T*D/(E/N + F), where:
E is “the amount of sentience on Earth in the near term (say, the next century or two)”.
N is defined by: “On average, these civilizations [“that are about to colonize space”] will run computations whose sentience is equivalent to that of N human-years”.
Then Brian says:
Everyone agrees that E/N is very small, perhaps less than 10^-30 or something, because the far future could contain astronomical amounts of sentience [see e.g. Table 1 of Newberry 2021]. If F is not nearly as small (and I would guess that it’s not), then we can approximate L/S as T * D / F.
The simulation argument dampens future fanaticism because Brian assumes that E/N << F, in which case L/S = T*D/F, and therefore the case for prioritising the future no longer depends on its size. However, for the reasons you mentioned (we are not simulating our ancestors much), I feel like we should a priori expect E/N and F to be similar, and correlated, in which case L/S will still be huge unless it is countered by a very small D (i.e. if the typical low tractability argument against longtermism goes through).
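If it helps make the algebra concrete, here is a minimal numeric sketch of that formula; the ls_ratio helper and all input values are hypothetical illustrations, not Brian’s actual numbers.

```python
# Minimal numeric sketch of L/S = T*D / (E/N + F).
# All input values are hypothetical illustrations, not Brian's.

def ls_ratio(T, D, E_over_N, F):
    """Value of targeting the far future (L) relative to the short term (S)."""
    return T * D / (E_over_N + F)

# Case 1: E/N << F, so L/S ~= T*D/F and the size of the future drops out.
print(ls_ratio(T=0.1, D=1e-3, E_over_N=1e-30, F=1e-6))    # ~1e2

# Case 2: F is as tiny as E/N (we barely simulate our ancestors), so the
# E/N term matters again and L/S scales with the size of the future.
print(ls_ratio(T=0.1, D=1e-3, E_over_N=1e-30, F=1e-32))   # ~1e26

# Case 3: a very small D (low tractability) can pull L/S below 1
# regardless of the simulation considerations.
print(ls_ratio(T=0.1, D=1e-8, E_over_N=1e-30, F=1e-6))    # ~1e-3
```

The only point of the three cases is that the conclusion hinges on whether F really dwarfs E/N, and on D.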
I’m not sure what the expression’s value represents. Are we supposed to multiply some further estimate we have of longtermist work by 10^7? (if so, what estimate is it that’s so low that 10^7 isn’t enough of a multiplier to make it still eclipse all short termist work?)
I think L/S is just supposed to be a heuristic for how much to prioritise longtermist actions relative to neartermist ones. Brian’s inputs lead to 10^7, but they were mainly illustrative:
This [L/S = 10^7] happens to be bigger than 1, which suggests that targeting the far future is still ~10 million times better than targeting the short term. But this calculation could have come out as less than 1 using other possible inputs. Combined with general model uncertainty, it seems premature to conclude that far-future-focused actions dominate short-term helping. It’s likely that the far future will still dominate after more thorough analysis, but by much less than a naive future fanatic would have thought.
However, it seems to me that, even if one thinks that both E/N and F are super small, L/S could still be smaller than 1 due to super small D. This relates to your point that:
my instinct is that in situations that reduce to a kind of Pascalian mugging, xP(x) where x is a payoff size and P(x) is the probability of that payoff, approaches 0 as x tends to infinity
I share your instinct. I think David Thorstad calls that rapid diminution.
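As a toy illustration of that instinct (my own arbitrary decay exponents, not Thorstad’s formalisation): if one’s credence in a promised payoff of size x falls off faster than 1/x, then xP(x) shrinks towards 0.

```python
# Toy illustration of "rapid diminution": if credence P(x) in a promised
# payoff of size x decays faster than 1/x, then x*P(x) -> 0 as x grows.
# The decay exponents are arbitrary choices for illustration.

def expected_value(x, decay):
    credence = x ** -decay        # P(x) proportional to x^(-decay)
    return x * credence

for x in [1e3, 1e9, 1e30, 1e100]:
    print(f"x={x:.0e}  decay 1/x: {expected_value(x, 1.0):.1e}  "
          f"decay 1/x^2: {expected_value(x, 2.0):.1e}")
# With 1/x decay the expected value stays flat, so ever-larger promised
# payoffs keep dominating; with 1/x^2 decay it goes to 0, so they stop
# being decision-relevant.
```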
If you feel like you understand it, maybe you could give me a concrete example of how to apply this reasoning?
I think Brian’s reasoning works more or less as follows. Neglecting the simulation argument, if I save one life, I am only saving one life. However, if F = 10^-16[1] of sentience-years are spent simulating situations like my own, and the future contains N = 10^30 sentience-years, then me saving a life will imply saving F*N = 10^14 copies of the person I saved. I do not think the argument goes through because I would expect F to be super small in this case, such that F*N is similar to 1.
[1] Brian’s F = 10^-6 divided by the human population of 10^10.
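To spell out the arithmetic (the alternative “super small” F at the end is just an arbitrary illustration of my objection):

```python
# Arithmetic behind the "simulated copies" framing, using the numbers from
# the text (F per the footnote: Brian's 1e-6 over a population of 1e10).
F = 1e-6 / 1e10   # fraction of sentience-years simulating situations like mine
N = 1e30          # future sentience-years
print(F * N)      # 1e14 copies of the person whose life I save

# If F is instead many orders of magnitude smaller, as I would expect,
# the multiplier collapses back towards 1.
print(1e-30 * N)  # 1.0
```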
This [L/S = 10^7] happens to be bigger than 1, which suggests that targeting the far future is still ~10 million times better than targeting the short term. But this calculation could have come out as less than 1 using other possible inputs. Combined with general model uncertainty, it seems premature to conclude that far-future-focused actions dominate short-term helping. It’s likely that the far future will still dominate after more thorough analysis, but by much less than a naive future fanatic would have thought.
Appreciate the patient breakdown :)
This is more of a sidenote, but given all the empirical and model uncertainty in any far-future-oriented work, it doesn’t seem like adding a highly speculative counterargument with its own radical uncertainties should meaningfully shift anyone’s priors. It seems like a strong longtermist could accept Brian’s views at face value and say ‘but the possibility of L/S being vastly bigger than 1 means we should just accept the Pascalian reasoning and plow ahead regardless’, while a sceptic could point to rapid diminution and say no simulationy weirdness is necessary to reject these views.
(Sidesidenote: I wonder whether anyone has investigated the maths of this in any detail? I can imagine there being some possible proof by contradiction of RD, along the lines of ‘if there were some minimum amount that it was rational for the muggee to accept, a dishonest mugger could learn that and raise the offer beyond it whereas an honest mugger might not be able to, and therefore, when the mugger’s epistemics are taken into account, you should not be willing to accept that amount’. Though I can also imagine this might just end up as an awkward integral that you have to choose your values for somewhat arbitrarily.)
I think Brian’s reasoning works more or less as follows. Neglecting the simulation argument, if I save one life, I am only saving one life. However, if F = 10^-16[1] of sentience-years are spent simulating situations like my own, and the future contains N = 10^30 sentience-years, then me saving a life will imply saving F*N = 10^14 copies of the person I saved. I do not think the argument goes through because I would expect F to be super small in this case, such that F*N is similar to 1.
For the record, this kind of thing is why I love Brian (aside from him being a wonderful human) - I disagree with him vigorously on almost every point of detail on reflection, but he always comes up with some weird take. I had either forgotten or never saw this version of the argument, and was imagining the version closer to Pablo’s that talks about the limited value of the far future rather than the increased near-term value.
That said, I still think I can basically C&P my objection. It’s maybe less that I think F is likely to be super small, and more that, given our inability to make any intelligible statements about our purported simulators’ nature or intentions, it feels basically undefined (or, if you like, any statement whatsoever about its value is ultimately going to be predicated on arbitrary assumptions), making the equation just not parse (or not output any value that could guide our behaviour).