Can you give some examples of infinite utility?

We figure out how to prevent the heat death of the universe indefinitely. (Technically this doesn’t lead to infinite utility, since you could still destroy everything of value in the universe, but by driving the probability of that low enough you can get arbitrarily large amounts of utility, which leads to the same fanatical conclusions.)
We figure out that a particular configuration of matter produces experiences so optimized for pleasure that it is infinite utility (i.e. we’d accept any finite amount of torture to create it even for one second).
We discover a previously unknown law of physics that allows us to run hypercomputers which can run infinite simulations of happy people.
None of these seem particularly likely, but I’m not literally certain that they can’t happen / that I can’t affect their probability, and if you accept fanaticism then you should be striving to increase the probability of making something like this happen. (Which, to be clear, could be the right thing to do! But it’s not how longtermists tend to reason in practice.)
you should be striving to increase the probability of making something like this happen. (Which, to be clear, could be the right thing to do! But it’s not how longtermists tend to reason in practice.)
As you said in your previous comment, we essentially are increasing the probability of these things happening by reducing x-risk. I’m not convinced we don’t tend to reason fanatically in practice—after all, Bostrom’s astronomical waste argument motivates reducing x-risk by raising the possibility of achieving incredibly high levels of utility (in a footnote he says he is setting aside the possibility of infinitely many people). So reducing x-risk and trying to achieve existential security seems to me to be consistent with fanatical reasoning.
It’s interesting to consider what we would do if we actually achieved existential security and entered the long reflection. If we take fanaticism seriously at that point (and I think we will) we may well go for infinite value. It’s worth noting though that certain approaches to going for infinite value will probably dominate other approaches by having a higher probability of success. So we’d probably decide on the most promising possibility and run with that. If I had to guess I’d say we’d look into creating infinitely many digital people with extremely high levels of utility.
I’m not sure whether you are disagreeing with me or not. My claims are (a) accepting fanaticism implies choosing actions that most increase probability of infinite utility, (b) we are not currently choosing actions based on how much they increase probability of infinite utility, (c) therefore we do not currently accept fanaticism (though we might in the future), (d) given we don’t accept fanaticism we should not use “fanaticism is fine” as an argument to persuade people of longtermism.
Is there a specific claim there you disagree with? Or were you riffing off what I said to make other points?
Yes I disagree with b) although it’s a nuanced disagreement.
I think the EA longtermist movement is currently choosing the actions that most increase probability of infinite utility, by reducing existential risk.
What I’m less sure of is that achieving infinite utility is the motivation for reducing existential risk. It might just be that achieving “incredibly high utility” is the motivation for reducing existential risk. I’m not too sure on this.
My point about the long reflection was that when we reach this period it will be easier to see the fanatics from the non-fanatics.
I think the EA longtermist movement is currently choosing the actions that most increase probability of infinite utility, by reducing existential risk.
This is not in conflict with my claim (b). My claim (b) is about the motivation or reasoning by which actions are chosen. That’s all I rely on for the inferences in claims (c) and (d).
I think we’re mostly in agreement here, except that perhaps I’m more confident that most longtermists are not (currently) motivated by “highest probability of infinite utility”.
Yeah that’s fair. As I said I’m not entirely sure on the motivation point.
I think in practice EAs are quite fanatical, but only to a certain point. So they probably wouldn’t give in to a Pascal’s mugging, but many of them are willing to give to a long-term future fund over GiveWell charities - which is quite a bit of fanaticism! So justifying fanaticism still seems useful to me, even if EAs put their fingers in their ears with regards to the most extreme conclusion...
many of them are willing to give to a long-term future fund over GiveWell charities
It really doesn’t seem fanatical to me to try to reduce the chance of everyone dying, when you have a specific mechanism by which everyone might die that doesn’t seem all that unlikely! That’s the right action according to all sorts of belief systems, not just longtermism! (See also these posts.)
Hmm I do think it’s fairly fanatical. To quote this summary:
For example, it might seem fanatical to spend $1 billion on ASI-alignment for the sake of a 1-in-100,000 chance of preventing a catastrophe, when one could instead use that money to help many people with near-certainty in the near-term.
The probability that any one longtermist’s actions will actually prevent a catastrophe is very small. So I do think longtermist EAs are acting fairly fanatically.
Another way of thinking about it is that, whilst the probability of x-risk may be fairly high, the x-risk probability decrease any one person can achieve is very small. I raised this point on Neel’s post.
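To put rough numbers on the quoted example (a purely illustrative sketch: the $1 billion and 1-in-100,000 figures come from the summary, while the value-at-stake and near-term benchmark figures are my own assumptions):

```python
# Back-of-the-envelope expected value for the quoted example.
# The spend and probability come from the summary quoted above;
# everything else is an illustrative assumption.

spend = 1e9        # $1 billion on ASI alignment (from the quote)
p_prevent = 1e-5   # 1-in-100,000 chance of preventing a catastrophe (from the quote)

# Assumed value at stake if the catastrophe is prevented, in lives.
# 8e9 counts only people alive today; a longtermist would plug in a
# vastly larger number for the value of the long-term future.
for lives_at_stake in (8e9, 1e15):
    expected_lives = p_prevent * lives_at_stake
    cost_per_expected_life = spend / expected_lives
    print(f"stakes {lives_at_stake:.0e}: {expected_lives:,.0f} expected lives, "
          f"${cost_per_expected_life:,.2f} per expected life")

# Assumed near-term benchmark: a few thousand dollars per life saved with
# near-certainty. With 8e9 at stake the low-probability bet looks worse;
# with astronomically large stakes it dominates, which is exactly the kind
# of reasoning at issue here.
```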
By this logic it seems like all sorts of ordinary things are fanatical:
Buying less chicken from the grocery store is fanatical (this only reduces the number of suffering chickens if your buying less chicken was the tipping point that caused the grocery store to order one less shipment of chicken, and that one fewer order was the tipping point that caused the factory farm to reduce the number of chickens it aimed to produce; this seems very low probability)
Donating small amounts to AMF is fanatical (it’s very unlikely that your $25 causes AMF to do another distribution beyond what it would have otherwise done)
Voting is fanatical (the probability of any one vote swinging the outcome is very small)
Attending a particular lecture of a college course is fanatical (it’s highly unlikely that missing that particular lecture will make a difference to e.g. your chance of getting the job you want).
Generally I think it’s a bad move to take a collection of very similar actions and require that each individual action within the collection be reasonably likely to have an impact.
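To make that concrete, here is a rough sketch with made-up numbers for the chicken example, showing how a tiny per-action probability of tipping an outcome can still leave the expected impact per action intact:

```python
# Illustrative tipping-point model for the chicken example (made-up numbers).
# Suppose the supply chain adjusts production only in batches of 1,000 birds.
batch_size = 1_000           # birds per marginal production decision (assumption)
p_tipping = 1 / batch_size   # chance your forgone purchase is the one that tips a batch

# Per-action view: it is very unlikely your single purchase changes anything.
print(f"P(your purchase tips a batch) = {p_tipping:.3%}")   # 0.100%

# Expected-value view: the tiny probability multiplies a large batch effect,
# so the expected effect per forgone chicken is still about one chicken.
expected_birds_averted = p_tipping * batch_size
print(f"Expected birds averted per purchase = {expected_birds_averted:.1f}")

# The same structure applies to votes, small donations, or one person's
# contribution to x-risk reduction: per-action probability is tiny, but
# per-action expected impact need not be.
```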
To quote this summary
I don’t know of anyone who (a) is actively working on reducing the probability of catastrophe and (b) thinks we only reduce the probability of catastrophe by 1-in-100,000 if we spend $1 billion on it. Maybe Eliezer Yudkowsky and Nate Soares, but probably not even them. The summary is speaking theoretically; I’m talking about what happens in practice.
Probabilities are on a continuum. It’s subjective at what point fanaticism starts. You can call those examples fanatical if you want to, but the probabilities of success in those examples are probably considerably higher than in the case of averting an existential catastrophe.
I think the probability that my personal actions avert an existential catastrophe is higher than the probability that my personal vote in the next US presidential election would change its outcome.
I think I’d plausibly say the same thing for my other examples; I’d have to think a bit more about the actual probabilities involved.
That’s fair enough, although when it comes to voting I mainly do it for personal pleasure / so that I don’t have to lie to people about having voted!
When it comes to something like donating to GiveWell charities on a regular basis / going vegan for life, I think one can probably have greater than 50% belief that they will genuinely save lives / avert suffering. Any single donation or choice to avoid meat will have far lower probability, but it seems fair to consider doing these things over a longer period of time, as that is typically what people do (and what someone who chooses a longtermist career essentially does).
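As a quick sketch of that aggregation point (the per-donation probability below is an assumption, purely for illustration):

```python
# How repeated low-probability actions aggregate over a lifetime (illustrative).
p_single = 0.005            # assumed chance any one donation counterfactually saves a life
donations_per_year = 12
years = 30

n = donations_per_year * years
p_at_least_one = 1 - (1 - p_single) ** n
print(f"P(at least one of {n} donations saves a life) = {p_at_least_one:.1%}")
# With these assumptions the lifetime probability is comfortably above 50%,
# even though each individual donation is a long shot.
```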
Why consider only a single longtermist career in isolation, but consider multiple donations in aggregate?
Given that you seem to agree voting is fanatical, I’m guessing you want to consider the probability that an individual’s actions are impactful, but why should the locus of agency be the individual? Seems pretty arbitrary.
If you agree that voting is fanatical, do you also agree that activism is fanatical? The addition of a single activist is very unlikely to change the end result of the activism.
Why consider only a single longtermist career in isolation, but consider multiple donations in aggregate?
A longtermist career spans decades, as would going vegan for life or donating regularly for decades. So it was mostly a temporal thing, trying to somewhat equalise the commitment associated with different altruistic choices.
but why should the locus of agency be the individual? Seems pretty arbitrary.
Hmm well aren’t we all individuals making individual choices? So ultimately what is relevant to me is if my actions are fanatical?
If you agree that voting is fanatical, do you also agree that activism is fanatical?
Pretty much yes. To clarify—I have never said I’m against acting fanatically. I think the arguments for acting fanatically, particularly the one in this paper, are very strong. That said, something like a Pascal’s mugging does seem a bit ridiculous to me (but I’m open to the possibility I should hand over the money!).
Hmm well aren’t we all individuals making individual choices? So ultimately what is relevant to me is if my actions are fanatical?
We’re all particular brain cognitions that only exist for ephemeral moments before our brains change and become a new cognition that is similar but not the same. (See also “What counts as death?”.) I coordinate both with the temporally-distant (i.e. future) brain cognitions that we typically call “me in the past/future” and with the spatially-distant brain cognitions that we typically call “other people”. The temporally-distant cognitions are more similar to current-brain-cognition than the spatially-distant cognitions but it’s fundamentally a quantitative difference, not a qualitative one.
That said, something like a Pascal’s mugging does seem a bit ridiculous to me (but I’m open to the possibility I should hand over the money!).
By “fanatical” I want to talk about the thing that seems weird about Pascal’s mugging and the thing that seems weird about spending your career searching for ways to create infinitely large baby universes, on the principle that it slightly increases the chance of infinite utility.
If you agree there’s something weird there and that longtermists don’t generally reason using that weird thing and typically do some other thing instead, that’s sufficient for my claim (b).
Certainly agree there is something weird there!
Anyway I don’t really think there was too much disagreement between us, but it was an interesting exchange nonetheless!