I should admit at this point that I didn't actually watch the Philosophy Tube video, so can't comment on how this argument was portrayed there! And your response to that specific portrayal of it might be spot on.
I also agree with you that most existential risk work probably doesn't need to rely on the possibility of "Bostromian" futures (I like that term!) to justify itself. You only need extinction to be very bad (which I think it is); you don't need it to be very very very bad.
But I think there must be some prioritisation decisions where it becomes relevant whether you are a weak longtermist (existential risk would be very bad and is currently neglected) or a strong longtermist (reducing existential risk by even a tiny amount has astronomical expected value).
This is also a line of attack that EA is facing more and more, and the reply "well, yeah, but you don't have to be on board with these sci-fi-sounding concepts to support work on existential risk" is one that people are understandably suspicious of if they think the person making it is themselves on board with those more sci-fi-like arguments. It's like when a vegan tries to make the case that a particular form of farming is unnecessarily cruel to someone who is otherwise fine with eating meat. It's very natural to be suspicious of their true motivations. (I say this as a vegan who takes part in welfare campaigns.)