I should admit at this point that I didn’t actually watch the Philosophy Tube video, so can’t comment on how this argument was portrayed there! And your response to that specific portrayal of it might be spot on.
I also agree with you that most existential risk work probably doesn’t need to rely on the possibility of ‘Bostromian’ futures (I like that term!) to justify itself. You only need extinction to be very bad (which I think it is), you don’t need it to be very very very bad.
But I think there must be some prioritisation decisions where it becomes relevant whether you are a weak longtermist (existential risk would be very bad and is currently neglected) or a strong longtermist (reducing existential risk by a tiny amount has astronomical expected value).
This is also a line of attack that EA is facing more and more, and the reply “well, yeah, but you don’t have to be on board with these sci-fi-sounding concepts to support work on existential risk” is one that people are understandably more suspicious of if they think the person making it actually is on board with those sci-fi-like arguments. It’s like when a vegan argues that a particular form of farming is unnecessarily cruel even if you’re otherwise OK with eating meat: it’s very natural to be suspicious of their true motivations. (I say this as a vegan who takes part in welfare campaigns.)