I agree. However, I suppose under an s-risk longtermist paradigm, a tiny chance of spacefaring turning out in a particular way could still be worth taking action to prevent, or could even be of utmost importance.
To wit, a lot of retorts to Abraham’s argument appear to me to be of the form “well, this seems rather unlikely to happen”, and I don’t think that kind of argument actually succeeds.
And to reiterate for clarity, I’m not taking a particular stance on Abraham’s argument itself—only saying why I think this one particular counterargument doesn’t work for me.
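(To make the shape of that argument concrete with purely made-up numbers, not estimates: if some bad spacefaring outcome had probability $p = 10^{-6}$ but contained on the order of $|V| = 10^{20}$ suffering-years, its expected disvalue would be

$$\mathbb{E}[\text{disvalue}] = p \cdot |V| = 10^{-6} \times 10^{20} = 10^{14} \text{ suffering-years},$$

which could still dominate a longtermist calculus despite the tiny probability.)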
Peter, do you find my arguments in the comments below persuasive? Basically I tried to argue that the relative probability of extremely good outcomes is much higher than the relative probability of extremely bad outcomes, especially when weighted by moral value. (And I think this holds both for classical utilitarians and for people with a slightly negative leaning.)
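(Sketching the comparison I mean, with symbols of my own choosing: write $p_g, V_g$ for the probability and value of the extremely good outcomes, $p_b, V_b$ for the extremely bad ones, and $w \ge 1$ for the extra weight a negative-leaning view puts on suffering. Then the future is positive in expectation so long as

$$p_g V_g > w \, p_b |V_b|,$$

and my claim is that the ratio $p_g V_g / (p_b |V_b|)$ is large enough that this inequality survives even a moderately large $w$.)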
Part of the issue might be the subheading “Space colonization will probably include animals”.
If the heading had said ‘might’ instead of ‘probably’, people would be less likely to object. Many things ‘might’ happen!
Good point. I agree.
That makes sense!