80% seems reasonable. It’s hard to be confident about many things that far out, but:
i) We might be able to judge what things seem consistent with others. For example, it might be easier to say whether we’ll bring pigs to Alpha Centauri if we go, than whether we’ll ever go to Alpha Centauri.
ii) That we’ll terraform other planets is itself fairly speculative, so it seems fair to meet speculation with other speculation. There’s not much alternative.
iii) Inasmuch as we’re focussing on (what’s in my opinion) a narrow part of the whole probability space, like flesh-and-blood humans going to colonise other stars and bringing animals with them, we can develop approaches that seem most likely to work in that particular scenario, rather than finding something that would hypothetically work across the whole space.
I agree. However, I suppose that under an s-risk longtermist paradigm, even a tiny chance of spacefaring turning out a particular way could still be worth taking action to prevent, or could even be of utmost importance.
To wit, a lot of retorts to Abraham’s argument seem to me to be of the form “well, this seems rather unlikely to happen”, and I don’t think that argument actually succeeds.
And to reiterate for clarity: I’m not taking a particular stance on Abraham’s argument itself, only saying why I think this one particular counterargument doesn’t work for me.
Peter, do you find my arguments in the comments below persuasive? Basically, I tried to argue that the relative probability of extremely good outcomes is much higher than the relative probability of extremely bad outcomes, especially when weighted by moral value. (And I think this is sufficiently true for both classical utilitarians and people with a slight negative leaning.)
Part of the issue might be the subheading “Space colonization will probably include animals”.
If the heading had been ‘might’, then people would be less likely to object. Many things ‘might’ happen!
Good point. I agree.
That makes sense!