I strongly disagree that utilitarianism isn’t a sound moral philosophy, and I don’t understand the black-and-white distinction between longtermism and simply not wanting us all to die. I might be missing something, but there is surely at least some overlap between those two reasons for preventing AI risk.
I don’t know if it’s a “black and white distinction”, but surely there’s a difference between:
Existential risk is bad because the future could have a zillion people, so their combined moral weight dominates all other considerations.
Existential risk is bad because (i) I personally am going to die, (ii) my children are going to die, (iii) everyone I love is going to die, (iv) everyone I know is going to die, and also (v) humanity is not going to have a future (regardless of the number of people in it).
For example, something that “only” kills 99.99% of the population would be comparably bad by my standards (because i–iv still apply), whereas it would be far less bad by longtermist standards. Even something that “only” kills (say) everyone I know and everyone they know would be comparably bad for me, whereas utilitarianism would judge it a mere blip in comparison to human extinction.
Out of interest, if you aren’t an effective altruist, nor a longtermist, then what do you call yourself?
I call myself “Vanessa” :) Keep your identity small and all that. If you mean, do I have a name for my moral philosophy then… not really. We can call it “antirealist contractarianism”, I guess? I’m not that good at academic philosophy.