My statement above (not a ‘definition’, right?) is that
If you are not a total utilitarian, you don’t value “creating more lives” … at least not without some diminishing returns in your valuation. … perhaps you value reducing suffering or increasing happiness for people, now and in the future, who will definitely or very likely exist...
then it is not clear that
“[A] reducing extinction risk is better than anything else we can do” …
because there’s also a strong case that, if the world is getting better, then helping people and animals right now is the most cost-effective thing we can do.
Without the ‘extinction rules out an expected number of future people that is many orders of magnitude larger’ cost, there is no clear case that [A] reducing extinction risk must be the best use of our resources.
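Just to spell out the arithmetic behind that ‘orders of magnitude’ point, here is a back-of-the-envelope comparison; every number in it (the $10^{-4}$ reduction in extinction probability, the $10^{16}$ expected future lives, the $10^{9}$ present lives helped) is a purely illustrative assumption of mine, not a figure from the discussion above:

% Illustrative only: Delta p, N_future, N_now are assumed, not estimated.
\[
  \underbrace{\Delta p \cdot N_{\text{future}}}_{\substack{\text{expected future lives saved}\\ \text{by x-risk reduction}}}
  = 10^{-4} \cdot 10^{16} = 10^{12}
  \;\gg\;
  \underbrace{N_{\text{now}}}_{\text{people helped directly}} \approx 10^{9}.
\]

Drop the astronomical $N_{\text{future}}$ term, as someone who doesn’t value “creating more lives” might, and the comparison stops being so lopsided.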
Now, suppose I were a total population utilitarian. Then there may be a strong case for [A]. But even then, maybe not: this seems to depend on empirical claims.
To me ‘reducing extinction risks’ seemed fairly obviously tractable, but on second thought I can imagine cases in which even this would be doubtful. Maybe, e.g., reducing the risk of nuclear war in the next 100 years actually has little impact on extinction risk, because extinction is so likely anyway?!
Another important claim seems to be that there is a real likelihood of expansion beyond Earth to other planets/solar systems etc. Yet another is that ‘digital beings can have positively valenced existences’.