Ya, I’m skeptical of this, too. I’m skeptical that we can collect reliable evidence on the necessary scale and analyze it in a rigorous enough way to conclude much. Experimental and quasi-experimental studies on a huge scale (we’re talking astronomical stakes for longtermism, right?) don’t seem possible, but maybe? Something like this might be promising, but it might not help us weigh important considerations against each other.
On a slightly similar note, I know that Will MacAskill has argued that we should prevent human extinction on the basis of option value, and that this holds even if we think we would rather humanity go extinct. Granted, this argument does depend on global priorities research making progress on key questions. Do you have any thoughts on this argument?
I think it’s plausible, but at what point can we say it’s outweighed by other considerations? Why isn’t it now? I’d say it’s a case of complex cluelessness for me.
I haven’t actually read the whole essay by Will, but I think the gist is that we should avert extinction if:
1. We are unsure about whether extinction is good or bad / how good or how bad it is
2. We expect to be able to make good progress on this question (or at least that there’s a non-negligible probability that we can)
Given the current state of population ethics, I think the first statement is probably true. Credible people hold varying views (totalism, person-affecting views, suffering-focused ethics, etc.) that say different things about the value of human extinction.
Statement 2 is slightly more tricky, but I’m inclined to say that there is a non-negligible chance of us making good progress. In the grand scheme of things, population ethics is a very, very new discipline (I think it basically started with Parfit’s Reasons and Persons?) and we’re still figuring some of the basics out.
So maybe if, in a few hundred years, we’re still as uncertain about population ethics as we are now, the argument for avoiding human extinction based on option value would disappear. As it stands, however, I think the argument is fairly compelling.
So my counterargument is just that extinction is plausibly good in expectation on my views, so reducing extinction risk is not necessarily positive in expectation. Therefore it is not robustly positive, and I’d prefer something that is. I actually think world destruction would very likely be good, with concern for aliens as the only reason to avoid it, which seems extremely speculative. Although I suppose this might also be a case of complex cluelessness, since the stakes are high with aliens, and dealing with aliens could also go badly.
I’m a moral antirealist, and I expect I would never endorse a non-asymmetric population ethics. The procreation asymmetry (at least implying good lives can never justify even a single bad life) is among my strongest intuitions, and I’d sooner give up pretty much all others to keep it and remain consistent. Negative utilitarianism specifically is my “fallback” view if I can’t include other moral intuitions I have in a consistent way (and I’m pretty close to NU now, anyway).