To be clear: you're arguing that we should be agnostic (and, more strongly, take others to also be utterly clueless) about whether it would be good or bad for everyone to die?
I think this is a really good example of what I was talking about in my post, It's Not Wise to be Clueless.
If you think that, in general, justified belief is incompatible with "judgment calls", then radical skepticism immediately follows. You can't even establish, to this standard, that the external world exists. I take that to show that there's a problem with the epistemic standards you're assuming.
It's OK (indeed, essential) to make judgment calls, and we should simply try to exercise better rather than worse judgment. There are, of course, tricky questions about how best to do that. But if there's anything that we've learned from philosophy since Descartes, it's that skeptical calls to abjure disputable judgments altogether are… not feasible.
Thanks for engaging with this, Richard!
I think I am making a much weaker claim than this. While I suggest that the EDA argument I raise is valid, I do not argue that it is strong to the point where optimistic longtermism is unwarranted. Also, the argument itself does not say what people should believe if they do not endorse optimistic longtermism (an alternative to cluelessness is pessimistic longtermism; I do not say anything about which one is the most appropriate alternative to optimistic longtermism if the EDA argument is strong enough). Sorry if my writing was unclear.
> whether it would be good or bad for everyone to die
Maybe a nitpick, but I find this choice of words quite unfair, as it implicitly appeals to commonsensical intuitions that seem to have nothing to do with longtermism (to implicitly back your opinion that we know X-risk reduction is good from a longtermist perspective). You do something very similar multiple times in It's Not Wise to be Clueless.
> If you think that, in general, justified belief is incompatible with "judgment calls"
I didn't say that. I said that we ought to wonder whether these judgment calls are reliable, a claim you seem to agree with when you write:
> we should simply try to exercise better rather than worse judgment
Now, you seem much more convinced than me that our judgment calls with regard to the long-term value of X-risk reduction come from a reliable source (such as an evolutionary pressure selecting correct longtermist beliefs, whether directly or indirectly) rather than from evolutionary pressures towards pro-natalist beliefs. In It's Not Wise to be Clueless, the justification you provide for something in this vicinity[1] is that we ought to start with the prior that something like X-risk reduction is good for reasons similar to those for which we should start with the prior that the sun will rise tomorrow. But I think Jesse quite accurately pointed out the disanalogy and the problem with your argument in his comment. Do you have another argument and/or an objection to Jesse's reply that you are happy to share?
EDIT: actually, not sure this is related. You don't seem to argue that our judgment calls are truth-tracking. You argue that there is a rational requirement to start with a certain prior (i.e., you implicitly suggest that all rational agents should agree with you on X-risk reduction without having to make judgment calls, in fact).
Jesse had written:
> I don't think penalizing complexity is enough to escape radical skepticism in general. Consider the "universe popped into existence (fully-formed) 5 minutes ago" hypothesis. It's not obvious that this is more complex than the alternative hypothesis that includes the past five minutes PLUS billions of years before that. One could try to argue for this claim, but I don't think that our confidence in history should be *contingent* on that extremely contentious philosophical project working out successfully!
I just posted the following reply to Jesse:
> But to clarify: I don't think I say anything much in that post about "the reasons why we should start with" various anti-skeptical priors, and I'm certainly not committed to saying that there are "similar reasons" in every anti-skeptical case. The similarity I point to is simply that we clearly should have anti-skeptical priors. "Why" is a separate question (if it has an answer at all, the answer may vary from case to case).
On whether we agree: When I talk about exercising better rather than worse judgment, I take success here to be determined by the contents of our judgments. Some claims warrant higher credence than others, and we should try to have our credences match the objectively warranted level as closely as possible.
But that's quite different from focusing on whether our judgments stem from a "reliable source". I think there's very little chance that you could show that almost any of your philosophical beliefs (including this very epistemic demand) stem from a source that we can independently demonstrate to be reliable. I think the kind of higher-order inquiry you're proposing is a dead end: you can't really judge which philosophical dispositions are reliable until you've determined which philosophical beliefs are true.
To illustrate with a couple of concrete examples:
(1) You claim that "an evolutionary pressure toward pro-natalist beliefs" is an "unreliable" source. But that isn't unreliable if pro-natalism is (broadly) correct.
(2) Compare evolutionary pressures to judge that pain is bad. A skeptic might claim this source is "unreliable", but we needn't accept that claim. Since pain is bad, when evolution disposes us to believe this, it is disposing us towards a true belief. (To simply assert this obviously won't suffice to convince a skeptic, but the lesson of post-Cartesian epistemology is that trying to convince skeptics is a fool's game.)
Re (1): I mean, say we know the reason why Alice is a pro-natalist is 100% due to the mere fact that this belief was evolutionarily advantageous for her ancestors (and 0% due to good philosophical reasoning). This would discredit her belief, right? This wouldn't mean pro-natalism is incorrect. It would just mean that if it is correct, it is for reasons that have nothing to do with what led Alice to endorse it. She just happened to luckily be "right for the wrong reasons". Do you at least agree with this in this particular contrived example, or do you think that evolutionary pressures cannot ever be a reason to question our beliefs?
(Waiting for your answer on this before potentially responding to the rest as I think this will help us pin down the crux.)
Philosophical truths are causally inefficacious, so we already know that there is a causal explanation for any philosophical belief you have that (one could characterize as) having "nothing to do with" the reasons why it is true. So if you accept that causal condition as sufficient for debunking, you cannot have any philosophical beliefs whatsoever.
Put another way: we should already be "questioning our beliefs"; spinning out a causal debunking story offers nothing new. It's just an isolated demand for rigor, when you should already be questioning everything, and forming the overall most coherent belief-set you can in light of that questioning.
Compare my response to Parfit:
> We do better, I argue, to regard the causal origins of a (normative) belief as lacking intrinsic epistemic significance. The important question is instead just whether the proposition in question is itself either intrinsically credible or otherwise justified. Parfit rejects this (p. 287):
>
> > Suppose we discover that we have some belief because we were hypnotized to have this belief, by some hypnotist who chose at random what to cause us to believe. One example might be the belief that incest between siblings is morally wrong. If the hypnotist's flipped coin had landed the other way up, he would have caused us to believe that such incest is not wrong. If we discovered that this was how our belief was caused, we could not justifiably assume that this belief was true.
>
> I agree that we cannot just assume that such a belief is true (but this was just as true before we learned of its causal origins; the hypnotist makes no difference). We need to expose it to critical reflection in light of all else that we believe. Perhaps we will find that there is no basis for believing such incest to be wrong. Or perhaps we will find a basis after all (perhaps on indirect consequentialist grounds). Either way, what matters is just whether there is a good justification to be found or not, which is a matter completely independent of us and how we originally came by the belief. Parfit commits the genetic fallacy when he asserts that the causal origins "would cast grave doubt on the justifiability of these beliefs." (288)
Note that "philosophical reasoning" governs how we update our beliefs, iron out inconsistencies, etc. But the raw starting points are not reached by "reasoning". (What would you be reasoning from, if you don't already accept any premises?) So your assumed contrast between "good philosophical reasoning" and "suspicious causal forces that undermine belief" would actually undermine all beliefs, once you trace them back to foundational premises.
The only way to actually maintain coherent beliefs is to make your peace with having starting points that were not themselves determined via a rational process. Such causal "debunking" gives us a reason to take another look at our starting points, and consider whether (in light of everything we now believe) we want to revise them. But if the starting points still seem right to us, in light of everything, then it has to be reasonable to stick with them whatever their original causal basis may have been.
Overall, the solution is just to assess the first-order issues on their merits. "Debunking" arguments are a sideshow. They should never convince anyone who shouldn't already have been equally convinced on independent (first-order) grounds.
Imagine you and I have laid out all the possible considerations for and against reducing X-risks and still disagree (because we make different opaque judgment calls when weighing these considerations against one another). Then, do you agree that we have nothing left to discuss other than whether any of our judgment calls correlate with the truth?
(This, on its own, doesn't prove anything about whether EDAs can ever help us; I'm just trying to pin down which assumption I'm making that you don't, or vice versa.)
Probably nothing left to discuss, period. (Which judgment calls we take to correlate with the truth will simply depend on what we take the truth to be, which is just what's in dispute. I don't think there's any neutral way to establish whose starting points are more intrinsically credible.)
Oh interesting.
> I don't think there's any neutral way to establish whose starting points are more intrinsically credible.
So do I have any good reason to favor my starting points (/judgment calls) over yours, then? Whether to keep mine or to adopt yours becomes an arbitrary choice, no?
It depends on what constraints you put on what can qualify as a "good reason". If you think that a good reason has to be "neutrally recognizable" as such, then there'll be no good reason to prefer any internally-coherent worldview over any other. That includes some really crazy (by our lights) worldviews. So we may instead allow that good reasons aren't always recognizable by others. Each person may then take themselves to have good reason to stick with their starting points, though perhaps only one is actually right about this; and since it isn't independently verifiable which, there would seem to be an element of epistemic luck to it all. (A disheartening result, if you had hoped that rational argumentation could guarantee that we would all converge on the truth!)
I discuss this epistemic picture in a bit more detail in "Knowing What Matters".
Nice, I see. I'll go read that in more detail. Thanks for taking the time to clarify your view in this thread. Glad we identified the crux. :)
I don't think this response engages with the argument that judgment calls about our impact on net welfare over the whole cosmos are extraordinary claims, so they should be held to a high epistemic standard. What do you think of my points on this here and in this thread?
I think it's conceptually confused to use the term "high epistemic standards" to favor imprecise credence or suspended judgment over using one's best judgment. I don't think the former two are automatically more epistemically responsible.
Suspended judgment may be better than forming a bad precise judgment, but worse than forming a good precise judgment. Nothing in the concept of "high standards" should necessarily lead us to prioritize avoiding the risk of bad judgment over the risk of failing to form a good judgment when we could and should have.
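To make that trade-off concrete, here's a toy numerical sketch. The framing is my own illustrative assumption (not from either post): accuracy is scored with a Brier score, and "suspended judgment" is modeled as a flat 0.5 credence in a proposition that happens to be true.

```python
# Toy illustration (assumed framing: accuracy measured by the Brier score,
# suspended judgment modeled as a flat 0.5 credence). Lower scores are better.

def brier(credence: float, truth: float) -> float:
    """Squared distance between a credence and the truth value (1.0 if true)."""
    return (credence - truth) ** 2

truth = 1.0  # suppose the proposition in question is in fact true

good_precise = brier(0.9, truth)  # 0.01 -> a good determinate judgment
suspended    = brier(0.5, truth)  # 0.25 -> suspending judgment
bad_precise  = brier(0.1, truth)  # 0.81 -> a bad determinate judgment

# Suspension beats the bad judgment but loses to the good one:
assert good_precise < suspended < bad_precise
```

On this (stipulated) way of scoring things, suspension is a middling strategy: safer than judging badly, costlier than judging well, which is all the comment above is claiming.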
I've written about this more (with practical examples from pandemic policy disputes) in "Agency and Epistemic Cheems Mindset".
I don't see how this engages with the arguments I cited, or the cited post more generally. Why do you think it's plausible to form a (non-arbitrary) determinate judgment about these matters? Why think these determinate judgments are our "best" judgment, when we could instead have imprecise credences that don't narrow things down beyond what we have reason to?
We disagree about "what we have reason to" think about the value of humanity's continued existence; that's precisely the question in dispute. I might as well ask why you limit yourself to (widely) imprecise credences that don't narrow things down nearly enough (or as much as we have reason to).
The topics under dispute here (e.g. whether we should think that human extinction is worse in expectation than humanity's continued existence) involve ineradicable judgment calls. The OP wants to call pro-humanity judgment calls "suspicious". I've pointed out that I think their reasons for suspicion are insufficient to overturn such a datum of good judgment as "it would be bad if everyone died." (I'm not saying it's impossible to overturn this verdict, but it should take a lot more than mere debunking arguments.)
Incidentally, I think the tendency of some in the community to be swayed to "crazy town" conclusions on the basis of such flimsy arguments is a big part of why many outsiders think EAs are unhinged. It's a genuine failure mode that's worth being aware of; the only way to avoid it, I suspect, is to have robustly sensible priors that are not so easily swayed without a much stronger basis.
Anyway, that was my response to the OP. You then complained that my response to the OP didn't engage with your posts. But I don't see why it would need to. Your post treats broad imprecision as a privileged default; my previous reply explained why I disagree with that starting point. Your own post links to further explanations I've given, here, about how sufficiently imprecise credences lead to crazy verdicts. Your response (in your linked post) dismisses this as "motivated reasoning," which I don't find convincing.
To mandate broadly imprecise credences on the topic at hand would be to defer overly much to a formal apparatus which, in virtue of forcing (with insufficient reason) a kind of practical neutrality about whether it would be bad for everyone to die, is manifestly unfit to guide high-stakes decision-making. That's my view. You're free to disagree with it, of course.
I worry we're going to continue to talk past each other. So I don't plan to engage further. But for other readers' sake:
I definitely don't treat broad imprecision as "a privileged default". In the post I explain the motivation for having more or less severely imprecise credences in different hypotheses. The heart of it is that adding more precision, beyond what the evidence and plausible foundational principles merit, seems arbitrary. And you haven't explained why your bottom-line intuition (about which decisions are good w.r.t. a moral standard as extremely far-reaching as impartial beneficence[1]) would constitute evidence or a plausible foundational principle. (To me this seems pretty clearly different from the kind of intuition that would justify rejecting radical skepticism.)
As I mention in the part of the post I linked, here.
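To picture what's formally at issue between us, here is a minimal sketch; the interval representation and every number in it are my own illustrative assumptions, not anything from either of our posts. An imprecise credence is modeled as a range of admissible probabilities, and a verdict follows only if the expected value has the same sign across the whole range.

```python
# Minimal sketch (illustrative values only): an imprecise credence that the
# long-run future is net positive, represented as an interval [p_low, p_high].

V_GOOD, V_BAD = 100.0, -100.0  # stipulated values of good vs. bad futures

def ev_bounds(p_low: float, p_high: float) -> tuple[float, float]:
    """Bounds on expected value across every precise credence in the interval."""
    ev = lambda p: p * V_GOOD + (1 - p) * V_BAD
    return ev(p_low), ev(p_high)  # ev is increasing in p here

print(ev_bounds(0.7, 0.9))  # (40.0, 80.0): positive however the interval is sharpened
print(ev_bounds(0.3, 0.7))  # (-40.0, 40.0): sign indeterminate -> no verdict either way
```

The disagreement is then over how wide the interval should be: wide enough to straddle zero (practical neutrality) or narrow enough to license a determinate verdict.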
> [...] whether it would be good or bad for everyone to die
I'm sorry for not engaging with the rest of your comment (I'm not very knowledgeable on questions of cluelessness), but this is something I sometimes hear in X-risk discussion and I find it a bit confusing. Depending on what animals are sentient, it's likely that every few weeks, the vast majority of the world's individuals die prematurely, often in painful ways (being eaten alive or starving). To my understanding, the case EA makes against X-risk is not about the badness of death for the individuals whose lives will be somewhat shortened, because it would not seem compelling in that case, especially when aiming to take into consideration the welfare/interests of most individuals on earth. I don't think this is a complex philosophical point or some extreme skepticism: I'm just superficially observing that the situation of "everyone dies prematurely"[1] seems to be very close to what we already have, so it doesn't seem that obvious that this is what makes X-risks intuitively bad.
(To be clear, I'm not saying "animals die so X-risk is good"; my point is simply that I don't agree that the fact that X-risks cause death is what makes them exceptionally bad, and (though I'm much less sure about that) to my understanding, that is not what initially motivated EAs to care about X-risks (as opposed to the possibility of creating a flourishing future, or other considerations I know less well)).
Note that I supposed that "prematurely" was implied when you said "good or bad for everyone to die". Of course, if we think that it's bad in general that individuals will die, no matter whether they die at a very young age or not, the case for X-risks being exceptionally bad seems weaker.
A very important consequence of everyone simultaneously dying would be that there would not be any future people. (I didn't mean to imply that what makes it bad is just the harm of death to the individuals directly affected. Just that it would be bad for everyone to die so.)
Yes, I agree with that! This is what I consider to be the core concern regarding X-risk. Therefore, instead of framing it as "whether it would be good or bad for everyone to die," the statement "whether it would be good or bad for no future people to come into existence" seems more accurate, as it addresses what is likely the crux of the issue. This latter framing makes it much more reasonable to hold some degree of agnosticism on the question. Moreover, I think everyone maintains some minor uncertainties about this; even those most convinced of the importance of reducing extinction risk often remind us of the possibility of "futures worse than extinction." This clarification isn't intended to draw any definitive conclusion, just to highlight that being agnostic on this specific question isn't as counter-intuitive as the initial statement in your top comment might have suggested (though, as Jim noted, the post wasn't specifically arguing that we should be agnostic on that point either).
I hope I didn't come across as excessively nitpicky. I was motivated to write by the impression that in X-risk discourse, there is sometimes (accidental) equivocation between the badness of our deaths and the badness of the non-existence of future beings. I sympathize with this: given the short timelines, I think many of us are concerned about X-risks for both reasons, and so it's understandable that both get discussed (and this isn't unique to X-risks, of course). I hope you have a nice day of existence, Richard Y. Chappell, I really appreciate your blog!
No worries at all (and best wishes to you too!).
One last clarification I'd want to add is just the distinction between uncertainty and cluelessness. There's immense uncertainty about the future: many different possibilities, varying in valence from very good to very bad. But appreciating that uncertainty is compatible with having (very) confident views about whether the continuation of humanity is good or bad in expectation, and thus not being utterly "clueless" about how the various prospects balance out.
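A toy numerical rendering of that distinction (all the numbers here are mine, purely for illustration): a distribution over future outcomes can span an enormous range of valences while still having a confidently positive expectation.

```python
# Toy model (illustrative numbers): wide uncertainty over outcome values,
# yet a determinate view about how the prospects balance out in expectation.

outcomes = {         # value of the long-run future -> probability
    -1000.0: 0.10,   # very bad futures
      -10.0: 0.15,
        0.0: 0.10,   # extinction-like (empty) futures
       10.0: 0.25,
     1000.0: 0.40,   # very good futures
}

assert abs(sum(outcomes.values()) - 1.0) < 1e-9  # probabilities sum to one

expected_value = sum(v * p for v, p in outcomes.items())
print(expected_value)                 # 301.0: confidently positive in expectation
print(max(outcomes) - min(outcomes))  # 2000.0: despite a huge spread of outcomes
```

The spread of possible outcomes (uncertainty) and the sign of the expectation (the thing a non-clueless agent has a view about) come apart, which is the point of the clarification above.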