“I’m not sure why you think non-longtermist beliefs are irrelevant.”
Nice. That’s what makes us misunderstand each other, I think. (This is crucial to my point.)
Many people have no beliefs about what actions are good or bad for the long-term future (they are clueless or just don’t care). But some people do have beliefs about this, and most of them believe X-risk reduction is good in the very long run. The most fundamental question I raise is: where do the beliefs of the latter type of people come from? Why do they hold them, instead of holding that X-risk reduction is bad in the very long run or being agnostic on this particular question?[1] Is it because X-risk reduction is in fact good in the long term (i.e., these people have the capacity to make judgment calls that track the truth on this question), or because of something else?
And then my post considers the potential evolutionary pressure towards optimism vis-a-vis the long-term future of humanity as a candidate for “something else”.
So I’m not saying optimistic longtermism is more evolutionarily debunkable than, e.g., partial altruism towards your loved ones. I’m saying it is more evolutionarily debunkable than not optimistic longtermism (i.e., pessimistic longtermism OR agnosticism on how to feel about the long-term future of humanity). Actually, I’m not even really arguing for that here, but I do believe it, and this is why I chose to discuss an EDA against optimistic longtermism specifically.
So if you want to disagree with me, you have to argue that:
A) Not optimistic longtermism is at least just as evolutionarily debunkable as optimistic longtermism, and/or
B) Optimistic longtermism is better explained by the possibility that our judgment calls vis-a-vis the long-term value of X-risk reduction track the truth than by something else.
Does that make sense?
So I’m interested in optimistic longtermism vs. not optimistic longtermism (i.e., pessimistic longtermism OR agnosticism on the long-term value of X-risk reduction). Beliefs that the long-term future doesn’t matter, or the like, are irrelevant here.
Yes, I do think this: “Not optimistic longtermism is at least just as evolutionarily debunkable as optimistic longtermism.”
That’s what I think our prior should be, and generally we shouldn’t accept evolutionary debunking arguments for moral beliefs unless there are actual findings in evolutionary psychology suggesting that evolutionary pressure is the best explanation for them. It’s indeed trivially easy to come up with some story for why any given belief is subject to evolutionary debunking, but precisely because these stories are so easy to come up with, they provide essentially no meaningful evidence that the debunking is warranted unless further substantiated.
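To make this concrete (a minimal Bayesian sketch of my point, with symbols introduced purely for illustration): let $D$ be “the belief really is evolutionarily debunked” and $S$ be “we can tell a plausible-sounding selection story for it”. Then

$$\frac{P(D \mid S)}{P(\neg D \mid S)} = \frac{P(S \mid D)}{P(S \mid \neg D)} \cdot \frac{P(D)}{P(\neg D)}$$

and if such stories are roughly as easy to generate whether or not the belief is actually debunked, then $P(S \mid D) \approx P(S \mid \neg D)$, the likelihood ratio is close to 1, and telling the story barely moves us off the prior.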
E.g., I think the claim that pessimistic longtermism is evolutionarily selected for, because it would cause people to care more about their own families and kin than about far-off generations, is at least as plausible as your claim about optimistic longtermism. Or we might think agnostic longtermism is selected for, because we’re cognitive misers, and thinking about the long-term future is too costly and not decision-relevant enough to be selected for. In fact, I think none of these claims is very plausible at all, because I don’t think evolution is likely to be selecting for beliefs at this level of detail.
My argument about neutrality toward creating lives also counts against your claim: if there really were evolutionary pressure toward pro-natalist, optimistic longtermism, I would predict that intuitions of neutrality about creating future lives would not be so prevalent. But they are prevalent, so this is another reason I don’t think your claim is plausible.
“I think the claim that pessimistic longtermism is evolutionarily selected for, because it would cause people to care more about their own families and kin than about far-off generations”
Wait, sorry, what? No, it would cause people to work on making the future smaller, or on reducing s-risks, or something like that. Pessimistic longtermists are still longtermists. They do care about far-off generations. They just think it’s ideally better if those generations don’t exist.[1]
Having clarified that, do you really not find optimistic longtermism more evolutionarily adaptive than pessimistic longtermism? (Let’s set agnosticism aside here, for simplicity.) I mean, the former says “save humanity and increase population size” and the latter says the exact opposite. I find it hard not to think the former favors survival and reproduction more than the latter, all else equal, such that it is more likely to be selected for.
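To illustrate what I mean by “more likely to be selected for” (a toy two-type selection sketch, which of course assumes, contestably, that such beliefs are heritable and fitness-relevant at all): if carriers of the optimistic belief have average fitness $w_O$ and carriers of the pessimistic belief have average fitness $w_P$, with $w_O > w_P$, then the optimists’ population share $p_t$ evolves as

$$p_{t+1} = \frac{p_t\, w_O}{p_t\, w_O + (1 - p_t)\, w_P},$$

which rises toward 1 over generations. All the substantive work is done by the premise $w_O > w_P$, i.e., that “save humanity and increase population size” favors survival and reproduction more than its opposite.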
Is it just that we had different definitions of pessimistic longtermism in mind? (I should have been clearer, sorry.)
And btw, this is not necessarily due to them making different moral assumptions than optimistic longtermists. The disagreement might be purely empirical.
What a belief implies about what someone does depends on many other things, like their other beliefs and their options in the world. If, e.g., there are more opportunities to work on X-risk reduction than on s-risk reduction, then optimistic longtermists might actually be less likely than pessimistic longtermists to form families (because they’re more focused on work).
“Having clarified that, do you really not find optimistic longtermism more evolutionarily adaptive than pessimistic longtermism?”
As my answer made clear, the point I really want to emphasise is that this feels like an absurd exercise — there’s no reason to believe that longtermist beliefs are heritable or selected for in our ancestral environment.
Oh ok, so our disagreement is about whether concern for the long-term future needs to be selected for in order for evolution to “directly” (in the same sense you used it earlier) influence longtermists’ beliefs on the value of X-risk reduction and making the future bigger, right?