I mean… it’s quite easy. There were people who, for some reason, were optimistic regarding the long-term future of humanity and they had more children than others (and maybe a stronger survival drive), all else equal. The claim that there exists such a selection effect seems trivially true.
I agree that you can construct hypothetical scenarios in which a given trait is selected for (though even then you have to postulate that it’s heritable, which you didn’t specify here). But your claim is not trivially true, and it does not establish that optimism regarding the long-term future of humanity has in fact been selected for in human evolutionary history. Other beliefs that are more plausibly susceptible to evolutionary debunking include the idea that we have special obligations to our family members, since these are likely connected to kinship ties that have been widely studied across many species.
So I think a key crux between us is on the question: what does it take for a belief to be vulnerable to evolutionary debunking? My view is that it should actually be established in the field of evolutionary psychology that the belief is best explained as the direct[1] product of our evolutionary history. (Even then, as I think you agree, that doesn’t falsify the belief, but it gives us reason to be suspicious of it.)
I asked ChatGPT how evolutionary psychologists typically try to show that a psychological trait was selected for. Here was its answer:
Evolutionary psychologists aim to show that a psychological trait is a product of selection by demonstrating that it likely solved adaptive problems in our ancestral environment. They look for traits that are universal across cultures, appear reliably during development, and show efficiency and specificity in addressing evolutionary challenges. Evidence from comparative studies with other species, heritability data, and cost-benefit analyses related to reproductive success also support such claims. Altogether, these approaches help build a case that the trait was shaped by natural or sexual selection rather than by learning or cultural influence alone.
I think you might say that you don’t have to show that a belief is best explained by evolutionary pressure, just that there’s some selection for it. In fact, I don’t think you’ve even done that (because, e.g., you’d have to show that it’s heritable). But I think that’s not nearly enough, because “some evolutionary pressure toward belief X” is a claim we can likely make about any belief at all. (E.g., pessimism about the future can be very valuable, because it can make you aware of potential dangers that optimists would miss.)
Also, in response to this:
On person-affecting beliefs: The vast majority of people holding these are not longtermists to begin with. What we should be wondering is “to the extent that we have intuitions about what is best for the long-term (and care about this), where do these intuitions come from?”. Non-longtermist beliefs are irrelevant, here. Hopefully, this also addresses your last bullet point.
I’m not sure why you think non-longtermist beliefs are irrelevant. Your claim is that optimistic longtermist beliefs are vulnerable to evolutionary debunking. But that would only be true if those beliefs were plausibly a product of evolutionary pressures, and any such pressures would have acted on human populations generally, since those are the populations that have been subject to selection. So evidence about what humans generally are prone to believe seems highly relevant. The fact that many people, perhaps most, are pre-theoretically disposed toward views that push away from optimistic longtermism and pro-natalism casts further doubt on the claim that the intuitions pushing people toward optimistic longtermism and pro-natalism have been selected for.
[1] I used “direct” here because, in some sense, all of our beliefs are the product of our evolutionary history.
I’m not sure why you think non-longtermist beliefs are irrelevant.
Nice. That’s what makes us misunderstand each other, I think. (This is crucial to my point.)
Many people have no beliefs about what actions are good or bad for the long-term future (they are clueless or just don’t care anyway). But some people do have beliefs about this, and most of them believe x-risk reduction is good in the very long run. The most fundamental question I raise is: where do the beliefs of the latter type of people come from? Why do they hold them, instead of holding that x-risk reduction is bad in the very long run or being agnostic on this particular question?[1] Is it because x-risk reduction is in fact good in the long term (i.e., these people have the capacity to make judgment calls that track the truth on this question), or because of something else?
And then my post considers the potential evolutionary pressure towards optimism vis-a-vis the long-term future of humanity as a candidate for “something else”.
So I’m not saying optimistic longtermism is more evolutionarily debunkable than, e.g., partial altruism towards your loved ones. I’m saying it is more evolutionarily debunkable than not optimistic longtermism (i.e., pessimistic longtermism OR agnosticism on how to feel about the long-term future of humanity). Actually, I’m not even really saying that; but I do think it, and this is why I chose to discuss an EDA against optimistic longtermism, specifically.
So if you want to disagree with me, you have to argue that:
A) Not optimistic longtermism is at least just as evolutionarily debunkable as optimistic longtermism, and/or
B) Optimistic longtermism is better explained by the possibility that our judgment calls vis-a-vis the long-term value of x-risk reduction track the truth than by something else.
Does that make sense?
So I’m interested in optimistic longtermism vs. not optimistic longtermism (i.e., pessimistic longtermism OR agnosticism on the long-term value of x-risk reduction). Beliefs that the long-term future doesn’t matter, or the like, are irrelevant here.
Yes, I do think this: “Not optimistic longtermism is at least just as evolutionarily debunkable as optimistic longtermism.”
That’s what I think our prior should be, and generally we shouldn’t accept evolutionary debunking arguments for moral beliefs unless there are actual findings in evolutionary psychology suggesting that evolutionary pressure is the best explanation for them. It is indeed trivially easy to come up with some story for why any given belief is subject to evolutionary debunking, but precisely because such stories are so easy to come up with, they provide essentially no meaningful evidence that the debunking is warranted unless further substantiated.
E.g., I think the claim that pessimistic longtermism is evolutionarily selected for, because it would cause people to care more about their own families and kin than about far-off generations, is at least as plausible as your claim about optimistic longtermism. Or we might think agnostic longtermism is selected for, because we’re cognitive misers, and thinking about the long-term future is too cognitively costly and too decision-irrelevant for such beliefs to be selected for. In fact, I think none of these claims is very plausible at all, because I don’t think it’s likely that evolution is selecting for these kinds of beliefs at this level of detail.
My argument about neutrality toward creating lives also counts against your claim: if there really were evolutionary pressure toward pro-natalist, optimistic longtermism, I would predict that intuitions of neutrality about creating future lives would not be so prevalent. But they are prevalent, so this is another reason I don’t think your claim is plausible.
I think the claim that pessimistic longtermism is evolutionarily selected for, because it would cause people to care more about their own families and kin than about far-off generations
Wait, sorry, what? No, it would cause people to work on making the future smaller or reducing s-risks or something. Pessimistic longtermists are still longtermists. They do care about far-off generations. They just think it would ideally be better if those generations didn’t exist.[1]
Having clarified that, do you really not find optimistic longtermism more evolutionarily adaptive than pessimistic longtermism? (Let’s forget about agnosticism here, for simplicity.) I mean, the former says “save humanity and increase population size” and the latter says the exact opposite. I find it hard not to think that the former favors survival and reproduction more than the latter, all else equal, such that it is more likely to be selected for.
Is it just that we had different definitions of pessimistic longtermism in mind? (I should have been clearer, sorry.)
And btw, this is not necessarily due to them making different moral assumptions than optimistic longtermists. The disagreement might be purely empirical.
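For what it’s worth, here is a minimal toy sketch of the “all else equal” logic I have in mind, with entirely made-up numbers and the (so far unargued) assumptions that the disposition is perfectly heritable and confers a small reproductive edge. It only illustrates that such a disposition would tend to spread over generations; it doesn’t show that optimistic longtermism actually has these properties.

```python
import random

# Toy Wright-Fisher-style sketch (purely illustrative; all parameters are made up).
# Assumptions: a binary, perfectly heritable "optimist" disposition that gives a
# small reproductive edge, with everything else held equal.

POP_SIZE = 1_000
GENERATIONS = 50
FITNESS_ADVANTAGE = 1.05  # hypothetical 5% reproductive edge for "optimists"

def next_generation(population):
    # Each slot in the next generation is filled by a parent sampled with
    # probability proportional to its fitness; offspring inherit the trait.
    weights = [FITNESS_ADVANTAGE if optimist else 1.0 for optimist in population]
    return random.choices(population, weights=weights, k=POP_SIZE)

population = [random.random() < 0.5 for _ in range(POP_SIZE)]  # start at ~50% optimists
for _ in range(GENERATIONS):
    population = next_generation(population)

print(f"Optimist share after {GENERATIONS} generations: {sum(population) / POP_SIZE:.2f}")
```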
What a belief implies about what someone does depends on many other things, like their other beliefs and their options in the world. If, e.g., there are more opportunities to work on x-risk reduction than on s-risk reduction, then it might be that optimistic longtermists are less likely than pessimistic longtermists to form families (because they’re more focused on work).
Having clarified that, do you really not find optimistic longtermism more evolutionarily adaptive than pessimistic longtermism?
As my answer made clear, the point I really want to emphasise is that this feels like an absurd exercise — there’s no reason to believe that longtermist beliefs are heritable or selected for in our ancestral environment.
Oh, ok, so our disagreement is on whether concern for the long-term future needs to itself be selected for in order for evolution to “directly” (in the sense you used earlier) influence longtermists’ beliefs about the value of x-risk reduction and making the future bigger, right?