Thanks for writing this!

A few general points:

I think some of these factors relate to effective altruism as an idea (“EA-I”), while others relate to effective altruism as a particular community (“EA-C”) that practices a form of EA-I.
I would place somewhat more emphasis on members of different Christian groups being more or less comfortable with the particular cultural practices of EA-C. For example, those from evangelical backgrounds are probably less likely to feel comfortable in a subculture that is often enthusiastic about recreational use of controlled psychoactive drugs.
Of course, neither EA-I nor EA-C can make everyone happy. For EA-I, this is more of an epistemic issue; we don’t want to water down what EA-I is. For EA-C, this is more of an unfortunate, practical issue (even if it is unavoidable). Aspects of EA-C may be historical accidents, or may be calculated to maximize the amount of aggregate good that the community can do (subject to the constraint that it is a single community). But there is no possible construction of EA-C that will maximize the good that each and every person who is open to EA-I will accomplish. Ideally, there would be multiple full communities[1] practicing EA-I, and each person open to EA-I could pick the full community that would be most conducive to them doing the most good.
Different Christian communities place different emphases on being (shall we say) publicly Christian. For some, it’s OK for faith to be more of a private thing. Others feel an obligation to be vocal about their faith. And there are of course many gradations and variations in between. Those in the more-vocal camp may be more concerned about not being accepted, or about being discriminated against.
On cause areas:
One such point of doctrine is eschatology. Those who think the Second Coming is certain or very likely to happen within decades would reject the concept of a prolonged future for humanity, and hence longtermism. This kind of eschatological expectation is common among the more conservative Protestants.
In the current meta, where longtermism is practically close enough to synonymous with x-risk reduction, any confident belief in the Second Coming may be sufficient to foreclose significant engagement with longtermism for many Christians. The Second Coming doesn’t really work if there are no people left because the AI killed them all! I suspect similar rationales would be present in many other religions, either because they have their own eschatologies or because human extinction would seem in tension with a foundational belief in a deity who is at least mostly benevolent, at least nearly omnipotent, and interested in human welfare.
Even beyond that, other subfields of longtermism don’t mesh as well with common Christian theological concepts. Transhumanism, digital minds, and similar concepts are likely to be non-starters for many Christians. In most Christian theologies, human beings are uniquely made[2] in the image of God, and their creations would not share in that nature at all. Furthermore, EA thinking about the future may be seen as techno-utopian, which is in tension with Christian theologies that identify sin (roughly, a religious version of evil or wrongdoing) as the fundamental cause of problems in the world. So EA thinking can come off as seeking mostly technological solutions to a spiritual problem.
Depending on their beliefs about soteriology, a Christian with longtermist tendencies might also focus on evangelism, theorizing that eternity is forever and that what happens in the life to come is far more important than what happens on earth.
Some Christians might perceive working on animal welfare as misdirected, and reject EA because they see animal welfare as a prominent cause area in the movement.
My guess is that EA reasoning about cause prio, rather than beliefs about the need to reduce animal suffering per se, would be the major stumbling block here. After all, companion-animal charities have long been popular in the US, and I don’t have any reason to think that US Christians were shunning them. But (e.g.) trying to quantify the moral weight of a chicken’s welfare in comparison to that of a human is probably more likely to upset someone coming from a distinctly Christian point of view than (say) the median adult in a developed country. Suggesting that the resulting number is in the single digits, or that the meat-eater problem is relevant to deciding whether to donate to global health charities, is even more likely to be perceived as off-putting.[3] Cf. the discussion of humans as being made in the image of God above.
Characteristic of both of these stances is that they lead to a rejection of only a particular cause area within EA. This would leave room to engage with the other parts.
Yes, although we don’t know what EA content the hypothetical person would find first (or early). If the first content they happen to see is about (e.g.) the meat-eater problem, they may not come back for a second helping, even though they would have resonated with global health and development (GH&D) work. With GH&D declining in the meta, this may be a bigger issue than it would have been years ago.
Also, I think many people—Christian or not—would be less likely to engage with a community if a significant portion of community effort and energy was devoted to something they found silly, inappropriate, or inconsistent with their deeply-held values.[4]
[1] “Full community” is not the greatest term. I mean something significantly more than an affinity group, but not necessarily something insular from other groups practicing EA-I. A full community can stand on its own two feet, as it were. To use a Christian metaphor, a church would ordinarily be a full community. One can receive the sacraments/ordinances, learn and study, obtain spiritual support and guidance, serve those who are less privileged, and get what we might consider the other key functions of a communal Christian life through a church. I’m less clear in my own mind on the key functions of a community practicing EA-I.

[2] There are, of course, many different views about what “made” means here!

[3] I do not mean to express an opinion on the merits of these topics, or suggest that discussion of them should be avoided.

[4] Again, I am not expressing endorsement of a norm that we shouldn’t talk about or do certain things because some group of people would object to that.
Thanks for your comment, lots of great insights.

I would place somewhat more emphasis on members of different Christian groups being more or less comfortable with the particular cultural practices of EA-C. For example, those from evangelical backgrounds are probably less likely to feel comfortable in a subculture that is often enthusiastic about recreational use of controlled psychoactive drugs.
Yeah, I would also imagine that some Christians, especially more conservative ones, are turned off by some cultural practices in (some) EA groups. And those who usually move in explicitly Christian social circles might find interacting with a secular community difficult regardless.
In the current meta, where longtermism is practically close enough to synonymous with x-risk reduction, any confident belief in the Second Coming may be sufficient to foreclose significant engagement with longtermism for many Christians. The Second Coming doesn’t really work if there are no people left because the AI killed them all!
The eschatology question is interesting. I think it can still make sense to work on what amounts practically to x-risk prevention even when expecting humans to be around at the Second Coming of Christ (or some eschatological event in other religions). If God doesn’t want humans to go extinct, he could achieve this through human efforts to mitigate potential x-risks – the idea of God implementing his plans through the actions of humans is a familiar theme from the Bible. Also, setting humanity on a path to self-destruction that could only be halted by the return of Christ definitely doesn’t sound like the kind of thing God wants humans to do, so working against it would seem like a good thing.
But a belief in the Second Coming does reduce the size of the future (unless one expects it to occur very far in the future), so it undermines the astronomical value of far-future-oriented interventions.
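As a rough back-of-the-envelope illustration (the horizons here are assumed purely for illustration, not anyone’s actual estimates): if the expected value of an x-risk intervention scales roughly linearly with the length of the future it protects, then truncating a 10^9-year horizon to a 10^2-year one shrinks the expected value by about seven orders of magnitude:

$$\frac{V_{\text{truncated}}}{V_{\text{astronomical}}} \approx \frac{10^{2}\ \text{years}}{10^{9}\ \text{years}} = 10^{-7}$$

The intervention can still be worthwhile on the shorter horizon; it just no longer dominates the calculation on astronomical-stakes grounds.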
My guess is that EA reasoning about cause prio, rather than beliefs about the need to reduce animal suffering per se, would be the major stumbling block here.
I think you’re right that it’s the prioritisation more than the cause itself; I should have been clearer about that.
The eschatology question is interesting. I think it can still make sense to work on what amounts practically to x-risk prevention even when expecting humans to be around at the Second Coming of Christ (or some eschatological event in other religions).
Also, one can think that x-risk work is generally effective in mitigating near-x-risks (e.g., a pandemic that “only” kills 99% of us). Particularly given the existence of the Genesis flood narrative, I expect most Christians would accept the possibility of a mass catastrophe that killed billions, but less than everyone.