Those guiding principles are good. However, I wish you had included one against doing massive harm to the world. CEA endorses the “Foundational Research Institute,” a pseudo-think tank that promotes dangerous ideas of mass-termination of human and non-human life, not excluding extinction. By promoting this organization, CEA is promoting human, animal, and environmental terrorism on the grandest scale. Self-styled “effective altruists” try to pass themselves off as benevolent, but the reality is that they themselves are one of the biggest threats to the world by promoting terrorism and anti-spirituality under the cloak of altruism.
Fair point about not doing harm, but I feel like you’re giving the Foundational Research Institute a treatment which is both unfair and unnecessary to get your point across.
If it was the case that FRI was accurately characterized here, then do we know of other EA orgs that would promote mass termination of life? If not, then it is a necessary example, plain and simple.
If it was the case that FRI was accurately characterized here, then do we know of other EA orgs that would promote mass termination of life?
Sure. MFA, ACE and other animal charities plan to drastically reduce or even eliminate entirely the population of farm animals. And poverty reduction charities drastically reduce the number of wild animals.
If not, then it is a necessary example, plain and simple.
But it is not necessary—as you can see elsewhere in this thread, I raised an issue without providing an example at all.
The problem is that some EAs would have the amount of life in the universe reduced to zero permanently. (And don’t downvote this unless you personally know this to be false—it is unfortunately true)
If not, then it is a necessary example, plain and simple.
But it is not necessary—as you can see elsewhere in this thread, I raised an issue without providing an example at all.
“An issue”? Austen was referring to problems where an organization affiliates with particular organizations that cause terror risk, which you don’t seem to have discussed anywhere. For this particular issue, FRI is an illustrative and irreplaceable example, although perhaps you could suggest an alternative way of raising this concern?
The problem is that some EAs would have the amount of life in the universe reduced to zero permanently. (And don’t downvote this unless you personally know this to be false—it is unfortunately true)
It’s a spurious standard. You seem to be drawing a line between mass termination of life and permanent mass termination of life just to make sure that FRI falls on the wrong side of it. It seems like either could support ‘terrorism’. Animal liberationists actually do have a track record of engaging in various acts of violence and disruption in the past. The fact that their interests aren’t as comprehensive as some NUs’ are doesn’t change this.
“An issue”? Austen was referring to problems where an organization affiliates with particular organizations that cause terror risk, which you don’t seem to have discussed anywhere.
I’m not sure why the fact that my comment didn’t discuss terrorism implies that it fails to be a good example of raising a point without an example.
For this particular issue, FRI is an illustrative and irreplaceable example, although perhaps you could suggest an alternative way of raising this concern?
“‘Not causing harm’ should be one of the EA values”? Though it probably falls perfectly well under commitment to others anyway.
It’s the only negative-utilitarianism-promoting group I know of. Does anyone know of others (affiliated with EA or not)?
Many antinatalists who are unaffiliated with EA have similar beliefs (e.g., David Benatar, although I’m not sure whether he’s even a consequentialist at all).
Benatar is a nonconsequentialist. At least, the antinatalist argument he gives is nonconsequentialist—grounded in rules of consent.
Not sure why that matters though. It just underscores a long tradition of nonconsequentialists who have ideas which are similar to negative utilitarianism. Austen’s restriction of the question to NU just excludes obviously relevant examples such as VHEMT.
Exactly, despite the upvotes, Soeren’s argument is ill-founded. It seems really important in situations like this that people vote on what they believe to be true based on reason and evidence, not based on uninformed guesses and motivated reasoning.
Soeren didn’t give an argument. He wrote a single sentence pointing out that the parent comment was giving FRI an unfair and unnecessary treatment. I don’t see what’s “ill-founded” about that.
It seems really important in situations like this that people vote on what they believe to be true based on reason and evidence, not based on uninformed guesses and motivated reasoning.
Why is it more important now than in normal discourse? If someone decides to be deliberately obtuse and disrespectful, isn’t that the best time to revert to tribalism and ignore what they have to say?
He wrote a single sentence pointing out that the parent comment was giving FRI an unfair and unnecessary treatment. I don’t see what’s “ill-founded” about that.
What’s ill-founded is that if you want to point out a problem where people affiliate with NU orgs that promote values which increase risk of terror, then it’s obviously necessary to name the orgs. Calling it “unnecessary” to treat that org is then a blatant non-sequitur, whether you call it an argument or an assertion is up to you.
Why is it more important now than in normal discourse? If someone decides to be deliberately obtuse and disrespectful, isn’t that the best time to revert to tribalism and ignore what they have to say?
Our ability to discern good arguments even when we don’t like them is what sets us apart from the post-fact age we’re increasingly surrounded by. It’s important to focus on these things when people are being tribal, because that’s when it’s hard. If you only engage with facts when it’s easy, then you’re going to end up mistaken about many of the most important issues.
What’s ill-founded is that if you want to point out a problem where people affiliate with NU orgs that promote values which increase risk of terror,
But they do not increase the risk of terror. Have you studied terrorism? Do you know about where it comes from and how to combat it? As someone who actually has (US military, international relations), I can tell you that this whole thing is beyond silly. Radicalization is a process, not a mere matter of reading philosophical papers, and it involves structural factors among disenfranchised people and communities as well as the use of explicitly radicalizing media. And it is used primarily as a tool for a broad variety of political ends, which could easily include the ends which all kinds of EAs espouse. Very rarely is destruction itself the objective of terrorism. Also, terrorism generally happens as a result of actors feeling that they have a lack of access to legitimate channels of influencing policy. The way that people have leapt to discussing this topic without considering these basic facts shows that they don’t have the relevant expertise to draw conclusions on this topic.
Calling it “unnecessary” to treat that org is then a blatant non-sequitur, whether you call it an argument or an assertion is up to you.
But Austen did not say “Not supporting terrorism should be an EA value.” He said that not causing harm should be an EA value.
Our ability to discern good arguments even when we don’t like them is what sets us apart from the post-fact age we’re increasingly surrounded by.
There are many distinctions between EA and whatever you mean by the (new?) “post-fact age”, but responding seriously to what essentially amounts to trolling doesn’t seem like a necessary one.
It’s important to focus on these things when people are being tribal, because that’s when it’s hard.
That doesn’t make any sense. Why should we focus more on things just because they’re hard? Doesn’t it make more sense to put effort somewhere where things are easier, so that we get more return on our efforts?
If you only engage with facts when it’s easy, then you’re going to end up mistaken about many of the most important issues.
But that seems wrong: one person’s complaints about NU, for instance, aren’t among the most important issues. At the same time, we have perfectly good discussions of very important facts about cause prioritization in this forum where people are much more mature and reasonable than, say, Austen here is. So it seems like there isn’t a general relationship between how important a fact is and how disruptive commentators are when discussing it. At the very minimum, one might start from a faux clean slate where a new discussion is started separate from the original instigator—something which takes no time at all and enables a bit of a psychological restart. That seems strictly better than encouraging trolling.
Those radicalization factors you mentioned increase the likelihood for terrorism but are not necessary. Saying that people don’t commit terror from reading philosophical papers and thus those papers are innocent and shouldn’t be criticized is a pretty weak argument. Of course, such papers can influence people. The radicalization process starts with philosophy, so to say that first step doesn’t matter because the subsequent steps aren’t yet publicly apparent shows that you are knowingly trying to allow this form of radicalization to flourish. NUEs do, in fact, meet the other criteria you mentioned. For instance, I doubt that they have confidence in legitimately influencing policy (i.e., convincing the government to burn down all the forests).
FRI and its parent EA Foundation state that they are not philosophy organizations and exist solely to incite action. I agree that terrorism has not in the past been motivated purely by destruction. That is something that atheist extremists who call themselves effective altruists are pioneering.
I am not a troll. I am concerned about public safety. My city almost burned to ashes last year due to a forest fire, and I don’t want others to have to go through that. Anybody read about all the people in Portugal dying from a forest fire recently? That’s the kind of thing that NUEs are promoting and I’m trying to prevent. If you’re wondering why I don’t elaborate my position on “EAs” promoting terrorism/genocide, it is for two reasons. One, it is self-evident if you read Tomasik and FRI materials (not all of it, but some articles). And two, I can easily cause a negative effect by connecting the dots for those susceptible to the message or giving them destructive ideas they may not have thought of.
Those radicalization factors you mentioned increase the likelihood for terrorism but are not necessary
Yeah, and you probably think that being a negative utilitarian increases the likelihood for terrorism, but it’s not necessary either. In the real world we deal with probabilities and expectations, not speculations and fantasies.
Saying that people don’t commit terror from reading philosophical papers and thus those papers are innocent and shouldn’t be criticized is a pretty weak argument. Of course, such papers can influence people. The radicalization process starts with philosophy
This is silly handwaving. The radicalization process starts with being born. It doesn’t matter where things ‘start’ in the abstract sense, what matters is what causes the actual phenomenon of terrorism to occur.
to say that first step doesn’t matter because the subsequent steps aren’t yet publicly apparent shows that you are knowingly trying to allow this form of radicalization to flourish
So your head is too far up your own ass to even accept the possibility that someone who has actually studied international relations and counterinsurgency strategy knows that you are full of shit. Cool.
I am not a troll. I am concerned about public safety.
You are a textbook concern troll.
My city almost burned to ashes last year due to a forest fire, and I don’t want others to have to go through that
Welcome to EA, honey. Everyone here is altruistic, you can’t get special treatment.
That’s the kind of thing that NUEs are promoting
But they’re not. You think they’re promoting it, or at least you want people to think they’re promoting it. But that’s your own opinion, so presenting it like this constitutes defamation.
If you’re wondering why I don’t elaborate my position on “EAs” promoting terrorism/genocide, it is for two reasons. One, it is self-evident if you read Tomasik and FRI materials (not all of it, but some articles).
But I have read those materials. And it’s not self-evident. And other people have read those articles and they don’t find them self-evident either. Actually, it’s self-evident that they don’t promote it, if you read some of their materials.
And two, I can easily cause a negative effect by connecting the dots for those susceptible to the message or giving them destructive ideas they may not have thought of.
What bullshit. If you were actually worried about this then you wouldn’t be saying that it’s a direct, self-evident conclusion of their beliefs. So either you don’t know what you’re doing, or you’re arguing in bad faith. Probably both.
Their entire website boils down to one big effort at brainwashing people into believing that terrorism is altruistic.
I mostly agree with you. It honestly does worry me that the mainstream EA movement has no qualms about associating with FRI, whose values, I would wager, conflict with those of the majority of humankind. This is one of the reasons I have drifted away from identifying with EA lately.
Self-styled “effective altruists” try to pass themselves off as benevolent, but the reality is that they themselves are one of the biggest threats to the world by promoting terrorism and anti-spirituality under the cloak of altruism.
It’s a stretch to say FRI directly promotes terrorism; they make it clear on their website that they oppose violence and encourage cooperation with other (non-NU) value systems. The end result of their advocacy, however, may be less idealistic than they anticipate. (It’s not too hard to imagine a negative utilitarian Kaczynski, if their movement gains traction. I think there’s even a page on the FRI website where they mention that as a possible risk of advocating for suffering-focused ethics.)
I don’t know what you mean by “anti-spirituality”.
I know they don’t actually come out and recommend terrorism publicly… but they sure go as far as they can to entice terrorism without being prosecuted by the government as a terrorist organization. Of course, if they were explicit, they’d immediately be shut down and jailed by authorities.
I promise you this: all those who endorse this mass-termination-of-life ideology are going to pay a price. Whether by police action or public scrutiny, they will be forced to publicly abandon their position at some point. I implore them to do it now, of their own volition. No one will believe them if they conveniently change their minds about no-rules negative utilitarianism after facing public scrutiny or the law. Now is the time. I warned CEA about this years ago, yet they still promote FRI.
I actually respect austere population-control to protect quality of life, even through seemingly drastic means such as forced sterilization (in extreme scenarios only, of course). However, atheists don’t believe in any divine laws, such as the sin of killing, and are thus not bound by any rules. The type of negative utilitarianism popular in EA is definitely a brutal, no-rules, mass-killing-is-okay type. It is important to remember, also, that not everyone has good mental health. Some people have severe schizophrenia and could start a forest fire or kill many people to “prevent suffering” without thinking through all of the negative aspects of doing this. I think that the Future of Humanity Institute should add negative utilitarian atheism to their list of existential risks.
Anti-spirituality: Doesn’t have anything to do with NU or FRI, I probably should have left it out of my comment. It just means that many EAs use EA as a means to promote atheism/atheists. Considering that about 95% of the world’s population are believers, they may have an issue with this aspect of the movement.
Of course, if they were explicit, they’d immediately be shut down and jailed by authorities.
I really don’t like how you are accusing people without evidence of intentionally promoting violence. This is borderline libel. I agree that someone could take their ideology and use it to justify violence, but I see no reason to believe that they are intentionally trying to “entice” such actions.
Indeed, we must focus on the battles we can win. There are two traps. One is to make false accusations. Currently, few negative utilitarians are promoting terrorism, and we should not make accusations that would suggest otherwise. Two is to stir up controversy. Telling negative utilitarians that they are terrorists could inflame them into actually behaving in a more hostile manner. It is like when people say that naming “radical Islamic terrorism” is necessary to solve the problem. Perhaps, but it would be more useful to engage cooperatively with the religion of Islam to show that it is a religion of peace, and the same for utilitarianism.
The safe position, which we should expect EA leaders to vigilantly defend, is not to promote values whose adoption would lead to large-scale terrorism. This is the hill that we should choose to die on. Specifically, if negative utilitarians believe in cooperation, and they believe that value-spreading is important, then they should be cooperative in the values that they spread. And this does not allow for spreading values that would lead to actions on an astronomical scale that are overwhelmingly repulsive to the vast majority of ethicists and the general population. And “EA leaders” must include CEA.
However, atheists don’t believe in any divine laws, such as the sin of killing, and are thus not bound by any rules.
I think your gripe is with consequentialism, not atheism per se. And don’t forget that there are plenty of theists who do horrible things, often in the name of their religion.
I think that the Future of Humanity Institute should add negative utilitarian atheism to their list of existential risks.
The X-Risks Institute, which is run by /u/philosophytorres, specializes in agential risks, and mentions NU as one such risk. I don’t know whether FHI has ever worked on agential risks.
It just means that many EAs use EA as a means to promote atheism/atheists.
It is evident that the majority of EAs are atheist/irreligious, but I am not aware of any EA organizations actively promoting atheism or opposing theism. Who uses EA as a “means to promote atheism”?
Coincidentally, the closest example I can recall is Phil Torres’s work on religious eschatological fanaticism as a possible agential x-risk.
Roman Yampolskiy’s shortlist of potential agents who could bring about an end to the world (https://arxiv.org/ftp/arxiv/papers/1605/1605.02817.pdf) also includes Military, Government, Corporations, Villains, Black Hats, Doomsday Cults, Depressed, Psychopaths, Criminals, AI Risk Deniers, and AI Safety Researchers.
They encourage cooperation with other value systems to further their apocalyptic goals, but mostly to prevent others from opposing them. That is different from tempering “strong NU” with other value systems to arrive at more moderate conclusions.
LOOOOL at your optimism that people won’t follow FRI’s advocacy as purely as they want! Let’s hope so, eh?
Also, I am somewhat concerned that this comment has been downvoted so much. It’s the only really substantive criticism of the article (admittedly it isn’t great), and it is at −3, right at the bottom.
Near the top are several comments at +5 or something that are effectively just applause.
LOL. Typical of my comments. They get almost no upvotes, but I never receive any sensible counterarguments! People use the forum vote system to persuade (by social proof) without having a valid argument. I have yet to vote on a comment (up or down) because I think people should think for themselves.
You can understand some of what people are downvoting you for by looking at which of your comments are most downvoted—ones where you’re very critical without much explanation and where you suggest that people in the community have bad motives: http://effective-altruism.com/ea/181/introducing_ceas_guiding_principles/ah7 http://effective-altruism.com/ea/181/introducing_ceas_guiding_principles/ah6 http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8p9
Well-explained criticisms won’t get downvoted this much.
Actually you got 7 upvotes and 6 downvotes, I can tell from hovering over the ‘1 point’.
Specifically?