This is a nice article. Thanks for writing it.
Regarding: “Consideration of the far future is the strongest factor in favor of prioritizing animal advocacy for many long-time EAs, including myself.”
How do you see animal advocacy as a cause area stacking up against work on existential risks?
It is a bit surprising that such a small part of the article is explicitly concerned with the far future, if consideration of the far future is the strongest factor in favour of prioritising animal advocacy. In general, the amount of space one spends on a consideration should probably be at least roughly proportionate to its significance.
If it’s supposed to be introductory, then it should probably focus on the simpler reasons why animal advocacy looks important. Plus, my understanding is that most animal advocates don’t focus as much on the far future, so even if the author does, it probably makes sense for the article to use the arguments that convinced most people rather than the arguments that convinced him personally.
OK. Personally I would prefer the convention that everybody at the EA forum gives the reasons they actually believe in themselves. I think that is more in line with the EA credo of evidence and reason, and with intellectual honesty.
There is the elephant in the room that the real, big intersection of animal ethics and far-future concerns is very controversial. This includes reducing wild-animal suffering, the issue of digital sentience, and astronomical suffering. The EA Forum is sufficiently public that its critics and deriders will cherry-pick contentious issues which the EA community itself is divided on, and which the majority reject as central to EA ideas, and use them as symbols of failure to strawman everything about effective altruism. See, for example, this article. Not making ideas this controversial outside EA, let alone within it, so easily quotable by those who just want to take potshots at EA seems to me a fair and pragmatic approach. If push really came to shove and you still wanted to see everyone give the actual reasons they believe one approach is best, especially on the topic of animals and other non-humans over the long-term future, I propose a compromise.
A line could be added to this or future posts saying something like, “There are those concerned about how human activity and other events will impact the well-being of animals over the far future, such as on the scale of centuries or millennia. This is an issue of great consequence, but of course there is much uncertainty about it. So, in the meantime, these far-future concerns still play only a small practical part in animal advocacy as a whole, and are relegated to research.”
The above is all true. Alternatively, the statement could include embedded links to the Foundational Research Institute, where anyone who wishes to learn more can find a whole website dedicated to the topic of animal well-being in the far future. I think being maximally transparent at all times for its own sake may have unintended and costly consequences. For due diligence, I’ll ask Brian Tomasik what he thinks. I expect he’ll agree with me.
Hmm, I’m somewhat atypical in usually maintaining the heuristic that one should generally be transparent even in cases where doing so seems like it might hurt one’s popularity or receptiveness to one’s ideas. So I’m not that worried about journalists quoting out of context, and I think it’d be a big cost to hamper honest discussion just because of that concern. (The cost in terms of worse discussion probably exceeds any benefits in terms of not turning people off?)
Many other EAs disagree with me. They make an interesting point that even if one’s ultimate goal is to help people discover their own idealized preferences, bridging inferential gaps more slowly can make it more likely that people will eventually update to “weird” positions that they will then endorse on reflection.
Yeah, it is a real toss-up here. I perceive a lot of merit in your position above, but I also see merit in the arguments of the EAs who disagree with you.
I used the phrase “maximally transparent” above, as distinct from “transparent”. I’ll unpack that so it’s clearer what I mean and why. Transparency in expressing ideas in writing, at least, can be thought of as following the spirit and/or letter of Grice’s maxims.
Being “maximally transparent” would be following all of Grice’s maxims as much as possible. The ‘compromise’ statement I gave above, describing the role of far-future animal altruism, follows the maxims of quality and relation, but doesn’t fully satisfy the maxims of quantity and clarity (called “manner” at the link; I prefer “clarity”, which is also how I was taught it). We’re volunteering as much info as necessary for an intro to the topic, but not all the info which would be appropriate for one.
So, we’re satisficing instead of optimizing for transparent communication, on the prediction that this will prevent the costs of journalists or pundits using our own words against us at a later date.
Is this transparent “enough”? Is it better to optimize for transparent communication, because we’re overrating the real costs of blowback? Ought we optimize communication for maximum transparency at all times because journalists and pundits who want to hurt EA will find a way to do so regardless? How would we make the implied prediction both explicit and testable? I don’t know. Those are questions I’d love to see answers to. The above ‘compromise’ position is as far as I’ve gotten.
Just to be clear, I do “believe in” the near-term reasons outlined in this article, even though far future arguments also matter a lot to me. I also think there’s a lot of overlap, e.g. if something is neglected now, that can be good reason to think it will continue being neglected. I don’t think this post deviated from evidence-based thinking, the use of reason, or intellectual honesty.
I think personal posts are important, but introductory content and topic summaries are also useful. Several people have asked for a post on “why animals matter” like this one, and I don’t think they’d have been nearly as interested in a post where >75% of the content was about the far future considerations.
Also, in case anyone missed it, I did mention this in the post: “Consideration of the far future is the strongest factor in favor of prioritizing animal advocacy for many long-time EAs, including myself.”
Great question! Yeah, I personally favor animal advocacy over reducing extinction risk. (I use existential risk to include both risks of extinction and risks of well-populated, but morally bad, e.g. dystopian, futures.) Here’s another blog post that talks about some things to consider when deciding which of these long-term risks to prioritize: http://effective-altruism.com/ea/t3/some_considerations_for_different_ways_to_reduce/
Also note that some work might both decrease extinction and quality risks, such as general EA movement-building and research. Also, “animal advocacy” is kind of a vague term, which could either refer to just “values spreading” (i.e. trying to inspire people, now and/or in the future, to have better values and/or act more morally), or just generally refer to “helping animals.” If it’s used as the latter, then it could include extinction risk if you think that will help future animals or animal-like beings (e.g. sentient machines).
Am I right in thinking you are the author of the linked post?
Yep. I’ve used the “Tyrael” username on here for posts that I might have wanted to keep anonymous (largely due to the downvoting brigades), but ended up being okay with it being nonanonymous after the fact.
Have you experienced downvoting brigades? How do you distinguish them from sincere negative feedback?
Evidence is (i) downvoting is on certain users/topics, rather than certain arguments/rhetoric, (ii) lots of downvotes relative to a small amount of negative comments, (iii) strange timing, e.g. I quickly got two downvotes on the OP before anyone had time to read it (<2 minutes).
I think it happens to me some, but I think it happens a lot to animal-focused content generally.
Edit: Just to be clear, I mean “systematically downvoting content that contributes to the discussion because you disagree with it, you don’t like the author, or other ‘improper’ reasons.” Maybe “brigades” was the wrong word if it suggests coordination, a view I’m updating towards after searching online for more uses of the term. There might still be coordination; I’m not really sure.
So to clarify, the accounts:
thebestwecan
Tyrael
Jacy_Anthis2
are all yours?
The last account is presumably a dummy one created by mapping comments from other sites to the EA Forum, but yeah, the first two are mine.
What downvoting brigades are there?