OK. Personally I would prefer the convention that everybody at the EA forum gives the reasons they actually believe in themselves. I think that is more in line with the EA credo of evidence and reason, and with intellectual honesty.
There is the elephant in the room that the real, big intersection of animal ethics and far-future concerns is very controversial. This includes reducing wild-animal suffering, the issue of digital sentience, and astronomical suffering. The EA Forum is sufficiently public that its critics and deriders will cherry-pick and mine contentious issues which the EA community itself is divided on, and which the majority reject as central to EA ideas, and use them as symbols of failure to strawman everything about effective altruism. See, for example, this article. Not making ideas so controversial outside EA, let alone within it, so easily quotable by those who just want to take potshots at EA seems to me a fair and pragmatic approach. If push really came to shove and you still wanted to see everyone give the actual reasons they believe one approach is best, especially on the topic of animals and other non-humans over the long future, I propose a compromise.
A line could be added to this or future posts saying something like, “there are those concerned about how human activity and other events will impact the well-being of animals over the far future, such as on the scale of centuries or millennia. This is an issue of great consequence, but of course there is much uncertainty on the issue. So, in the meantime, these far-future concerns still practically play a small part in animal advocacy as a whole, and are relegated to research.”
The above is all true. Alternatively, the above could include embedded links to the Foundational Research Institute, where anyone who wishes to learn more can find a whole website dedicated to the topic of animal well-being in the far future. I think being maximally transparent at all times for the sake of transparency may have unintended and costly consequences. For due diligence, I’ll ask Brian Tomasik what he thinks. I expect he’ll agree with me.
Hmm, I’m somewhat atypical in usually maintaining the heuristic that one should generally be transparent even in cases where doing so seems like it might hurt one’s popularity or receptiveness to one’s ideas. So I’m not that worried about journalists quoting out of context, and I think it’d be a big cost to hamper honest discussion just because of that concern. (The cost in terms of worse discussion probably exceeds any benefits in terms of not turning people off?)
Many other EAs disagree with me. They make an interesting point: even if one’s ultimate goal is to help people discover their own idealized preferences, bridging inferential gaps more slowly can make it more likely that people will eventually update to “weird” positions that they will then endorse on reflection.
Yeah, it is a real toss-up here. I perceive lots of merit in your position above, but I also see merit in the arguments of the EAs who disagree with you.
I used the phrase “maximally transparent” above, as distinct from “transparent”. I’ll unpack that so it’s clearer what I mean and why. Transparency in expressing ideas in writing, at least, can be thought of as following the spirit and/or letter of Grice’s maxims.
Being “maximally transparent” would be following all of Grice’s maxims as much as possible. The ‘compromise’ statement I gave above, describing the role of far-future animal altruism, follows the maxims of quality and relation, but doesn’t fully satisfy the maxims of quantity and clarity (“manner” at the link; I prefer “clarity”, and was taught it that way). We’re volunteering as much info as necessary (as an intro to the topic), but not all the info which would be appropriate (as an intro to the topic).
So, we’re satisficing instead of optimizing for transparent communication, on the prediction that this will prevent the costs of journalists or pundits using our own words against us at a later date.
Is this transparent “enough”? Is it better to optimize for transparent communication, because we’re overrating the real costs of blowback? Ought we optimize communication for maximum transparency at all times because journalists and pundits who want to hurt EA will find a way to do so regardless? How would we make the implied prediction both explicit and testable? I don’t know. Those are questions I’d love to see answers to. The above ‘compromise’ position is as far as I’ve gotten.
Just to be clear, I do “believe in” the near-term reasons outlined in this article, even though far future arguments also matter a lot to me. I also think there’s a lot of overlap, e.g. if something is neglected now, that can be good reason to think it will continue being neglected. I don’t think this post deviated from evidence-based thinking, the use of reason, or intellectual honesty.
I think personal posts are important, but introductory content and topic summaries are also useful. Several people have asked for a post on “why animals matter” like this one, and I don’t think they’d have been nearly as interested in a post where >75% of the content was about the far future considerations.
Also, in case anyone missed it, I did mention this in the post: “Consideration of the far future is the strongest factor in favor of prioritizing animal advocacy for many long-time EAs, including myself.”