Yeah, it's a real toss-up here. I see a lot of merit in your position above, but I also see merit in the arguments of EAs who disagree with you.
I used the phrase “maximally transparent” above, as distinct from “transparent”. I’ll unpack that so it’s clearer what I mean and why. Transparency in expressing ideas in writing, at least, can be thought of as following the spirit and/or letter of Grice’s maxims.
Being “maximally transparent” would mean following all of Grice’s maxims as fully as possible. The ‘compromise’ statement I gave above, describing the role of far-future animal altruism, follows the maxims of quality and relation, but doesn’t fully satisfy the maxims of quantity and clarity (“manner” at the link; I prefer “clarity”, which is also the term I was taught). We’re volunteering as much info as necessary for an intro to the topic, but not all the info that would be appropriate for one.
So we’re satisficing rather than optimizing for transparent communication, on the prediction that this will avoid the costs of journalists or pundits using our own words against us at a later date.
Is this transparent “enough”? Would it be better to optimize for transparent communication because we’re overrating the real costs of blowback? Ought we optimize communication for maximum transparency at all times, since journalists and pundits who want to hurt EA will find a way to do so regardless? How would we make the implied prediction both explicit and testable? I don’t know. Those are questions I’d love to see answered. The above ‘compromise’ position is as far as I’ve gotten.