I think I’m a somewhat high-scrupulosity person. When people say “EAs should abide by deontological rule X”, I hear: “EAs could get cancelled in the future if they violate rule X” and also: “the point of this deontological rule is that you abide by it in all cases, even in cases where it seems like a bad idea for other reasons”.
To be clear, I never said the word “deontological” in this thread before, and when I searched for it on this post, almost all references are by you, except in a single comment by freedomandutility. I think it’s possible you were overreacting to someone’s poor choice of words, which I didn’t take literally because the literal reading is pretty clearly silly. (On the other hand, I note that this comment thread started before that comment.)
I also think your threat model of what causes cancellation in this community is really poor, if you think cancellation primarily results from breaking specific soft taboos even for extremely reasonable and obvious-to-everyone exigencies. It’s possible I have an illusion of transparency here because I’m quite familiar with this community, and maybe you’re really new to it?[1] But I really think you’re vastly overestimating cancellation risk, both in general and in this community specifically.
(b) I don’t care to get cancelled in the future for violating a bad rule that EAs came to accept uncritically.
Why? If EAs are so rigid that they literally and uncritically follow overly prescriptive rules hashed out in EA Forum comments, without allowing exceptions for extreme exigencies, and believe in those rules so firmly that they cancel people over them, why do you want to be in this community? To the extent that you think community EA is valuable because it helps you be a better person, have more impact, etc., being cancelled from it because people are totally inept is doing you (and the world) a favor. Then you’d be free to do more important things rather than be surrounded by ethically incompetent people. [edited this paragraph to tone down language slightly]
I take the task of moral philosophy as identifying and debating edge cases, and I find this rather enjoyable even though it can trigger my scrupulosity.
I think this is a pretty standard position among philosophically minded people. I disagree with it: ethics is already amazingly hard in the mainline case, and longtermism even more so; there’s no reason to overcomplicate things when reality is already complicated enough. My guess is that we are nowhere near philosophically competent enough to be solving the edge cases (especially in the comment threads of unrelated posts) when we don’t even have a handle on the hard problems that are practically relevant.
Maybe we need a concept of “soft norms” which are a “yellow flag” to trigger debate if they’re getting violated.
To be clear, all norms already work this way. Like, I view approximately all norms this way, though in some cases the flags are red rather than yellow, and in some cases the debate ought to happen before the action (cf. your reference to killing being justified during wartime; I’d rather people not kill first and then debate the ethics of it later).
But if you’re really new to this community, why do you care about being cancelled? And also, surely other communities aren’t this insanely rigid about deontology? Even religions have concepts like Pikuach nefesh, the idea that (almost) all religious taboos can be violated for sufficiently important exigencies like saving lives.
I tend to be very concerned about hidden self-serving motives in myself and other people. This was my biggest takeaway from the FTX incident.
So regarding “extremely reasonable and obvious-to-everyone exigencies”, and “being cancelled … because people are total morons”—well, it seems potentially self-serving to say:
I’m going to break this rule because it’s obvious to everyone that breaking it is OK in this case. In fact, anyone who thinks it is wrong for me to break this rule is a total moron, and I should do myself a favor by ignoring their ethically incompetent opinion.
I know you work in longtermist grantmaking. I can’t speak for you, but if I were a grantmaker and someone said that to me during a call, I wouldn’t exactly consider it a good sign. It seems to betray a lack of self-skepticism, if nothing else.
Regarding the cluelessness stuff, it feels entangled with the deontological stuff to me, in the sense that one argument for deontological rules is that they help protect you from your own ignorance and lack of imagination about how things could go wrong.
BTW, please don’t feel obligated to continue replying to me—I get the sense that I’m still aggravating you, and I don’t have a clear model of how to avoid doing this.
I appreciate the apology.