I think the probability of your “year 2300” statement is very low.
One meta-point I’m trying to make here is that I don’t think we should be too hasty to derive and enforce very general ethical rules after examining a single case study. Ben’s account of Nonlinear’s behavior is troubling, and I hope the leadership takes a hard look in the mirror, but it’s important for us as a movement to learn the right lessons.
Thanks for bringing up the public vs. private use of reason thing. A lot of my thinking on these questions was shaped by reading a book about the US in the antebellum and Civil War period. As I remember it, in signing the Emancipation Proclamation, Abraham Lincoln was acting as an ethical innovator. (Advisors suggested he wait until right after a major Union victory to sign the proclamation, in order to better sell the North on the concept.) It does seem to me that an ethical rule which implies “Abraham Lincoln shouldn’t have signed the Emancipation Proclamation” takes a pretty serious hit.
(Note that an abolitionist soldier who fought well for the Union in the Civil War would be violating the deontological principle “don’t kill people” in order to produce a tiny shift in the probability of a hypothetical future benefit. And sure enough, from the vantage of that soldier’s far future, we look back on that soldier as a hero. Furthermore, an analogous “year 2023” statement would appear to miss the point: many people in 2023 think that killing is generally wrong, and also that the abolitionist soldier’s actions are justified in context.)
Another case where the “leaders shouldn’t be moral innovators” principle fails by my lights: is it ethical to persuade people in AI to care about animals to a greater degree than people in the general population do? I would say yes.
Another point re: leaders who innovate morally—as e.g. Holly discusses in this thread, EA has a long history of encouraging weirdness and experimentation. From my perspective, freedomandutility is attempting to innovate on this by making us all sticklers for following the law. And you, as an EA leader, appear to be endorsing this innovation. You might say that innovating by advocating inaction is different than innovating by advocating action, but (a) I’m a tad skeptical of act/omission distinctions and (b) endorsing an asymmetry like this could create a ratchet where EAs act less and less, because leaders are way more comfortable advocating inaction than action.
Re: crisis decisionmaking—my sense is that many EAs feel we are in a crisis, or on the verge of a crisis. So I do think this is a good time to discuss what’s ethically acceptable in a crisis, and what ethical rules would’ve performed well in past crises. (For example, one could argue that in a time of crisis, it is especially important to support rather than undermine your friends & allies, and Nonlinear’s leadership violated this principle.)
Thanks for engaging given your limited free time. I’m eager to read pushback from people who aren’t Linch as well.
I feel like my main actual position here is something like “just be cool, bro,” and you’re like “what does ‘being cool’ actually mean? For that matter, what does ‘bro’ mean? Isn’t that kinda sexist?” And I’m like “okay, here’s one operationalization of ‘being cool,’ and here’s an operationalization of ‘bro’ that doesn’t have sexist connotations,” and you’re like “edge case 17 here: Steve Jobs was super cool despite wearing a black turtleneck for many years and believing in homeopathy,” and my actual position is like “okay, but does that even matter? Like, what’s your Bayesian update for ‘asshole’ vs. ‘actually supercool on a deep level’ on someone who consistently goes around saying that being cool is for scrubs? But okay, here’s another careful attempt to define ‘being cool’ in a way that gets around that edge case,” and you’re like “edge case 31,” and I’m like “okay, I give up.”
Like, I’m often on the other side of “precision of language is important,” but here I’m not even sure you believe that the disagreements are semantic. I feel like some people (fortunately a minority) in these parts think that social norms need to be given at the level of precision that’s necessary to align an AGI, and I’m like, jesus fuck, this is a good way to not have any norms at all.
Sorry, I didn’t mean to antagonize you that way.

I think I’m a somewhat high-scrupulosity person. When people say “EAs should abide by deontological rule X”, I hear: “EAs could get cancelled in the future if they violate rule X” and also: “the point of this deontological rule is that you abide by it in all cases, even in cases where it seems like a bad idea for other reasons”.
Some of the deontological rules people are suggesting in this thread are rules I can think of good reasons to violate—sometimes, what seem to me like very good reasons. So I push back on them because (a) I want people to critique my thinking, so I can update away from violating the proposed rule if necessary (related to your public vs private use of reason point?) and (b) I don’t care to get cancelled in the future for violating a bad rule that EAs came to accept uncritically.
I take the task of moral philosophy as identifying and debating edge cases, and I find this rather enjoyable even though it can trigger my scrupulosity. But your point that excess debate could result in no norms at all is an interesting one. Maybe we need a concept of “soft norms” which are a “yellow flag” to trigger debate if they’re getting violated.
I think I’m a somewhat high-scrupulosity person. When people say “EAs should abide by deontological rule X”, I hear: “EAs could get cancelled in the future if they violate rule X” and also: “the point of this deontological rule is that you abide by it in all cases, even in cases where it seems like a bad idea for other reasons”.
I appreciate the apology.

To be clear, I never said the word “deontological” in this thread before, and when I searched for it on this post, almost all references are by you, except in a single comment by freedomandutility. I think it’s possible you were overreacting to someone’s poor choice of words, which I didn’t take literally because the literal reading is pretty clearly silly. (On the other hand, I note that this comment thread started before that comment.)
I also think your threat model of what causes cancellation in this community happens to be really poor, if you think it primarily results from the breaking of specific soft taboos even for extremely reasonable and obvious-to-everyone exigencies. It’s possible I have an illusion of transparency here because I’m quite familiar with this community, and maybe you’re really new to it?[1] But I really think you’re vastly overestimating both cancellation risk in general and in this community specifically.
(b) I don’t care to get cancelled in the future for violating a bad rule that EAs came to accept uncritically.
Why? If EAs are so rigid that they literally uncritically follow overly prescriptive rules hashed out in EA Forum comments without allowing for exceptions for extreme exigencies, and they believe this so firmly that they cancel people over it, why do you want to be in this community? To the extent that you think community EA is valuable because it helps you be a better person, have more impact etc, being cancelled from it because people are totally inept is doing you (and the world) a favor. Then you can be free to do more important things rather than be surrounded by ethically incompetent people. [edited this paragraph to tone down language slightly]
I take the task of moral philosophy as identifying and debating edge cases, and I find this rather enjoyable even though it can trigger my scrupulosity.
I think this is a pretty standard position among philosophically minded people. I disagree with the standard position; I think ethics is already amazingly hard in the mainline case, and longtermism even more so; there’s no reason to overcomplicate things when reality is already complicated enough. My guess is that we are nowhere near philosophically competent enough to be trying to solve the edge cases (especially in the comment threads of unrelated topics) when we don’t even have a handle on the hard problems that are practically relevant.
Maybe we need a concept of “soft norms” which are a “yellow flag” to trigger debate if they’re getting violated.
To be clear, all norms already work this way. Like, I view approximately all norms this way, though in some cases the flags are red rather than yellow and in some cases the debate ought to be before the action (cf your reference to killing being justified during wartime; I’d rather people not kill first and then debate the ethics of it later).
But if you’re really new to this community, why do you care about being cancelled? And also, surely other communities aren’t insanely rigidly deontological? Even religions have concepts like Pikuach nefesh, the idea that (almost) all religious taboos can be violated for sufficiently important exigencies like saving lives.
I tend to be very concerned about hidden self-serving motives in myself and other people. This was my biggest takeaway from the FTX incident.
So regarding “extremely reasonable and obvious-to-everyone exigencies”, and “being cancelled … because people are total morons”—well, it seems potentially self-serving to say
I’m going to break this rule because it’s obvious to everyone that breaking it is OK in this case. In fact, anyone who thinks it is wrong for me to break this rule is a total moron, and I should do myself a favor by ignoring their ethically incompetent opinion.
I know you work in longtermist grantmaking. I can’t speak for you, but if I were a grantmaker and someone said that to me during a call, I wouldn’t exactly consider it a good sign. Seems to betray a lack of self-skepticism, if nothing else.
Regarding the cluelessness stuff, it feels entangled with the deontological stuff to me, in the sense that one argument for deontological rules is that they help protect you from your own ignorance, and lack of imagination regarding how things could go wrong.
BTW, please don’t feel obligated to continue replying to me—I get the sense that I’m still aggravating you, and I don’t have a clear model for how to not do this.