Can you give a probability (conditional upon the world existing, humans being alive, etc.) for the following statement?
> In the year 2300, the majority of ethical philosophers would consider the claim “it is an ethically permissible and perhaps even obligatory act of civil disobedience for employers to pressure their employees to drive cars without a driver’s license, in situations similar to that faced by people living in the United States in 2021” to be correct and perhaps even self-evident.
(if you don’t like the operationalization of ethical philosophers as a stand-in for moral progress, I’m happy for you to come up with a better one).
___
Taking a step back, I do want to preserve the “philosophy club” aspect of EA[1]. I think certain EA (ethics) discussions, with their naïveté, willingness to think outside the box and follow arguments to their logical conclusions, and high decoupling, are at their best quite valuable and insightful in discovering moral innovations and alternative empirical worldviews.
And often you need to think pretty radical ideas before coming up with fairly sensible solutions. For example, without Brian Tomasik’s early work on Wild Animal Suffering, we would not have sensible interventions on humane pesticides and shrimp welfare. Without early figures like Bentham, Singer, Parfit, Bostrom, Yudkowsky, Shulman, etc., being very willing to entertain radical ideas, our movement would be meaningfully different.
But I think people need to be careful about “ethical innovation” once they start having significant effects in the world: leading nonprofits, funding projects, running large fintech companies, etc.
I think Kant’s distinction between the public and private use of reason is relevant here. Namely, the “public use of reason” is what you advocate for and the ideas you explore in the public arena, whereas the “private use of reason” is what you actually do in your own life (both personally and professionally, but especially professionally). It’s a bit confusing because most people’s intuitions about which side counts as “public” and which as “private” don’t line up with these labels. But I think the core conclusion, in my lights, is that you can get most of the benefits[2] of ethical innovation at the discourse stage without having to bite the bullet that companies, governments, etc. also need to be run by “ethical innovators.”
So in the alleged Nonlinear example, it is one thing for Emerson or Kat, in their capacities as public commentators (eg, EA Forum commentators or op-ed writers), to advance a position that driver’s license laws are immoral and classist and we should get rid of them, while being careful that they are not speaking for any organization or in a professional capacity, etc. It is quite another, in their capacities as nonprofit managers, to mandate that an employee or semi-employee illegally drive a car without a license.
Now, given the demarcation above, it may seem like I should be happy (morally and epistemically, if not intellectually) with “novel” ethical positions advanced on the forum, since the EA Forum is a discussion venue rather than an action venue. However, when people discuss bizarre edge cases about breaking the law without (eg) a careful public/private demarcation, in the context of a discussion where long-time forum members have been alleged to break laws in a situation that’s clearly not one of those bizarre edge cases, I worry that both the commentators and the onlookers don’t in fact have a clean separation in their heads of what’s okay to say vs. do, including, unfortunately, people with quite a bit of power in absolute terms.
(Also I’m limited on free time and I can’t tell if you’re trolling etc, so I’m probably not going to comment further on this thread, sorry).
Actual philosophers, in comparison, feel relatively sterile and uncreative, at least in my discussions with them. (Though ofc I might not be clever/charismatic enough to elicit the best opinions out of them; also, outgroup homogeneity is a thing.) At least when I talk to analytic philosophy grad students, I’m often reminded of this quote from a profile of Parfit:
> He attended a lecture by a Continental philosopher that addressed some important subject such as suicide or the meaning of life, but he couldn’t understand any of it. He went to hear an analytic philosopher who spoke on a trivial topic but was quite lucid. He wondered whether it was more likely that Continental philosophers would become more lucid or analytic philosophers less trivial. He decided that the second was more likely, and returned to Oxford.
A “conformity in action but not in speech” may not be enough to prevent acute moral crises, like the Holocaust. But it’s one of the few non-technological factors that I expect to have a chance at driving steady moral progress, in the long run.
I think the probability of your “year 2300” statement is very low.
One meta-point I’m trying to make here is that I don’t think we should be too hasty to derive + enforce very general ethical rules after examining a single case study. Ben’s account of Nonlinear’s behavior is troubling, and I hope the leadership takes a hard look in the mirror, but it’s important for us as a movement to learn the right lessons.
Thanks for bringing up the public vs private use of reason thing. A lot of my thinking on these questions was shaped by reading a book about the US in the antebellum + Civil War period. As I remember, in signing the Emancipation Proclamation, Abraham Lincoln was acting as an ethical innovator. (Advisors suggested he wait until right after a major Union victory to sign the proclamation, in order to better sell the North on the concept.) It does seem to me that recommending “Abraham Lincoln shouldn’t have signed the Emancipation Proclamation” is a pretty serious hit for an ethical rule to take.
(Note that an abolitionist soldier who fought well for the Union in the Civil War would be violating the deontological principle “don’t kill people” in order to produce a tiny shift in the probability of a hypothetical future benefit. And sure enough, in that soldier’s far future, we look back on that soldier as a hero. Furthermore, an analogous “year 2023” statement would appear to miss the point: many 2023 people think that killing is generally wrong, and also that the abolitionist soldier’s actions are justified in context.)
Another case where the “leaders shouldn’t be moral innovators” principle fails by my lights: is it ethical to persuade people in AI to care about animals to a greater degree than people in the general population do? I would say yes.
Another point re: leaders who innovate morally—as e.g. Holly discusses in this thread, EA has a long history of encouraging weirdness and experimentation. From my perspective, freedomandutility is attempting to innovate on this by making us all sticklers for following the law. And you, as an EA leader, appear to be endorsing this innovation. You might say that innovating by advocating inaction is different than innovating by advocating action, but (a) I’m a tad skeptical of act/omission distinctions and (b) endorsing an asymmetry like this could create a ratchet where EAs act less and less, because leaders are way more comfortable advocating inaction than action.
Re: crisis decisionmaking—my sense is that many EAs feel we are in a crisis, or on the verge of a crisis. So I do think this is a good time to discuss what’s ethically acceptable in a crisis, and what ethical rules would’ve performed well in past crises. (For example, one could argue that in a time of crisis, it is especially important to support rather than undermine your friends & allies, and Nonlinear’s leadership violated this principle.)
Thanks for engaging given your limited free time. I’m eager to read pushback from people who aren’t Linch as well.
I feel like my main actual position here is something like “just be cool, bro” and you’re like “what does ‘being cool’ actually mean? For that matter, what does ‘bro’ mean? Isn’t that kinda sexist?” And I’m like “okay, here’s one operationalization of ‘being cool’, and here’s an operationalization of ‘bro’ that doesn’t have sexist connotations,” and you’re like “edge case 17 here: Steve Jobs was super cool despite wearing a black turtleneck for many years and believing in homeopathy,” and my actual position is like “okay, but does that even matter? Like, what’s your Bayesian update toward ‘asshole’ vs. ‘actually supercool on a deep level’ for someone who consistently goes around saying that being cool is for scrubs? But okay, here’s another careful attempt to define ‘being cool’ in a way that gets around that edge case,” and you’re like “edge case 31” and I’m like “okay, I give up.”
Like I’m often on the other side of “precision of language is important” but here I’m not even sure you believe that the disagreements are semantic. I feel like some people (fortunately a minority) in these parts think that social norms need to be given at the level of precision that’s necessary to align an AGI, and I’m like, jesus fuck this is a good way to not have any norms at all.
Sorry, I didn’t mean to antagonize you that way.

I think I’m a somewhat high-scrupulosity person. When people say “EAs should abide by deontological rule X”, I hear: “EAs could get cancelled in the future if they violate rule X” and also: “the point of this deontological rule is that you abide by it in all cases, even in cases where it seems like a bad idea for other reasons”.
Some of the deontological rules people are suggesting in this thread are rules I can think of good reasons to violate—sometimes, what seem to me like very good reasons. So I push back on them because (a) I want people to critique my thinking, so I can update away from violating the proposed rule if necessary (related to your public vs private use of reason point?) and (b) I don’t care to get cancelled in the future for violating a bad rule that EAs came to accept uncritically.
I take the task of moral philosophy as identifying and debating edge cases, and I find this rather enjoyable even though it can trigger my scrupulosity. But your point that excess debate could result in no norms at all is an interesting one. Maybe we need a concept of “soft norms” which are a “yellow flag” to trigger debate if they’re getting violated.
I appreciate the apology.

> I think I’m a somewhat high-scrupulosity person. When people say “EAs should abide by deontological rule X”, I hear: “EAs could get cancelled in the future if they violate rule X” and also: “the point of this deontological rule is that you abide by it in all cases, even in cases where it seems like a bad idea for other reasons”.
To be clear, I never said the word “deontological” in this thread before, and when I searched for it on this post, almost all references to it are by you, except for a single comment by freedomandutility. I think it’s possible you were overreacting to someone’s poor choice of words, which I didn’t read as literal because the literal understanding is pretty clearly silly. (On the other hand, I note that this comment thread started before that comment.)
I also think your threat model of what causes cancellation in this community happens to be really poor, if you think it primarily results from the breaking of specific soft taboos even for extremely reasonable and obvious-to-everyone exigencies. It’s possible I have an illusion of transparency here because I’m quite familiar with this community, and maybe you’re really new to it?[1] But I really think you’re vastly overestimating both cancellation risk in general and in this community specifically.
> (b) I don’t care to get cancelled in the future for violating a bad rule that EAs came to accept uncritically.
Why? If EAs are so rigid that they literally uncritically follow overly prescriptive rules hashed out in EA Forum comments without allowing for exceptions for extreme exigencies, and they believe this so firmly that they cancel people over it, why do you want to be in this community? To the extent that you think community EA is valuable because it helps you be a better person, have more impact etc, being cancelled from it because people are totally inept is doing you (and the world) a favor. Then you can be free to do more important things rather than be surrounded by ethically incompetent people. [edited this paragraph to tone down language slightly]
> I take the task of moral philosophy as identifying and debating edge cases, and I find this rather enjoyable even though it can trigger my scrupulosity.
I think this is a pretty standard position among philosophically minded people. I disagree with the standard position; I think ethics is already amazingly hard in the mainline case, and longtermism even more so. There’s no reason to overcomplicate things when reality is already complicated enough. My guess is that we are nowhere near philosophically competent enough to be trying to solve the edge cases (especially in the comment threads of unrelated topics) when we don’t even have a handle on the hard problems that are practically relevant.
> Maybe we need a concept of “soft norms” which are a “yellow flag” to trigger debate if they’re getting violated.
To be clear, all norms already work this way. Like, I view approximately all norms this way, though in some cases the flags are red rather than yellow and in some cases the debate ought to be before the action (cf your reference to killing being justified during wartime; I’d rather people not kill first and then debate the ethics of it later).
But if you’re really new to this community, why do you care about being cancelled? And also, surely other communities aren’t insanely rigidly deontological? Even religions have concepts like Pikuach nefesh, the idea that (almost) all religious taboos can be violated for sufficiently important exigencies like saving lives.
I tend to be very concerned about hidden self-serving motives in myself and other people. This was my biggest takeaway from the FTX incident.
So regarding “extremely reasonable and obvious-to-everyone exigencies”, and “being cancelled … because people are total morons”—well, it seems potentially self-serving to say
> I’m going to break this rule because it’s obvious to everyone that breaking it is OK in this case. In fact, anyone who thinks it is wrong for me to break this rule is a total moron, and I should do myself a favor by ignoring their ethically incompetent opinion.
I know you work in longtermist grantmaking. I can’t speak for you, but if I were a grantmaker and someone said that to me during a call, I wouldn’t exactly consider it a good sign. It seems to betray a lack of self-skepticism, if nothing else.
Regarding the cluelessness stuff, it feels entangled with the deontological stuff to me, in the sense that one argument for deontological rules is that they help protect you from your own ignorance, and lack of imagination regarding how things could go wrong.
BTW, please don’t feel obligated to continue replying to me—I get the sense that I’m still aggravating you, and I don’t have a clear model for how to not do this.