There’s a time and place to discuss exceptions to ethics and when goals might justify the means, but this post clearly isn’t it.
I agree that the more inquisitive posts are more interesting, but the goal of this post is clearly not to reflect deeply on what to learn from the situation. It’s RP giving an update/statement that’s legally robust and shares the most important details relevant to RP’s functioning.
I read the original comment not as an exhortation to always include lots of nuanced reflection in mostly-unrelated posts, but to have a norm that on the forum, the time and place to write sentences that you do not think are actually true as stated is “never (except maybe April Fools)”.
The change I’d like to see in this post isn’t a five-paragraph footnote on morality, just the replacement of a sentence that I don’t think they actually believe with one they do. I think that environments where it is considered a faux pas to point out “actually, I don’t think you can have a justified belief in the thing you said” are extremely corrosive to the epistemics of a community hosting those environments, and it’s worth pushing back on them pretty strongly.
Also note that their statement included “...that occurred at FTX”. So not any potential fraud anywhere.
Ah, I didn’t mean to imply that Habryka’s comment was a faux pas. That’s awkward phrasing of mine. I just meant to say that the points he raises feel irrelevant to this post and its context.
There’s a time and place to discuss exceptions to ethics and when goals might justify the means, but this post clearly isn’t it.
Wait, if now isn’t the time to be specific about what actions we actually condemn and what actual ethical lines to draw, when is it? Clearly one of the primary things that this post is trying to communicate is that Rethink condemns certain actions at FTX. It seems extremely important (and highly deceptive to do otherwise) to be accurate in what it condemns, and in what ways.
Like, let’s look ahead a few months. Some lower-level FTX employee is accused of having committed some minor fraud with good ethical justification that actually looks reasonable according to RP leadership, so they make a statement coming out in defense of that person.
Do you not expect this to create strong feelings of betrayal in previous readers of this post, and a strong feeling of having been lied to? Many people right now are looking for reassurance about where the actual ethical lines are that EA is drawing. Trying to reassure those people seems like one of the primary goals of this post.
But this post appears to me to be basically deceptive about where those lines are, or massively premature in its conviction on where to make commitments for the future (like, I think it would both have quite bad consequences to defend an individual who had committed ethically justifiable fraud, and also be a mistake to later on condemn that individual, because I guess RP has now committed to a stance of being against all fraud, independently of its circumstances, with this post written as is).
I think one of the primary functions of this post is to reassure readers about what kind of behavior we consider acceptable and what kind of behavior we do not consider acceptable. Being inaccurate or deceptive about that line is a big deal. Indeed, I think being accurate about those lines is probably the most important component of posts like this, and the component that will have the longest-lasting consequences.
There is of course an easy way out, which is to just express uncertainty about where the ethical lines are, or to just not make extremely strong statements that you don’t believe about where the lines are in the first place. I think we are still learning about what happened. Holden’s post and ARC’s posts, for example, do not strike me as overstepping what they believe or know.
Many people right now are looking for reassurance about where the actual ethical lines are that EA is drawing. Trying to reassure those people seems like one of the primary goals of this post.
(Speaking for myself; I was not involved in drafting the post, though I read an earlier version of it.) FWIW this is very much not how I read the post, which is more like “organizational updates in light of FTX crashing.” RP’s financial position, legal position, approach to risk, and future hiring plans all seem to be relevant here, at least for current and future collaborators, funders, and employees. They also take up more lines than the paragraph you focused on, and carry more information than discussions about EA ethical lines, which are quite plentiful on the forum and elsewhere.
That’s possible! My guess is most readers are more interested in the condemnation part though, given the overwhelming support that posts like this have received, which have basically no content besides condemnation (and IMO with even bigger problems on being inaccurate about where to draw ethical lines).
It is plausible that RP primarily aimed to just give an organizational update, though I do think that, de facto, the condemnation part will end up being more important, having a greater effect on the world, and being referred back to more frequently than the other content. So there might just be a genuine mismatch between the primary goals RP has with this post and where the majority of its effect will come from.
My guess is most readers are more interested in the condemnation part though, given the overwhelming support that posts like this have received, which have basically no content besides condemnation (and IMO with even bigger problems on being inaccurate about where to draw ethical lines).
I think my post is quite clear about what sort of fraud I am talking about. If you look at the reasons that I give in my post for why fraud is wrong, they clearly don’t apply to any of the examples of justifiable lying that you’ve provided here (lying to Nazis, doing the least fraudulent thing in a catch-22, lying by accident, etc.).
In particular, if we take the lying to Nazis example and see what the reasons I provide say:
When we, as humans, consider whether or not it makes sense to break the rules for our own benefit, we are running on corrupted hardware: we are very good at justifying to ourselves that seizing money and power for our own benefit is really for the good of everyone. If I found myself in a situation where it seemed to me like seizing power for myself was net good, I would worry that in fact I was fooling myself—and even if I was pretty sure I wasn’t fooling myself, I would still worry that I was falling prey to the unilateralist’s curse if it wasn’t very clearly a good idea to others as well.
This clearly doesn’t apply to lying to Nazis, since it’s not a situation where money and power are being seized for oneself.
Additionally, if you’re familiar with decision theory, you’ll know that credibly pre-committing to follow certain principles—such as never engaging in fraud—is extremely advantageous, as it makes clear to other agents that you are a trustworthy actor who can be relied upon. In my opinion, I think such strategies of credible pre-commitments are extremely important for cooperation and coordination.
I think the fact that you would lie to a Nazi makes you more trustworthy for coordination and cooperation, not less.
Furthermore, I will point out, if FTX did engage in fraud here, it was clearly in fact not a good idea in this case: I think the lasting consequences to EA—and the damage caused by FTX to all of their customers and employees—will likely outweigh the altruistic funding already provided by FTX to effective causes.
And in the case of lying to Nazis, the consequences are clearly positive.
I am working on a longer response to your post, so not going to reply to you here in much depth.
Responding to this specific comment:
I don’t think your line of argumentation here makes much sense (making very broad statements like “Fraud in the service of Effective Altruism is unacceptable” but then saying “well, but of course only the kind of fraud for which I gave specific counterarguments”). Your post did not indicate that it was talking about any narrower definition of fraud, and I am confident (based on multiple conversations I’ve had about it) that it was being read by other readers as arguing for a broad definition of fraud. If you actually think it should only apply to a narrower definition of fraud, then I think you should add a disclaimer to the top explaining what kind of fraud you are talking about, or change the title.
I think you’re wrong about how most people would interpret the post. I predict that if readers were polled on whether or not the post agreed with “lying to Nazis is wrong” the results would be heavily in favor of “no, the post does not agree with that.” If you actually had a poll that showed the opposite I would definitely update.
I think the Nazi example is too loaded for various reasons (and triggers people’s “well, this is clearly some kind of thought experiment” sensors).
I think there are a number of other examples I have listed in the comments on this post that would show this. E.g. something in the space of “a Jewish person lies about their religious affiliation in order to escape discrimination that’s unfair to them, for something like scholarship money, of which they then donate a portion (partially because they do want to offset the harm that came from being dishonest)” is, I think, a better experiment here.
I think people would interpret your post as being pretty clearly and strongly against this, in a way that doesn’t seem very justified to me (my model of whether this is OK is pretty context-dependent, to be clear).
Adding on to my other reply: from my perspective, I think that if I say “category A is bad because X, Y, Z” and you’re like “but edge case B!” and edge case B doesn’t satisfy X, Y, or Z, then clearly I’m not including it in category A.
That sounds like a fully generalized defense against all counterarguments, and I don’t think that’s how discourse usually works. If you say “proposition A is true about category B, for reasons X, Y, Z” and someone else is like “but here is an argument C for why proposition A is not true about category B”, then of course you don’t get to be like, “oh, well, I of course meant the subset of category B where argument C doesn’t hold”.
If I say “being honest is bad because sometimes people use true information against you” and you say “but sometimes they won’t though and actually use it to help you”, then I can’t say “well, of course I didn’t include that case when I was talking about ‘being honest’, I was just talking about being honest to people who don’t care about you”.
Or less abstractly, when you argue that giving money to GiveWell is good because money donated there can go much farther than otherwise, and then GiveWell turns out to have defrauded the money, then you don’t get to be like “oh, well, of course, in that case giving money to GiveWell was bad, and I meant to exclude the case where GiveWell was defrauding money, so my original post is still correct”.
That sounds like a fully generalized defense against all counterarguments, and I don’t think that’s how discourse usually works.
It’s clearly not fully general because it only applies to excluding edge cases that don’t satisfy the reasons I explicitly state in the post.
If you say “proposition A is true about category B, for reasons X, Y, Z” and someone else is like “but here is an argument C for why proposition A is not true about category B”, then of course you don’t get to be like, “oh, well, I of course meant the subset of category B where argument C doesn’t hold”.
Sure, but that’s not what happened. There are some pretty big disanalogies between the scenarios you’re describing and what actually happened:
The question is about what activities belong to the vague, poorly defined category of “fraud,” not about the truth of some clearly stated “proposition A.” When someone says “category A has property X,” for any vague category A—which is basically all categories of things—there will always be edge cases where it’s not clear.
You’re not presenting some new “argument C” for why fraud is good actually. You’re just saying there are edge cases where my arguments don’t apply. Which is obviously correct! But since there are always edge cases for all categories, that’s effectively just an objection to the use of categories at all.
Furthermore, in this case, I pretty clearly laid out exactly why I thought fraud was bad, which gives you a lot of evidence to figure out what class of things I was centrally pointing to when using “fraud” as a general category. And it’s pretty clear, based on those reasons, that the examples you’re providing don’t fit into that category.
The question is about what activities belong to the vague, poorly defined category of “fraud,” not about the truth of some clearly stated “proposition A.” When someone says “category A has property X,” for any vague category A—which is basically all categories of things—there will always be edge cases where it’s not clear.
I mean, indeed the combination of “fraud is a vague, poorly defined category” together with a strong condemnation of said “fraud”, without much explicit guidance on what kind of thing you are talking about, is what I am objecting to in your post (among some other things, but again, seems better to leave that up to my more thorough response).
I think you are vastly overestimating how transparent the boundaries are of the fraud concept you are trying to point to. Like, I don’t know whether you meant to include half of the examples I listed on this thread, and I don’t think other readers of your post do either. Nevertheless, you called for strong condemnation of that ill-defined category.
I think the average reader of your post will leave with a feeling that they are supposed to be backing up some kind of clear line, because that’s the language your post is written in. But there is no clear line, and your post does not actually meaningfully commit us to anything, nor should it serve as any kind of clear sign to the external world about where our ethical lines are.
Of course we oppose fraud of the type that Sam committed: fraud that exploded violently, was incredibly reckless, and was likely even net-negative by Sam’s own goals. But that’s obvious and not an interesting statement, and it is not actually what your post is primarily saying (indeed, it is saying that we should condemn fraud independently of the details of the FTX case, whatever that means).
I think what we owe the world is both reflection about where our actual lines are (and how the ones that we did indeed have might have contributed to this situation) and honest and precise statements about what kinds of things we might actually consider doing in the future. I don’t think your post is helping with either; instead it feels to me like an inwards-directed applause light for “fraud bad”, in a way that does not give people who have genuine concerns about where our moral lines are (which includes me) much comfort or reassurance.
I mean, indeed the combination of “fraud is a vague, poorly defined category” together with a strong condemnation of said “fraud”, without much explicit guidance on what kind of thing you are talking about, is what I am objecting to in your post.
I guess I don’t really think this is a problem. We’re perfectly comfortable with statements like “murder is wrong” while also understanding that “but killing Hitler would be okay.” I don’t mean to say that talking about the edge cases isn’t ever helpful—in fact, I think it can be quite useful to try to be clear about what’s happening on the edges in certain cases, since it can sometimes be quite relevant. But I don’t see that as a reason to object to someone saying “murder is wrong.”
To be clear, if your criticism is “the post doesn’t say much beyond the obvious,” I think that’s basically correct—it was a short post and wasn’t intended to accomplish much more than basic common knowledge building around this sort of fraud being bad even when done with ostensibly altruistic motivations. And I agree that further posts discussing more clearly how to think about various edge cases would be a valuable contribution to the ongoing discussion (though I don’t personally plan to write such a post because I think I have more valuable things to do with my time).
However, if your criticism is “your post says edge case B is bad but edge case B is actually good,” I think that’s a pretty silly criticism that seems like it just doesn’t really understand or engage with the inherent fuzziness of conceptual categories.
I think what we owe the world is both reflection about where our actual lines are (and how the ones that we did indeed have might have contributed to this situation) and honest and precise statements about what kinds of things we might actually consider doing in the future.
I actually state in the post that I agree with this. From my post:
In that spirit, I think it’s worth us carefully confronting the moral question here: is fraud in the service of raising money for effective causes wrong?
Perhaps that is not as clear as you would like, but like I said it was a short post. And that sentence is pretty clearly saying that I think it’s worthwhile for us to try to carefully confront the moral question of what is okay and what is not—which the post then attempts to start the discussion on by providing some of what I think.
I do think your post is making actually answering that question as a community harder, because you yourself answer that question with “we unequivocally need to condemn this behavior” in a form that implies strong moral censure to anyone who argues the opposite.
You also said that we should do so independently of the facts of the FTX case, which feels weird to me, because I sure think the details of the case are very relevant to what ethical lines I want to draw in the future.
The section you quote here reads to me as a rhetorical question. You say “carefully”, but you just answer the question yourself in the next sentence and say that the answer “clearly” is the way you say it is. I don’t think your post invites discussion or discourse about where the lines of fraud are, or when we do think deception is acceptable, or generally reflecting on our moral principles.
in a form that implies strong moral censure to anyone who argues the opposite
I don’t think this and didn’t say it. If you have any quotes from the post that you think say this, I’d be happy to edit it to be more clear, but from my perspective it feels like you’re inventing a straw man to be mad at rather than actually engaging with what I said.
You also said that we should do so independently of the facts of the FTX case, which feels weird to me, because I sure think the details of the case are very relevant to what ethical lines I want to draw in the future.
I think that, for the most part, you should be drawing your ethical boundaries in a way that is logically prior to learning about these sorts of facts. Otherwise it’s very hard to cooperate with you, for example.
The section you quote here reads to me as a rhetorical question.
It isn’t intended as a rhetorical question. I am being quite sincere there, though rereading it, I see how you could be confused. I just edited that section to the following:
In that spirit, I think it’s worth us carefully confronting the moral question here: is fraud in the service of raising money for effective causes wrong? This is a thorny moral question that is worth nuanced discussion, and I don’t claim to have all the answers.
Nevertheless, I think fraud in the service of effective altruism is basically unacceptable—and that’s as someone who is about as hardcore of a total utilitarian as it is possible to be.
I don’t think this and didn’t say it. If you have any quotes from the post that you think say this, I’d be happy to edit it to be more clear, but from my perspective it feels like you’re inventing a straw man to be mad at rather than actually engaging with what I said.
I mean, the title of your post starts with “We must be very clear”. This, at least to me, communicated an attitude that discourages people prominently associated with EA from going “I don’t know man, I don’t think I stand behind this”. I don’t really know what other purpose the “we must be very clear” here serves besides trying to indicate that you think it’s very important that EA projects a unified front here.
And, independently of your intention, I am confident that your post has also not made other people excited about discussing the actual ethical lines here, based on conversations I’ve had with other people about how they relate to your post (many of whom like the post, but exactly because they don’t want to see people defending fraud, which would look quite bad for us).
I think that, for the most part, you should be drawing your ethical boundaries in a way that is logically prior to learning about these sorts of facts. Otherwise it’s very hard to cooperate with you, for example.
Yeah, I think I disagree with this. I think most of my ethical boundaries are pretty contingent on facts about history and what kinds of cognitive algorithms seem to perform well or badly, and indeed almost all my curiosities when trying to actually genuinely answer the question of when fraud is acceptable consist of questions about the empirical details of the world, like “to what degree is your environment coercive so that fraud is justified?” and “to what degree is fraud widespread?” and “how many people does fraud seem to hurt?”, and so on.
I don’t think this makes me harder to coordinate with. Indeed, I think being receptive to empirical feedback about ethical rules is quite important for being able to be cooperated with, since it gives people confidence that I will update on evidence that some cognitive strategy, or some attitude, or some moral perspective causes harm.
I don’t really know what other purpose the “we must be very clear” here serves besides trying to indicate that you think it’s very important that EA projects a unified front here.
I am absolutely intending to communicate that I think it would be good for people to say that they think fraud is bad. But that doesn’t mean that I think we should condemn people who disagree regarding whether saying that is good or not. Rather, I think discussion about whether it’s a good idea for people to condemn fraud seems great to me, and my post was an attempt to provide my (short, abbreviated) take on that question.
Like, let’s look ahead a few months. Some lower-level FTX employee is accused of having committed some minor fraud with good ethical justification that actually looks reasonable according to RP leadership, so they make a statement coming out in defense of that person.
Do you not expect this to create strong feelings of betrayal in previous readers of this post, and a strong feeling of having been lied to?
I broadly agree with your comments otherwise, but in fact in this hypothetical I expect most readers of this post would not feel betrayed or lied to. It’s really uncommon for people to interpret words literally; I think the standard interpretation of the condemnation part of this post will be something along the lines of “stealing $8b from customers is bad” rather than the literal thing that was written. (Or at least that’ll be the standard interpretation for people who haven’t read the comments.)
The negative consequence I’d point to is that you lose the ability to convey information in cases where it matters. If Rethink says “X is bad, we should do something about it” I’m more likely to ignore it than if you said it.
Yeah, sorry, I think you are right that as phrased this is incorrect. I think my phrasing implies I am talking about the average or median reader, who I don’t expect to react in this way.
Across EA, I do expect reactions to be pretty split. I do expect many of the most engaged EAs to have taken statements like this pretty literally and to feel quite betrayed (while I also think that, in general, the vast majority of people will have interpreted the statements as being more about mood-affiliation and not really intended to convey information).
I do think that my engagement with EA, and that of many people I know, is pretty conditional on exactly this ability: for people in EA to make ethical statements and actually mean them, in the sense of being interested in following through on the consequences of those statements and trying to make their many different ethical statements consistent. Losing that ability would lose a lot of what makes EA valuable, at least for me and many people I know.
Fwiw I’d also say that most of “the most engaged EAs” would not feel betrayed or lied to (for the same reasons), though I would be more uncertain about that. Mostly I’m predicting that there’s pretty strong selection bias in the people you’re thinking of, and you’d have to really precisely pin them down (e.g. maybe something like “rationalist-adjacent highly engaged EAs who have spent a long time thinking about meta-honesty and glomarization”) before it would become true that a majority of them would feel betrayed or lied to.
That’s plausible, though I do think I would take a bet here if we could somehow operationalize it. I do think I have to adjust for a bunch of selection effects in my thinking, and so am not super confident here, but still a bit above 50%.
There’s a time and place to discuss exceptions to ethics and when goals might justify the means, but this post clearly isn’t it.
Other folks in this comment thread mentioned that Ollie’s request doesn’t require any long philosophical analyses; it just requires leaving out sentences that are hyperbole.
I want to separately bid for a norm on the EA Forum that we err on the side of “encouraging factual discussion at awkward times and in awkward places”, as opposed to erring on the side of “people wait around for a maximally clear social signal that it’s Okay to voice their thoughts”. If a post like this belongs on the EA Forum at all, then I think it should be fine to do our normal EA-Forum thing of nitpicking phrasings, asking follow-up questions, etc.
It’s RP giving an update/statement that’s legally robust
I don’t think that in this case, saying false things improves RP’s legal situation. I’d assume the goal is reputational (send the right social signals to EAs and random-journalists-and-social-media-users-paying-attention-to-EA), as opposed to legal.
But yes, there might be legal reasons to leave out the sentence altogether, if the alternative is to try to hammer out a much more concrete and detailed version of the sentence? Also, this is a co-written post, and it can be hard to phrase those in ways that are agreeable to every co-author.
I basically agree.
I personally mainly disagree with Oliver on the above thread—however, given that there is disagreement, it seems very healthy to me for there to be an open discussion on it.
In this case the issue doesn’t seem scary to discuss publicly. If this were about a much more directly controversial and serious issue, say about public allegations about individuals, that’s where I’d prefer trying to begin it privately first.
I don’t think that in this case, saying false things improves RP’s legal situation. I’d assume the goal is reputational
I personally didn’t see this as a legal statement, as much as a public statement meant for the community at large.