Like, let’s look ahead a few months. Suppose some lower-level FTX employee is accused of having committed some minor fraud with a good ethical justification, one that actually looks reasonable to RP leadership, so they make a statement coming out in defense of that person.
Do you not expect this to create strong feelings of betrayal in previous readers of this post, and a strong feeling of having been lied to?
I broadly agree with your comments otherwise, but in fact in this hypothetical I expect most readers of this post would not feel betrayed or lied to. It’s really uncommon for people to interpret words literally; I think the standard interpretation of the condemnation part of this post will be something along the lines of “stealing $8b from customers is bad” rather than the literal thing that was written. (Or at least that’ll be the standard interpretation for people who haven’t read the comments.)
The negative consequence I’d point to is that you lose the ability to convey information in cases where it matters. If Rethink says “X is bad, we should do something about it” I’m more likely to ignore it than if you said it.
Yeah, sorry, I think you are right that as phrased this is incorrect. I think my phrasing implies I am talking about the average or median reader, who I don’t expect to react in this way.
Across EA, I do expect reactions to be pretty split. I do expect many of the most engaged EAs to have taken statements like this pretty literally and to feel quite betrayed (while I also think that, in general, the vast majority of people will have interpreted the statements as being more about mood-affiliation and not really intended to convey information).
I do think that, at least for me and many people I know, engagement with EA is pretty conditional on exactly this ability: for people in EA to make ethical statements and actually mean them, in the sense of being interested in following through on the consequences of those statements, and of trying to make their many different ethical statements consistent. Losing that ability would, I think, lose a lot of what makes EA valuable, at least for me and many people I know.
Fwiw I’d also say that most of “the most engaged EAs” would not feel betrayed or lied to (for the same reasons), though I would be more uncertain about that. Mostly I’m predicting that there’s pretty strong selection bias in the people you’re thinking of, and you’d have to really precisely pin them down (e.g. maybe something like “rationalist-adjacent highly engaged EAs who have spent a long time thinking about meta-honesty and glomarization”) before it would become true that a majority of them would feel betrayed or lied to.
That’s plausible, though I do think I would take a bet here if we could somehow operationalize it. I do think I have to adjust for a bunch of selection effects in my thinking, and so am not super confident here, but still a bit above 50%.