Yeah, sorry, I think you are right that as phrased this is incorrect. My phrasing implies I am talking about the average or median reader, whom I don’t expect to react in this way.
Across EA, I do expect reactions to be pretty split. I expect many of the most engaged EAs to have taken statements like this pretty literally and to feel quite betrayed (while I also think that, in general, the vast majority of people will have interpreted the statements as being more about mood affiliation and not really intended to convey information).
I do think that, at least for me and many people I know, engagement with EA is pretty conditional on exactly this ability: for people in EA to make ethical statements and actually mean them, in the sense of being interested in following through on the consequences of those statements, and to try to make many different ethical statements consistent with each other. Losing that ability would, I think, lose a lot of what makes EA valuable.
Fwiw I’d also say that most of “the most engaged EAs” would not feel betrayed or lied to (for the same reasons), though I would be more uncertain about that. Mostly I’m predicting that there’s pretty strong selection bias in the people you’re thinking of, and that you’d have to pin them down really precisely (e.g. something like “rationalist-adjacent highly engaged EAs who have spent a long time thinking about meta-honesty and glomarization”) before it would become true that a majority of them would feel betrayed or lied to.
That’s plausible, though I do think I would take a bet here if we could somehow operationalize it. I do have to adjust for a bunch of selection effects in my thinking, and so am not super confident here, but I’d still put it a bit above 50%.