Downvoted. I disagree quite strongly on points one and four, but that’s a discussion for another day; I downvoted because point three is harmful.
If people with a long history of criticising EA have indeed claimed X for a long time, while EA-at-large has said not-X; and X is compatible with the events of the past week, while not-X is not (or is less obviously compatible, or renders those events more unexpected); then rational Bayesians should update towards the people with the long history of criticising EA. Just apply Bayes’ rule: if P(events of the last week | X) > P(events of the last week | not-X), then you should increase your credence in X upon observing the events of the last week.
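To make the direction of that update concrete, here is a minimal numerical sketch (the prior and the likelihoods are entirely made-up illustrative values, not estimates of anything): whenever the likelihood ratio is greater than one, the posterior comes out above the prior.

```python
# Toy Bayesian update. The numbers below are purely illustrative assumptions,
# not estimates of any real probabilities.
prior_X = 0.10                 # P(X): prior credence in the critics' claim
p_E_given_X = 0.30             # P(events of the last week | X)
p_E_given_not_X = 0.05         # P(events of the last week | not-X)

# Law of total probability: P(E) = P(E|X)P(X) + P(E|not-X)P(not-X)
p_E = p_E_given_X * prior_X + p_E_given_not_X * (1 - prior_X)

# Bayes' rule: P(X|E) = P(E|X)P(X) / P(E)
posterior_X = p_E_given_X * prior_X / p_E

print(f"prior     P(X)   = {prior_X:.2f}")      # 0.10
print(f"posterior P(X|E) = {posterior_X:.2f}")  # 0.40 -- higher, because P(E|X) > P(E|not-X)
```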
This reasoning holds whether or not these critics are speaking in bad faith, have personal issues with EA, or are acting irrationally. If being a bad-faith critic of EA provides you with better predictive power than being a relatively-uncritical member of the movement, then you should update so that you are closer to being a bad-faith critic of EA than to being a relatively-uncritical member of the movement. You probably shouldn’t go all the way there (better to stop in the middle, somewhere around ‘good-faith critic’ or ‘EA adjacent’ or ‘EA but quite suspicious of the movement’s leadership’), but updating in that direction is the rational Bayesian thing to do.
To be sure, there’s always a worry that the critics have fudged or embellished their predictions, saying something vaguely critical in the past which has since been sharpened into ‘Several months ago, I predicted that this exact thing would happen!’ This is the ‘predicting the next recession’ effect, and we should be vigilant about it. But while this is definitely happening in a lot of cases, I don’t think it applies to some of the most high-profile ones: relatively concrete predictions were made that EA structures enabled precisely this kind of crisis of power and leadership, and those predictions seem to have been closer to the mark than anything EA-at-large thought might happen.
I think there is a further point here: many EAs seem to feel that their error was less one of prediction than of creativity. It’s not so much that they made the wrong call on a variety of questions as that they simply didn’t ask those questions. This is obviously not true of all EAs, but it is definitely true of some. In cases like this, listening more closely to critics (even bad-faith ones!) can open your mind up to a variety of different positions and reasoning styles that previously were not even present in your mind. This is not always good in itself, of course, but if an EA has reason to think that their failure was one of creativity, then it seems like a very positive way to go.
For more context about my worries: I think it is possible that OP might be including me, and some things I have tweeted, in point three. I have quite a small follower count and nothing I wrote ‘blew up’ or anything, so it’s definitely very unlikely; but in recent days I did tweet several things pretty heavily critical of the movement which pattern-match the description given above very strongly, including pointing out that prior criticisms predicted these events pretty well, and relatively well-known EAs reached out to me about what I had written. Certainly, I ‘felt seen’ (as it were) while reading this post.
I don’t think I am a ‘nefarious actor’, or have a history of ‘hating EA’, but I worry that in some segments of EA (not the whole of EA—some people have gone full self-flagellation, but in some segments) these kinds of terms are getting slung around far too liberally as part of a more general circling-the-wagons trend. And I worry that posts like this one legitimise slinging these terms around in this manner, by encouraging the thought that EA critics who are engaging in some (sometimes fully-justified) ‘told you so’ are just bad actors trying to destroy their tribe. EA needs to be more, not less, open to listening to critics—even bad-faith critics—after a disaster like this one. This is good Bayesianism, but it’s also just proper humility.
I think it is very difficult to litigate point three further without putting certain people on trial and getting into their personal details, which I am not interested in doing and don’t think is a good use of the Forum. For what it’s worth, I haven’t seen your Twitter or anything from you.
I should have emphasized more that there are consistent critics of EA who I don’t think are acting in bad faith at all. Stuart Buck, for example, seems to have been right early on about a number of things.
Your Bayesian argument may apply in some cases but it fails in others (for instance, when X = ‘EAs are eugenicists’).
Just apply Bayes’ rule: if P(events of the last week | X) > P(events of the last week | not-X), then you should increase your credence in X upon observing the events of the last week.
I also emphasize that there are a few people whom I have strong reason to believe are engaged in a “deliberate effort to sow division within the EA movement”, and this was the focus of my comment. This is publicly evidenced (NB: this is a very small part of my overall evidence) by them “taking glee in this disaster or mocking the appearances and personal writing of FTX/Alameda employees.” I do not think a productive conversation is possible in these cases.
I’m not sure what you mean by saying that my Bayesian argument fails in some cases? ‘P(X|E)>P(X) if and only if P(E|X)>P(E|not-X)’ is a theorem in the probability calculus (assuming no probabilities with value zero or one). If the likelihood ratio of X given E is greater than one, then upon observing E you should rationally update towards X.
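For completeness, here is the short derivation behind that claim, using only Bayes’ rule and the law of total probability (and the assumption, stated above, that no probability is zero or one):

```latex
\begin{align*}
P(X \mid E) &= \frac{P(E \mid X)\,P(X)}{P(E \mid X)\,P(X) + P(E \mid \neg X)\,P(\neg X)} \\[4pt]
P(X \mid E) > P(X)
  &\iff P(E \mid X) > P(E \mid X)\,P(X) + P(E \mid \neg X)\,P(\neg X)
  && \text{(dividing by } P(X) > 0\text{)} \\
  &\iff P(E \mid X)\,\bigl(1 - P(X)\bigr) > P(E \mid \neg X)\,P(\neg X) \\
  &\iff P(E \mid X) > P(E \mid \neg X)
  && \text{(since } 1 - P(X) = P(\neg X) > 0\text{)}
\end{align*}
```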
If you just mean that there are some values of X which do not explain the events of the last week, such that P(events of the last week | X) ≤ P(events of the last week | not-X), this is true but trivial. Your post was about cases where ‘this catastrophe is in line with X thing [critics] already believed’. In these cases, the rational thing to do is to update toward critics.