I think it is very difficult to litigate point three further without putting certain people on trial and getting into their personal details, which I am not interested in doing and don’t think is a good use of the Forum. For what it’s worth, I haven’t seen your Twitter or anything from you.
I should have emphasized more that there are consistent critics of EA who I don’t think are acting in bad faith at all. Stuart Buck, for example, seems to have been right early on about a number of things.
Your Bayesian argument may apply in some cases, but it fails in others (for instance, when X = EAs are eugenicists).
Just apply Bayes’ rule: if P(events of the last week | X) > P(events of the last week | not-X), then you should increase your credence in X upon observing the events of the last week.
I also emphasize that there are a few people who I have strong reason to believe are engaged in a “deliberate effort to sow division within the EA movement,” and this was the focus of my comment. This is publicly evidenced (NB: this is a very small part of my overall evidence) by their “taking glee in this disaster or mocking the appearances and personal writing of FTX/Alameda employees.” I do not think a productive conversation is possible in these cases.
I’m not sure what you mean by saying that my Bayesian argument fails in some cases? ‘P(X|E)>P(X) if and only if P(E|X)>P(E|not-X)’ is a theorem in the probability calculus (assuming no probabilities with value zero or one). If the likelihood ratio of X given E is greater than one, then upon observing E you should rationally update towards X.
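To make the direction of the update concrete, here is a minimal sketch of the theorem with illustrative numbers (the priors and likelihoods are assumptions chosen for the example, not values from this discussion):

```python
from fractions import Fraction

def posterior(prior, like_x, like_not_x):
    """Bayes' rule: P(X|E) = P(E|X)P(X) / [P(E|X)P(X) + P(E|~X)P(~X)]."""
    num = like_x * prior
    return num / (num + like_not_x * (1 - prior))

prior = Fraction(1, 10)  # illustrative prior credence in X

# Likelihood ratio > 1: E is more probable if X is true, so the
# posterior exceeds the prior (update toward X).
p_up = posterior(prior, Fraction(1, 2), Fraction(1, 10))
assert p_up > prior  # 5/14 > 1/10

# Likelihood ratio < 1: E is less probable if X is true, so the
# posterior falls below the prior (update away from X).
p_down = posterior(prior, Fraction(1, 10), Fraction(1, 2))
assert p_down < prior  # 1/46 < 1/10
```

Exact rational arithmetic makes the inequality checks unambiguous; the point is only that the sign of the update tracks whether the likelihood ratio is above or below one.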
If you just mean that there are some values of X which do not explain the events of the last week, such that P(events of the last week | X) ≤ P(events of the last week | not-X), this is true but trivial. Your post was about cases where ‘this catastrophe is in line with X thing [critics] already believed’. In these cases, the rational thing to do is to update toward critics.