Virtually all of them are about the consequences of EAs gaining some measure of power or influence. This seems like much stronger evidence about the consequences of EAs gaining power than how NonLinear treated two interns.
I don’t think this reasoning adequately factors in the negative effects of having people of poor character gain power and influence. Those effects can be hard to detect until something blows up. So evidence of EAs badly mistreating interns, running fraudulent crypto schemes, and committing sexual misconduct would be germane to the possibility that some other EAs have similar characterological deficits that will ultimately result in their rise to influence and power being a net bad thing.
In many cases, the community did not detect, or appropriately react to, evidence of bad character before things publicly blew up in a year of scandals, and thus did not prevent the individuals involved from gaining or retaining power and influence. That does not inspire confidence that everyone who currently has power and/or influence is not of bad character. Furthermore, although the smaller scandals are, well, smaller, their existence reduces the likelihood that the failure to detect and react to SBF's bad character was a one-off lapse.
In the comment that originated this thread, titotal made a good point about the need for counterfactual analysis. I think this factor is relatively weak for something like AI safety, where the EA contribution is very distinct. But it is a much bigger issue for things like mistreating interns or sexual misconduct, because I am not aware of any serious evidence that EA has these problems at higher than expected* rates.
* there is some subtlety here in what the correct comparison is (e.g., are we controlling for demographics? polyamory?), but I have never seen any such statistical analysis with or without such controls.