While I generally agree that they almost certainly have more information on what happened (which is why I’m not fully confident in this theory), my main reason is that AI safety as a cause area largely got away with incredibly weak standards of evidence for a long time, up until the deep learning era began around 2019, especially with all the evolution analogies, and even now it still tends to have very low standards (though I do believe it’s slowly improving). This probably influenced a lot of EA safetyists like Ilya, who almost certainly absorbed the norms of the AI safety field, one of which is that a very low standard of evidence suffices to claim big things, and that’s going to conflict with corporate/legal standards of evidence.
But I don’t think most people who hold influential positions within EA (or EA-minded people who hold influential positions in the world at large, for that matter) are likely to be that superficial in their analysis of things. (In particular, I’m strongly disagreeing with the idea that it’s likely that the board “basically had no evidence except speculation from the EA/LW forum”. I think one thing EA is unusually good at – or maybe I should say “some/many parts of EA are unusually good at” – is hiring people for important roles who think for themselves and have generally good takes about things and acknowledge the possibility of being wrong about stuff. [Not to say that there isn’t any groupthink among EAs. Also, “unusually good” isn’t necessarily that high of a bar.])
I agree with this weakly, in the sense that being high up in EA is at least a slight update toward them actually thinking things through and being able to make real cases. My disagreement is that this effect is probably not strong enough to wash out the cultural effects of operating in a cause area where the only standard of evidence one needs to meet is long-winded blog posts, and where people get rewarded anyway.
Also, the board second-guessed its decision, which is evidence for the theory that they couldn’t make a case that actually met the standard of evidence for a corporate/legal setting.
If it were any other cause, say GiveWell or some other part of EA, I would trust them much more to have good reasons. But AI safety has been so reliant on very low to non-existent standards of evidence and epistemics that they probably couldn’t explain themselves in a way that would satisfy the strictness of a corporate/legal standard of evidence.
Edit: The firing wasn’t because of safety related concerns.
Why did you unendorse?
I unendorsed primarily because, apparently, the firing wasn’t over safety concerns, though I’m not sure this is accurate.