I was also surprised by this, and I wonder how many people interpreted “It is acceptable for an EA org to break minor laws” as “It is acceptable for an EA org to break laws willy-nilly as long as it feels like the laws are ‘minor’”, rather than interpreting it as “It is acceptable for an EA org to break at least one minor law ever”.
How easy is it to break literally zero laws? There are an awful lot of laws on the books in the US, many of which aren’t enforced.
Yep, I think this is a big problem.
More generally, I think a lot of EAs pay lip service to the value of people trying weird, ambitious new things, “adopt a hits-based approach”, “if you’re never failing then you’re playing it too safe”, etc.; but then we harshly punish visible failures, especially ones that are the least bit weird. In cases like those, I think the main solution is to be more forgiving of failures, rather than to give up on ambitious projects.
From my perspective, none of this is particularly relevant to what bothers me about Ben’s post and Nonlinear’s response. My biggest concern about Nonlinear is their attempt to pressure people into silence (via lawsuits, bizarre veiled threats, etc.), and “I really wish EAs would experiment more with coercing and threatening each other” is not an example of the kind of experimentalism I’m talking about when I say that EAs should be willing to try and fail at more things (!).
“Keep EA weird” does not entail “have low ethical standards”. Weirdness is not an excuse for genuinely unethical conduct.
I think the failures that seem like the biggest deal to me (Nonlinear threatening people and trying to shut down criticism and frighten people) genuinely are matters of character and lack of integrity, not matters of bad engineering. I agree that not all of the failures in Ben’s OP are necessarily related to any character/integrity issues, and I generally like the lens you’re recommending for most cases; I just don’t think it’s the right lens here.