Can you give examples of EAs harshly punishing visible failures that weren't matters of genuine unethical conduct? I can think of some pretty big visible failures that didn't lead to any significant backlash (and actually get held up as positive examples of orgs taking responsibility). For example, Evidence Action discovering that No Lean Season didn't work and terminating it, or GiveDirectly's recent fraud problems after suspending some of their standard processes to get money out in a war zone. Maybe people have different standards for failure in longtermist/meta EA stuff?
To add sources: recent examples that come to mind that broadly support MHR's point above re: visible (ex post) failures that don't seem to be harshly punished (most responses seem somewhere between neutral and supportive, at least publicly):
Lightcone
Alvea
ALERT
AI Safety Support
EA hub
No Lean Season
Some failures that came with a larger proportion of critical feedback probably include the Carrick Flynn campaign (1, 2, 3), but even here "harshly punish" seems like an overstatement. HLI also comes to mind (and despite highly critical commentary in earlier posts, I think the highly positive response to this specific post is telling).
============
On the extent to which Nonlinear's failures relate to integrity / engineering, I think I'm sympathetic to both Rob's view:
I think the failures that seem like the biggest deal to me (Nonlinear threatening people and trying to shut down criticism and frighten people) genuinely are matters of character and lack of integrity, not matters of bad engineering.
As well as Holly's:
If you wouldn't have looked at it before it imploded and thought the engineering was bad, I think that's the biggest thing that needs to change. I'm concerned that people still think that if you have good enough character (or are smart enough, etc), you don't need good boundaries and systems.
but do not think these are necessarily mutually exclusive. Specifically, it sounds like Rob is mainly thinking about the source of the concerns, and Holly is thinking about what to do going forwards. And it might be the case that the most helpful actionable steps going forward are things that look more like improving boundaries and systems, regardless of whether you believe failures specific to Nonlinear are caused by deficiencies in integrity or engineering.
That said, I agree with Rob's point that the most significant allegations raised about Nonlinear quite clearly do not fit the category of "appropriate experimentation that the community would approve of", under almost all reasonable perspectives.
I was thinking of murkier cases like the cancellation of Leverage, and people taking small infractions on SBF's part as foreshadowing of the fall of FTX (which I don't think was enough of an indication), but admittedly those all involve parties that are guilty of something. Maybe I'm just trying too hard to be fair, or to treat people the way I want to be treated when I make a mistake.