Can you give examples of EAs harshly punishing visible failures that weren’t matters of genuine unethical conduct? I can think of some pretty big visible failures that didn’t lead to any significant backlash (and actually get held up as positive examples of orgs taking responsibility). For example, Evidence Action discovering that No Lean Season didn’t work and terminating it, or GiveDirectly’s recent fraud problems after suspending some of their standard processes to get out money in a war zone. Maybe people have different standards for failure in longtermist/meta EA stuff?
To add sources: some recent examples come to mind that broadly support MHR’s point above re: visible (ex post) failures that don’t seem to have been harshly punished (most responses seem somewhere between neutral and supportive, at least publicly):
Lightcone
Alvea
ALERT
AI Safety Support
EA Hub
No Lean Season
Some failures that came with a larger proportion of critical feedback probably include the Carrick Flynn campaign (1, 2, 3), but even here “harshly punish” seems like an overstatement. HLI also comes to mind (and despite highly critical commentary in earlier posts, I think the highly positive response to this specific post is telling).
============
On the extent to which Nonlinear’s failures relate to integrity / engineering, I think I’m sympathetic to both Rob’s view:
I think the failures that seem like the biggest deal to me (Nonlinear threatening people and trying to shut down criticism and frighten people) genuinely are matters of character and lack of integrity, not matters of bad engineering.
As well as Holly’s:
If you wouldn’t have looked at it before it imploded and thought the engineering was bad, I think that’s the biggest thing that needs to change. I’m concerned that people still think that if you have good enough character (or are smart enough, etc), you don’t need good boundaries and systems.
but I do not think these are necessarily mutually exclusive. Specifically, it sounds like Rob is mainly thinking about the source of the concerns, while Holly is thinking about what to do going forward. And it might be the case that the most helpful actionable steps going forward are things that look more like improving boundaries and systems, regardless of whether you believe the failures specific to Nonlinear were caused by deficiencies in integrity or in engineering.
That said, I agree with Rob’s point that the most significant allegations raised about Nonlinear quite clearly do not fit the category of ‘appropriate experimentation that the community would approve of’, under almost all reasonable perspectives.
I was thinking of murkier cases like the cancellation of Leverage, and people taking small infractions on SBF’s part as foreshadowing of the fall of FTX (which I don’t think was enough of an indication), but admittedly those all involve parties that are guilty of something. Maybe I’m just trying too hard to be fair, or to treat people the way I’d want to be treated when I make a mistake.