I think it’s not quite right that low trust is costlier than high trust. Low trust is costly when things are going well. There’s kind of a slow burn of additional cost.
But high trust is very costly when bad actors, corruption or mistakes arise that a low trust community would have preempted. So the cost is lumpier, cheap in the good times and expensive in the bad.
(I read fairly quickly so may have missed where you clarified this.)
To re-frame this:
best: high-trust / good actors
good: low-trust / good actors
manageable: low-trust / bad actors
devastating: high-trust / bad actors
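The "slow burn vs lumpy shock" cost structure above can be made concrete with a toy simulation. All the numbers below (overhead, shock probability, shock size) are made-up illustrative assumptions, not estimates of anything real:

```python
import random

def cumulative_cost(periods, overhead, shock_prob, shock_cost, seed=0):
    """Total cost over `periods`: a fixed per-period overhead,
    plus an occasional shock landing with probability `shock_prob`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(periods):
        total += overhead
        if rng.random() < shock_prob:
            total += shock_cost
    return total

# Low trust: a steady burn of vetting/verification overhead; shocks mostly preempted.
low_trust = cumulative_cost(100, overhead=1.0, shock_prob=0.01, shock_cost=10.0)

# High trust: almost no overhead, but the rare shock is devastating.
high_trust = cumulative_cost(100, overhead=0.1, shock_prob=0.05, shock_cost=50.0)
```

With parameters like these, the high-trust regime is cheaper in most runs but occasionally far more expensive, which is exactly the lumpiness being claimed.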
High-trust assumes both good motivations and competence. High trust is nice because it makes things go smoother. But if there are any badly motivated or incompetent actors, insisting on high trust creates conditions for repeated devastating impacts. To further insist on high trust after significant shocks means people who no longer trust good motivations and competence leave.
FTX was a high-trust/bad actor shock event. The movement probably needs to operate for a bit in a low-trust environment to earn back the conditions that allow high-trust. Or, the movement can insist on high-trust at the expense of losing members who no longer feel comfortable or safe trusting others completely.
I was arguing to the contrary: the inquiries post-FTX have shown very little untrustworthy behaviour in the community as a whole. So if anything we should regard this as a validation of our trust.
Certainly I wouldn’t trust FTX after this, but I think extending the reduced trust to the rest of the community in any significant way is mistaken.
Perhaps you’re worried about more things like FTX happening in the future. To that I would just accept the risk: being high-trust means you can get suckered. If it doesn’t happen too often, maybe it’s worth the cost. And right after such an event is the classic time for people to greatly overestimate how likely similar events are (or how likely we should have considered them to be in the past).

The key actors involved in FTX were extremely close to the EA community. SBF became involved after a 1:1? conversation with Will MacAskill, worked at CEA for a short while, held prime speaking slots at EAG, and set up and funded a key organization (FTX fund). Caroline held an officer position in her university EA group. It’s fair to say the people at the center of the fraud were embedded and more tightly aligned with the EA movement than most people connected with EA. It’s a classic example of high-trust / bad actors—it only takes a few of them to cause serious damage.
Is this just a black swan event? Perhaps. Are there more bad actors in the EA community? Perhaps.
You are certainly welcome to keep treating EA as a high-trust community, but others have good reason not to.
Only the last point seems concerning to me, because Sam was working closely with figures very central to EA at a time when some of the red flags should already have been visible. By contrast, I think it’s unreasonable to hold anyone responsible for another person’s negative impact if you motivate them to do something in a 1-on-1 conversation, or if some “bad actor” briefly worked at [central EA organization]. We can’t be responsible for the behavior of everyone we ever interact with! It’s not always possible to vet people’s character in a single conversation or even during a brief period of working together. (And sometimes people get more corrupted over time – though I’d expect there to be early warning signs pretty much always.) I think the EA community made big mistakes with respect to FTX, but that’s precisely because many EAs interacted with Sam over many years and worked closely with him before the collapse.
“[T]he inquiries post-FTX” have been largely deflected due to legal concerns. I’m not going to second-guess the presumed advice of the relevant lawyers here, but that’s hardly the same as there having been a public and independent investigation revealing the potential concerns to be unfounded. Given the absence of any real information (other than knowing that someone senior heard that SBF was under criminal investigation, shared that report, and got muted responses), the plausible narratives in my view range from no errors, to ordinary errors of judgment, to severe errors of judgment (which would provide significant reason to believe that the relevant actors’ judgment is “untrustworthy” in the sense of not being reliable), to willful blindness.
The really tricky issue is something like second impact syndrome, where death or severe injury occurs because the head was hit a second time before healing from the first.
So I would be a little more careful with EA for a few years.
I guess I find your proposal that EA operate in low-trust mode for a bit to win back the conditions that allow high trust confusing, because I would expect that shifting to low-trust mode and then shifting back to high trust would be very hard, as in almost never the kind of thing that happens.
EA started in low-trust mode (e.g. iirc early GiveWell was even suspicious of the notion of QALYs, which is why they came up with their own metrics) and gradually shifted towards higher trust. So it seems plausible to me that we can go back to low-trust mode and slowly win back trust, though maybe this will take too long and EA will die or fade into irrelevance before then.
This is quite interesting and, as a former hedge fund manager, reminds me of a short option position: you earn time decay (option premium) when things are going well or stable, and then once in a while you take a big hit (and a lot of people/orgs do not survive the hit). This is not a strategy I follow, from a longer-term, risk-adjusted-return point of view. I would rather not be short a put option, but instead be long a call option and try to minimise my time decay or option premium. The latter is more work and more time-consuming, but as a hedge fund manager I managed to construct very large option-structured positions with almost no time decay. In EA terms, some of the ways I would structure long call options on EA while minimising risk would be to look for strong founders and teams, neglected areas with large convex upside, and work that is tractable and cost-effective even with base-case delivery (GWWC, Founders Pledge and Longview were good examples of this), and to continue funding promising ones until other funders come in.
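For readers less familiar with options, the payoff asymmetry described here can be sketched in a few lines. The strike and premium values are illustrative assumptions only; this is a sketch of the analogy, not a pricing model:

```python
def short_put_payoff(spot, strike=100.0, premium=5.0):
    # The seller pockets the premium, but pays out (strike - spot) if the
    # price falls below the strike: small steady income, rare large loss.
    return premium - max(strike - spot, 0.0)

def long_call_payoff(spot, strike=100.0, premium=5.0):
    # The buyer pays the premium (the "time decay" cost), and gains
    # (spot - strike) if the price rises above the strike: small steady
    # cost, rare large gain.
    return max(spot - strike, 0.0) - premium
```

At a spot of 120 the short put keeps its 5.0 of premium, while at 60 it loses 35.0; the long call has the mirror-image profile, which is the capped-downside, convex-upside shape being recommended.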
As a general observation, I think EA overemphasises expected return and underemphasises risk-adjusted return, especially when in some cases sensible risk management can reduce risk a lot without reducing expected return much (e.g. ensuring we have experienced operational, legal, regulatory and risk-management expertise). This may have something to do with our very long impact time horizons and EAs’ preference to work things out from first principles.
I also want to emphasise that it does not always have to be bad actors; it could also be people acting in good faith outside their level of expertise and/or competence. And trust, perhaps like market cycles, can be oversupplied at some times and in some areas, and undersupplied at others.
I think of it in terms of “Justified trust”. What we want is a high degree of justified trust.
If a group shouldn’t be trusted, but is, then that would be unjustified trust.
We want to maximize justified trust and minimize unjustified trust.
If trust isn’t justified, then you would want correspondingly lower levels of trust.
Generally, [unjustified trust] < [low-trust, not justified] < [justified trust]
No, I didn’t talk about this. I agree that you can frame low-trust as a trade where you exchange lower catastrophe risk for higher ongoing costs.
Through that lens a decent summary of my argument is:
People underestimate the ongoing costs of low-trust
We are not actually at much risk of catastrophe from being high-trust
Therefore we should stick with high trust
Yep, I agree. I see it as high trust > low trust, but being misled about which one you are in isn’t a good move.
I think it’s more clear as a two-by-two matrix, with trustworthiness vs trust. In rough order of goodness:
High Trust, High Trustworthiness: Marriages, friends, some religions, small teams. Very good.
Low Trust, High Trustworthiness: The experience of many good people in normal countries, large corporations, etc. Wasteful, frustrating, and maybe the best people will leave.
Low Trust, Low Trustworthiness: Much of the world is like this. Not ideal at all but we have systems to minimise the damage.
High Trust, Low Trustworthiness: Often caused by new entrants exploiting a poorly protected community. Get scammed or robbed.
And so I claim: we have High Trustworthiness, so moving down the column to Low Trust is just shooting ourselves in the foot.