Thank you for this post. I think I broadly agree. I feel:
a) some resistance to your framing and valuation of investment in community building in some of its forms. I appreciate your arguments and don’t really dispute them; it’s more a question of how I would weigh them against other factors.
Namely, one lesson I take from the FTX affair and other issues that have surfaced in our movement in recent months and years is that a more open, inclusive, dynamic community, one that ‘recruits’ from more diverse socio-economic and cultural backgrounds, could give our movement more resilience. This holds even if new entrants largely stay on the periphery of key decision-making while they build a clear model of how EA looks at the world and get a better sense of what intervening in their fields of interest might look like (joining as students or early-career professionals rather than high-ranking experts). The resilience would come from broadening the range of perspectives on what we collectively treat as normal: acceptable ways of securing capital, acceptable degrees of power concentration and risk-taking, and so on. (Note that I’m not advocating against centralised functioning for certain aspects of the cooperative work we set out to do.)
If anything, the fact that ‘core EA’ seems neither to have seen the FTX debacle coming nor to be too troubled by a select few deciding how money gets allocated, despite the stakes and the uncertainty and volatility of everything we’re tinkering with, suggests to me that we should invest in what I’ll call greater perspectival pluralism, for want of a better descriptor. I realise this must be abhorrent reading for some people. I’m not saying expertise should not play a major role in decision-making in EA; indeed, a lot of decision-making shouldn’t be democratic in the sense that everyone’s opinion, however well or poorly informed, counts equally. But it’s worth considering that a lot of the steering of the movement exceeds the bounds of technical decisions best made by people who really know their field. Norms around behaviour and risk-taking are examples of what I mean by ‘steering of the movement’.
b) curiosity about how you construe the increased ‘moderation’ you find yourself drawn to. It would be interesting to flesh out what this entails. My sense is that this ‘moderation’ is essentially a response to the ‘naïve optimising’ discussed in some previous comments. It’s a sound response, in my view, and it deserves to be broken down into something with more descriptive signal.
The way I would propose to conceive of this ‘moderation’ goes something like this: excessively optimising for an outcome undermines the resilience of the wider system (whichever system we are looking at in a given instance) by removing the ‘inefficiencies’, such as redundancies, that provide buffers against shocks and distortions, including unknown unknowns. Decreased resilience is bad: it is incompatible with sustained robustness, because it displaces pressure, often in poorly monitored ways that end up blowing up in our faces unexpectedly, or worse, in the faces of other stakeholders who had been minding their own business all along.

To be clear, my understanding is that you are mostly considering embracing more moderate views regarding EA’s potential to achieve a flawless record, to be a desirable social bubble, and so on. I think my take applies there too (and again, I agree with your general sentiment and am simply trying to clarify what moderation might mean in this context).
In other words, welcoming some degree of inefficiency could actually be a good way to translate the epistemic humility most of us strive for (and the moderation you speak of) into the design and implementation of our initiatives. I would be keen to see what this could look like from a design or operational-research perspective, if anyone has come up with even tentative models for it.
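To make the trade-off concrete, here is a minimal Monte Carlo sketch of the kind of toy model I have in mind. It is entirely illustrative, not drawn from the post: all the parameters (demand level, shock probability, shock size, buffer levels) are made-up assumptions, and the point is only the qualitative pattern.

```python
# Toy Monte Carlo sketch of the efficiency/resilience trade-off.
# All parameters are illustrative assumptions, not empirical estimates.
import random

def simulate(buffer: float, periods: int = 100_000, seed: int = 0):
    """Run `periods` rounds; return (capacity paid for each period, failure rate).

    buffer: redundant capacity held above expected demand (the 'inefficiency').
    """
    rng = random.Random(seed)
    demand = 100.0
    capacity = demand * (1 + buffer)
    failures = 0
    for _ in range(periods):
        available = capacity
        if rng.random() < 0.05:                      # rare shock: 5% of periods
            available *= 1 - rng.uniform(0.0, 0.6)   # lose up to 60% of capacity
        if available < demand:
            failures += 1                            # unmet demand: the 'blow-up'
    return capacity, failures / periods

for buf in (0.0, 0.2, 0.5):
    cost, fail = simulate(buf)
    print(f"buffer={buf:.0%}: cost per period={cost:.0f}, failure rate={fail:.2%}")
```

On these made-up numbers, the zero-buffer portfolio is the cheapest but fails in essentially every shock period, while the portfolios carrying 20% or 50% redundant capacity pay more every period and fail progressively less often. The cost of slack is visible and continuous; the cost of lean efficiency is invisible until it arrives all at once.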
So my sense is that we should:
- be wary of endeavours like FTX that are willing to compromise important aspects of how we’re trying to impact the world for the sake of maximal efficiency, and
- encourage people to build sustainably robust impact across the board rather than achieving Everything Right Away With Maximal ROI but with serious tunnel vision.

In other words, under conditions of uncertainty, valuing effectiveness means sacrificing some efficiency (as a side note, I suppose this is one of the underlying assumptions behind point (a) above). Obviously this wouldn’t hold if we were omniscient, but I don’t think any of us is claiming that, especially post-FTX. So until then, I think we should value slack more highly across the board.