I suspect part of what is happening is that systems change advocates are not judging their interventions purely on an individualist consequentialist calculus. If they were purely motivated by a belief that, say, starting a proto-B or intentional community is going to Solve The Metacrisis, I would agree that this is extremely unlikely, making the intervention weak AF.
But seeing any one intervention as part of a correlated ecosystem of interventions might make more sense. I’m modelling systems change folks as taking a bet that the general direction they’re going in is correct enough that many others will independently (or somewhat dependently, through engaging with metacrisis literature) reach similar conclusions and do similar things, resulting in emergent larger-scale change. (There’s a good chance I’m modelling metacrisis folks wrong.)
FWIW this doesn’t feel very different from longtermist cause areas like AI safety to me. AI safety is also an ecosystem of interventions (technical work + advocacy + governance + education + philosophy + …); if it works, it’ll likely be due to some complicated combination of these that the individual theories of change didn’t fully capture. If an individual or group tells me they are going to single-handedly save the world from unaligned AI, that’s a red flag for me, because the system of AI development is more complex than any individual or group can reckon with, IMO.