Curious if you disagree, but these strike me as red flags (I skimmed these, so let me know if I got anything wrong).
I’m very skeptical of any theory of change that relies on large parts of society behaving differently, unless there is very compelling evidence that this would work. I see this a lot in non-EA vegan advocacy, where there is a claim that if everybody just did x differently (e.g. debated differently), everything would change. Everybody very, very rarely just does anything differently. One of the big values I see in EA is, for example, contributing to companies going cage-free at scale, while the rest of the vegan movement was failing to win individual hearts and minds or was developing some social movement theory about how we’re on the precipice of a new way of thinking spreading.
I’ve been curious what the metacrisis folks could produce because I respect some of the people involved and I take the critique seriously that EA doesn’t focus on systemic issues or interrelated problems enough.
But it strikes me that folks pursuing systemic/interrelated solutions should grapple with the fact that these are so much harder to pull off, and that, to me at least, the solutions proposed seem very unlikely to come close to tackling the problem.
Caveat: I do appreciate all of this could just be due to my lack of deep engagement.
I suspect part of what is happening is that systems change advocates are not judging their interventions purely on an individualist consequentialist calculus. If they were purely motivated by a belief that, say, starting a proto-B or intentional community is going to Solve The Metacrisis, I would agree that this is extremely unlikely, making the intervention weak AF.
But seeing it as part of a correlated ecosystem of interventions might make more sense. I’m modelling systems change folks as taking a bet that the general direction they’re going in is correct enough that many others will independently (or somewhat dependently through engaging with metacrisis literature) reach similar conclusions and do similar things, resulting in emergent larger-scale changes. (there’s a good chance I am modelling metacrisis folk wrong)
FWIW this doesn’t feel extremely different from longtermist cause areas like AI safety to me. AI safety is also an ecosystem of interventions (technical work + advocacy + governance + education + philosophy + …); if it works, it’ll likely be due to some complicated combination of these that the individual theories of change didn’t fully capture. If an individual or group tells me that they are going to single-handedly save the world from unaligned AI, that is a red flag for me, because the system of AI development is more complex than an individual/group can reckon with IMO.