Thanks for the thoughtful reply to the thoughtful reply. I appreciate the challenge of trying to bridge together two different frameworks you care about and the challenges of discussing it on the forum.
Two points I want to follow up on relate to the metacrisis.
I’ve tried to learn about it and felt like I was reading continental philosophy, where it seemed like clarity wasn’t being optimised for.
I also found the solutions to be very lacking: the only answers I could ascertain were sensemaking and raising awareness, and then some complicated things I couldn’t understand. For example, here is a quote from a transcript of a conversation with Daniel Schmachtenberger on the metacrisis:
So it’s not a solution, it’s a whole ecosystem of solutions that we have to work on, but they all have to be informed by understanding the problems and the interconnection of the whole well enough that you don’t advantage one part while externalizing the cost to the other areas. So I think we’ve got to do… And I appreciate you being available late at night your time, I think we got to do the beginning first part of this thing that is I found meaningful in your work and was valuable for me to make more central of the embedded growth obligation in finance being coupled to diminishing returns in energy that are not easily overcomeable through the current renewable technologies and process or the efficiencies in process and that being a major fucking thing that we have to deal with and that being connected to so many of the other things. I think we did a good job of starting that and maybe starting to get to what some of the transition looks like, some of the cultural parts, some of the advanced policy parts can be our next conversation.
I’d be curious to hear from you, what are some wins the metacrisis movements have had, or proposed solutions?
I wouldn’t consider the metacrisis movement to have any wins yet (according to the EA standard of wins, at least), and it’s hard to point at very concrete proposed solutions yet. It’s mostly in ‘diagnosis mode’.
The main directions of solutions I’m aware of are things like Perspectiva’s antidebates, which are meant to improve collective epistemics (a layer of the metacrisis) and which were also recently adapted by Liv Boeree in her AI safety antidebate, or Life Itself’s Developmental Spaces, which are trying to catalyze culture change (another layer of the metacrisis) (Life Itself’s broader proposed strategy is here). You could consider Schmachtenberger’s third attractor the beginnings of a proposed solution.
Most/all of the metacrisis discourse hasn’t taken short AI timelines into account, so the paths to impact of the proposed solutions tend to assume we have a bunch of time.
All the metacrisis stuff could do with more direction, urgency and clarity IMO, and part of my motivation for building this bridge between them and EA is that hopefully injecting some EA energy might make concrete progress more likely to happen.
Curious if you disagree, but these strike me as red flags (I skimmed these, so let me know if I got anything wrong).
I’m very skeptical of any theory of change that relies on large parts of society behaving differently, unless there is very compelling evidence that this would work. I see this a lot in non-EA vegan advocacy, where there is a claim that if everybody just did x differently (e.g. debated differently), the problem would be solved. Everybody very, very rarely just does anything differently. One of the big values I see in EA is, for example, contributing to companies going cage-free at scale, while the rest of the vegan movement was failing to win individual hearts and minds or developing some social movement theory about how we’re on the precipice of a new way of thinking spreading.
I’ve been curious what the metacrisis folks could produce because I respect some of the people involved and I take the critique seriously that EA doesn’t focus on systemic issues or interrelated problems enough.
But it strikes me that folks looking at systemic/interrelated solutions should grapple with the fact that these are so much harder to do, and that, to me at least, the solutions proposed seem very unlikely to come remotely close to tackling the problem.
Caveat: I do appreciate all of this could just be due to my lack of deep engagement.
I suspect part of what is happening is that systems change advocates are not judging their interventions purely on an individualist consequentialist calculus. If they were purely motivated by a belief that, say, starting a proto-B or intentional community is going to Solve The Metacrisis, I would agree that this is extremely unlikely, making the intervention weak AF.
But seeing it as part of a correlated ecosystem of interventions might make more sense. I’m modelling systems change folks as taking a bet that the general direction they’re going in is correct enough that many others will independently (or somewhat dependently, through engaging with metacrisis literature) reach similar conclusions and do similar things, resulting in emergent larger-scale changes. (There’s a good chance I am modelling metacrisis folk wrong.)
FWIW this doesn’t feel extremely different from longtermist cause areas like AI safety to me. AI safety is also an ecosystem of interventions (technical work + advocacy + governance + education + philosophy + …); if it works, it’ll likely be due to some complicated combination of these that the individual theories of change didn’t fully capture. If an individual or group tells me that they are going to single-handedly save the world from unaligned AI, that is a red flag for me, because the system of AI development is more complex than an individual/group can reckon with IMO.