I think the point of the metacrisis framing is to look at the underlying drivers of global catastrophic risks, which are mostly various forms of coordination problems around the management of exponential technologies (e.g., AI, biotech, and to some degree fossil fuel engines), and to address those drivers directly rather than trying to solve each issue separately. In particular, there is a worry that solving such issues separately means building surveillance and control powers to manage the exponential tech, which then leads to dystopian outcomes, because that means more centralized power, and power corrupts. Because addressing one of these risks worsens the other, we are in a dilemma that can only be addressed holistically, as "the metacrisis".
That's my broad-strokes summary of the metacrisis and why it's argued to be bad. I think some EAs won't see "more centralized power, and power corrupts" as a problem, because that's what we are building the AI singleton for… others would say EAs are naively mad for thinking that could be an adequate solution, and maybe even dangerous for unilaterally and actively working toward it. I think there is more discussion to be had here, in particular if one has short timelines.