Hey, do I understand correctly that you’re pointing out a problem like “there are lots of problems that will eventually lead to x-risk” + “that’s bad” + “these problems somewhat feed into each other”?
If so, speaking only for myself and not for the entire community or anything like that:
I agree
I personally think that AI risks will simply arrive earlier. If I change my mind and think AI risks will arrive after some of the other risks, I’ll probably change what I’m working on.
Again, I speak only for myself.
(I’ll also go over some of your materials; I’m happy to hear someone has made a serious review of this, and I’m interested.)
I think the point of the metacrisis is to look at the underlying drivers of global catastrophic risks, which are mostly various forms of coordination problems related to the management of exponential technologies (e.g., AI, biotech, and to some degree fossil fuel engines), and to try to address those drivers directly rather than trying to solve each issue separately. In particular, there is a worry that solving these issues separately involves building up surveillance and control powers to manage the exponential tech, which then leads to dystopic outcomes because power becomes more centralized and power corrupts. Because addressing one of these issues opens up the others, we are in a dilemma that can only be addressed holistically, as “the metacrisis”.
That’s my broad-strokes summary of the metacrisis and why it’s argued to be bad. I think some EAs won’t see “more centralized power, and power corrupts” as a problem, because that’s what we are building the AI singleton for… others would say EAs are naively mad for thinking that could be an adequate solution, and maybe even dangerous for unilaterally and actively working on it. I think there is more discussion to be had, in particular if one has short timelines.