I watched most of a YouTube video on this topic to see what it’s about.
I think I agree that “coordination problems are the biggest issue that’s facing us” is an underrated perspective. I see it as a reason for less optimism about the future.
The term “crisis” (in “metacrisis”) makes it sound like it’s something new and acute, but it seems that we’ve had coordination problems for all of history. Though maybe their effects are getting worse because of accelerating technological progress?
In any case, in the video I watched, Schmachtenberger mentioned the saying, “If you understand a problem, you’re halfway there toward solving it.” (Not sure that was the exact wording, but something like that.) Unfortunately, I don’t think the saying holds here. I feel quite pessimistic about changing the dynamics that make Earth so unlike Yudkowsky’s “dath ilan.” Maybe I stopped the Schmachtenberger video before he got to the solution proposals (but I feel like if he had great solution proposals, he should lead with those). In my view, the catch-22 is that you need well-functioning (and sane and compassionate) groups/companies/institutions/government branches to “reform” anything, which is challenging when your problem is that groups/companies/institutions/government branches don’t work well (or aren’t sane or compassionate).
I didn’t watch the entire video by Schmachtenberger, but I got a sense that he thinks something like, “If we can change societal incentives, we can address the metacrisis.” Unfortunately, I think this is extremely hard – it’s swimming upstream, and even if we were able to change some societal incentives, they’d at best go from “vastly suboptimal” to “still pretty suboptimal.” (I think it would require god-like technology to create anything close to optimal social incentives.)
Of course, that doesn’t mean it isn’t worth trying to make things better. If I had longer AI timelines, I would probably consider this the top priority. (Accordingly, I find it odd that this isn’t on the radar of more EAs, since many EAs have longer timelines than I do.)
My approach is mostly taking for granted that large parts of the world are broken, so I recommend working with the groups/companies/institutions/government branches that still function, expanding existing pockets of sanity and creating new ones.
Of course, if someone had an idea for changing the way people consume news, making a better version of social media, creating more of a shared reality and shared priorities about what matters in the world, or improving public discourse, I’d say, “This is very much worth trying!” But it seems challenging to compete for attention against clickbait and outrage-amplification machinery.
EA already has the cause area “improving institutional decision-making.” I think things like approval voting are cool, and I like forecasting just as many EAs do, but I’d probably place more focus on “expanding pockets of sanity” or “building new pockets of sanity from scratch.” “Improving” suggests that change is gradual. My cognitive style might be biased towards black-and-white thinking, but to me it really feels like most institutions/groups/companies/government branches fall into one of two types: “dysfunctional” and “please give us more of that.” It’s pointless to try to improve the ones with dysfunctional leadership or culture (instead, those have to be reformed, or you have to work without them). Focus on what works and create more of it.
The term “crisis” (in “metacrisis”) makes it sound like it’s something new and acute, but it seems that we’ve had coordination problems for all of history. Though maybe their effects are getting worse because of accelerating technological progress?
This would be surprising to me, since so much of tech progress is the creation of social coordination technologies (internet and social media platforms, cell phones and computers, new modes of transport, cheaper food and safer water that simplify the logistics of human movement, new institutions, new language and theories, new educational resources).
Perhaps, however, the explosion in social coordination technologies makes it easier for self-interested and antisocial groups to coordinate for private advantage at the expense of the public.
For example, one lens on American slavery might be a transition through these stages:
1. No pro-slavery social coordination infrastructure/technology.
2. Increasing amounts of pro-slavery coordination enable a horrific trans-Atlantic slave trade.
3. Gradually increasing anti-slavery coordination undermines, then finally destroys, the slave trade.
4. A modern back-and-forth of coordination driven by conflict pressures between human traffickers and anti-trafficking police, as well as between racist police institutions and individuals on one side and anti-racist activists on the other.
In these scenarios of conflict, increased coordination technology is neither intrinsically good nor bad. It’s just a tool that may give a net advantage to the good guys, the bad guys, or neither, depending on context.
That being said, it may be that improving specific forms of coordination technology that help us toward better ends is still what we need most. I’m very receptive to that view. Better voting mechanisms, better science-distillation mechanisms, better institutions, better tax policy, better ways of dealing with the risks of specific forms of technological development, etc.
I guess I just think that insofar as these are concerns of the meta-crisisians, they’re also things we deal with pretty thoroughly in EA through our own lenses. I’d be curious to know what the concept of “meta-crisis” offers in terms of further clarifying the problems or proposing new solutions.
Thank you for engaging with the content in a meaningful way and for taking the time to write up your experience. This answer was particularly helpful: it gave me (a) the sense that there is a productive way to have more discussion on this topic and (b) some ideas for how this might be framed. So thank you very much!
I skimmed the first suggested article, “Technology is not Values Neutral. Ending the reign of nihilistic design”, fairly intensively and found the analysis mostly lucid and free of political buzzwords. There’s definitely a lot worth engaging with there.
Similar to what you write, however, I got a sense of unjustified optimism in the proposed solution, which centers on analyzing second- and third-order effects of technologies during their development. Unfortunately, the article does not appear to acknowledge that predicting such societal effects seems really hard, as evidenced by the observation that people in the past have, anecdotally, usually been wrong about the societal effects of technologies, and that there is no consensus even on the current effects of, e.g., social media on society.