This helped clarify things a bit, thanks - not least because I’ve interacted with some of you lately.
I’m still mostly skeptical because of the number of implicit conjunctions (a solution that solves A & B & C & D & E seems less plausible to exist than several specialised solutions), how common it is for extremely effective ideas to be specialised (rather than the knock-on result of a general idea), and the vague similarity with “Great Idea” death spiral traits. All of this said, I’m in favor of keeping the discussion open. Fox mindset rules.
For those who need clarification, I think I can distinguish four non-exclusive example avenues of what “solving the metacrisis” could look like (all the names are 100% made up, but useful for me to think about it):
1-“Systemism” postulates that the solution has to do with complex systems. You really want to solve AI X-risk? Don’t try to build OpenAI, says the systemist; instead, pursue:
1.1-One gigabrained single intervention that truly addresses all the problems at the same time (maybe “Citizen Assemblies but better”)
1.2-A conjunction of interventions that mutually reinforce each other in a feedback loop (maybe “Citizen Assemblies” + “UBI” + “Empowerment in social help”)
Will such an intervention solve one problem or all of them? Again, opinions diverge:
1.3-Each intervention should solve one problem - a bigger one, and to a better extent, than conventional solutions do.
1.4-The intervention should solve all the problems at once.
1.5-The intervention should solve one problem, but you can “copy-paste” it elsewhere; it has high transferability.
2-“Introspectivism” postulates that the solution has to do with changing the way we relate to ourselves and to the rest of the world. You really want to solve AI X-risk? Again, don’t build OpenAI; instead go meditate, learn NVC, use Holacracy.
3-“Integralism” postulates that the solution is to incorporate the criticisms of all the different paradigms. You really want to solve AI X-risk? Make your plans consistent with Marxism, Heideggerian phenomenology and Buddhism, and you’ll get there.
4-“Culturalism” postulates that cultural artifacts (workshops, books, a social movement, and/or memes in general) will succeed in changing how people act and coordinate, such that reducing X-risks becomes feasible. Don’t try to build OpenAI; think about cultural artifacts - but not books about AI risk, more like books about coordination and communication.
Separately, I think discussing disagreements around the meta-crisis is going to be hard.
Why? I think there is a disparity in relevance heuristics between EA and… well, the rest of the world. EA has analytical and pragmatic relevance heuristics. “Meta-crisis people” have a social relevance heuristic, and some other streams of thought have a phenomenological relevance heuristic.
Think of Cluster Headaches. I think many people have tried to say that Cluster Headaches could matter a great deal. But they said things we (or at least, I) didn’t necessarily understand, like “it’s very intense. You become pain. You can’t understand”. Then, decades later, someone says “maybe suffering is exponential, not linear, and CH is an intensity where this insight is very clear”. And then (along with other numerate considerations) we progressively started caring (or at least, I started caring).
All these systems can communicate with EA if and only if they succeed in formalizing / pragmatizing themselves to some degree, and I personally think this is what people like Jan Kulveit, Andres G. Emilsson, or Richard Ngo are (inadvertently?) doing. I’d suggest doing this for the meta-crisis (math > dialogues); otherwise the effort may backfire.