I appreciate being counter poked! That was my hope.
The concepts of metarationality, complexity science, and the like really appeal to me. When I have tried to enter into their domain and learn what they advise, I’ve been disappointed mainly for the reasons in my above critique. It means a lot to get an inside answer, thank you.
I’m going to switch gears and now give my own best version of what integral altruism and associated nodes have to offer:
Pre-mortem—Also known as prospective hindsight, you start with the premise that everything went horribly wrong, and then identify what led to that outcome so you can avoid it. It comes from Gary Klein, a psychologist studying field intuition, around 2007. Now adopted enthusiastically by EA. (See also backcasting, whose lineage traces back to sustainability and environmentalism.)
Red teaming—Yes, this is super EA. EA took it from military wargames around 2004. But this is exactly the kind of "holding two views" and explicit searching for alternative frames that integral altruism has pointed towards. Integral altruism would probably call it polarity management, a set of techniques that emerged from systems thinking research. Polarity management means mapping the upsides and downsides of two different goal frameworks and oscillating between them: you make a 2x2 matrix, then make decisions that keep you in the upper half of the matrix for both goal frameworks. Polarity management dates back to Barry Johnson's work in 1992.
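To make the 2x2 bookkeeping concrete, here is a minimal sketch. The pole names, the scoring scale, and the candidate decisions are my own illustrative assumptions, not part of Johnson's method:

```python
# Toy polarity map: two goal frameworks ("poles"), each with upsides and
# downsides. A candidate decision gets a hand-assigned score per pole;
# we keep only decisions that stay in the "upper half" (net positive)
# of BOTH poles. The [-1, 1] scale is an illustrative assumption.

POLES = {
    "rigorous analysis": {"upsides": ["comparable results", "legibility"],
                          "downsides": ["slow", "ignores tacit knowledge"]},
    "holistic judgment": {"upsides": ["context-sensitive", "fast"],
                          "downsides": ["hard to audit", "bias-prone"]},
}

def in_upper_half(scores):
    """scores: {pole_name: score in [-1, 1]}. Upper half = positive on every pole."""
    return all(s > 0 for s in scores.values())

# Hypothetical candidate decisions, scored against each pole.
candidates = {
    "full cost-effectiveness model": {"rigorous analysis": 0.9, "holistic judgment": -0.4},
    "model + structured interviews": {"rigorous analysis": 0.5, "holistic judgment": 0.6},
}

kept = [name for name, scores in candidates.items() if in_upper_half(scores)]
print(kept)  # prints ['model + structured interviews']
```

The point of the exercise is the oscillation: any option that scores strongly on one pole while going negative on the other gets filtered out.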
Adaptive Management—The gist is: when you're managing a system you don't fully understand, every management action should be treated as an experiment to generate information, not just to achieve outcomes. Passive adaptive management is the standard good practice of enacting what seems best, monitoring results, and adjusting. Active adaptive management is deliberately designing multiple competing interventions to discriminate between hypotheses about how the system works, even if that means some of the interventions are suboptimal by your current theory. Developed for ecology by C.S. Holling in 1978 from a systems thinking background. I think this is a pretty important tool and I vaguely feel like it should be discussed more. The int/a term would be probe-sense-respond, moving at the speed of wisdom, or double loop learning. The EA version would be explore/exploit tradeoffs or maybe value of information calculations.
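Since the explore/exploit framing maps onto it so directly, here is a minimal simulation of the passive vs. active distinction. The payoff numbers, exploration rate, and two-option setup are my own illustrative assumptions, not from Holling:

```python
import random

# Active vs. passive adaptive management as an explore/exploit problem.
# The manager cannot see TRUE_PAYOFF; they only observe trial outcomes.

random.seed(0)
TRUE_PAYOFF = {"A": 0.4, "B": 0.6}  # intervention B is secretly better

def trial(action):
    """One management action on the real system; returns 1 on success."""
    return 1 if random.random() < TRUE_PAYOFF[action] else 0

def manage(explore_rate, rounds=2000):
    """explore_rate=0 is passive (always act on the current best guess);
    a positive rate deliberately spends some actions as designed experiments,
    even when they look suboptimal under the current theory."""
    counts = {"A": 0, "B": 0}
    wins = {"A": 0, "B": 0}
    for _ in range(rounds):
        if random.random() < explore_rate:
            action = random.choice(["A", "B"])  # experiment to learn
        else:
            est = {a: wins[a] / counts[a] if counts[a] else 0.5 for a in counts}
            action = max(est, key=est.get)      # exploit best guess
        wins[action] += trial(action)
        counts[action] += 1
    est = {a: wins[a] / counts[a] if counts[a] else 0.5 for a in counts}
    return est, counts

active_est, active_counts = manage(explore_rate=0.2)
```

The active manager keeps sampling both interventions, so its estimates converge on the true payoffs; a purely passive manager can lock onto whichever option happened to look best early.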
Integrative Complexity—A simplistic version of this is making pro/con lists and then having them loaded into active memory when making decisions. Pro/con lists are not as rigorous an analysis as most EA-endorsed methods, but they are extremely practical, and in some sense they enable more thorough judgements than rigorous calculations. The formal version was developed by Philip Tetlock and colleagues in the 1980s, building on psychometrics research. The int/a terms might be decoupling and recoupling. EA might call it scout mindset.
Collective intelligence—Superforecasting research has shown when and how crowd intelligence outperforms experts (and vice versa). Prediction markets are one attempt to implement this at scale to inform decision-making. I think prediction markets are a really exciting new technology that will improve decision making across humanity. I suspect this can be traced back to a variety of sources, but in particular it came from systems thinking research. Int/a might say it reflects how full spectrum knowing trumps expertise/formal analysis (in certain conditions).
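The "in certain conditions" caveat is worth spelling out, so here is a toy demonstration of the aggregation effect. All numbers are synthetic, and the key assumption is that individual errors are independent and unbiased; correlated or systematically biased guessers break the result:

```python
import random
import statistics

# Wisdom-of-crowds toy: many noisy but unbiased estimates of a hidden
# quantity. The crowd mean tends to be far closer to the truth than a
# typical individual, because independent errors cancel out.

random.seed(1)
TRUE_VALUE = 100.0
estimates = [random.gauss(TRUE_VALUE, 25) for _ in range(200)]  # 200 noisy guessers

crowd_error = abs(statistics.mean(estimates) - TRUE_VALUE)
typical_individual_error = statistics.mean(abs(e - TRUE_VALUE) for e in estimates)
```

With 200 independent guessers the crowd's error shrinks roughly with the square root of the crowd size, which is the statistical engine prediction markets try to harness at scale.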
Focusing—I personally have found this to be incredibly useful across many scales of decision-making. I think it makes me wiser and more able to expose and tinker with my underlying reasons. Developed by Eugene Gendlin in 1978, adopted by CFAR, and generally "EA" accepted now in my experience. Completely in the spirit of integral altruism.
Elizabeth Anderson is a philosopher who has worked on the idea of a world composed of incomparable values. There is no reason we would necessarily live in a world composed of values that are comparable, and this might be a more accurate (though inconvenient) reflection of reality. I'm going to butcher this, but here is my summary: Anderson describes shifting between frameworks and appropriate actions according to the values being optimized for. For example, we don't optimize for "grieving," but it is meaningful to us. Examining the shift between optimization and how we actually practice meaningful activities could help us better pinpoint when we should shift out of EA optimization.
I hope these descriptions are a useful translation bridging these two approaches.
I think integral altruism and friends are important because I want to make better, more holistic, more informed choices. I want to take everything into account. I want the ability to be context-dependent and switch to the exact best approach according to changing circumstances. It might be harder to do and harder to describe, but it is what we should strive for. I think we all want this.
edit: I think this part captures it best: "we want to empower individuals & projects with x and y so they can discern for themselves whether x or y is right for their context."

Yes! Exactly! It's really great when advice declares "who this is for." I think int/a and adjacent groups could work towards bringing more clarity when holding such expanded levels of context. Make the context recognizable: what probably matters, what has been missed, what might not matter, and how we can identify the options. When do "the individual altruist, the problem they are working on, and a plethora of other factors" matter, and when don't they make a difference? Clarify, reveal, discern. Everything depends. We are in the state of trying to know what we can best do, under our individual circumstances.