I’m really pleased to see so many people coalescing around this post. I’m enormously blessed to be amongst people thinking about the big problems with such openness, passion, and energy.
Int/a correctly identifies that EA has imperfections. But the proposals (replacing specificity with multidimensionality, putting process over goals, substituting metrics with sensing) don’t fix those imperfections. They mostly obfuscate them by disallowing comparison and avoid failure by never choosing between options. I think the main problem int/a has with EA is not an EA problem but an imperfect-world problem.
EA’s single-minded focus on specificity, measurability, and goal-orientedness is the painful, imperfect method that turns values and caring and messy big problems into singular choices and actions. Yes, the metrics are always flawed. Yes, you cut off possibilities when you commit to a direction. That’s the cost of actually acting in the world, and I don’t think int/a has provided a better path forward.
I may be being ungenerous, but my aim is to cut through to my biggest concern and look for correction. What int/a offers is staying in the ideation phase. More intuition, more holism, more systems thinking, more openness, more frames. Every single recommendation is widening, sourcing, and uncontroversial. These are a vital part of the opening process. But as far as I can tell, int/a does not move past enriching understanding, and does not seem concerned with what that is giving up. At some point the unpleasant part has to come: splitting apart, letting go of options, committing to something that might be wrong. EA isn’t limiting itself to specificity and comparison out of compulsion. It sees these as necessary stages. Pleading for more modalities does not get you to a tradeoff-free world! At some point you have to demonstrate a better outcome.
The complexity science and metacrisis communities have said “see the whole system, keep entanglements, don’t reduce” and then hit the entirely predictable problem of being unable to make much headway. They have produced real analytical tools, but the endpoint actions remain sparse. Is EA’s predisposition towards action more harmful than int/a’s “moving at the speed of wisdom”? I genuinely think EA’s greater bias toward action has produced more good than harm. But I can see the argument for change.
What int/a does do well, and what EA should listen to, is unearthing root problems, catching incomplete definitions, calling for opening up, and providing more frames. Int/a can teach us greater things to get narrowed toward. I don’t think it’s best seen as a competing method. It needs to be handed off to EA-style problem-solving, and should be resurfaced periodically too.
Int/a is still new, so my first-level analysis is that it’s okay for it to still be in the ideation phase. My second-level analysis is that AI timelines might be short, so maybe this phase needs to be cut short.
Thanks Tandena, I appreciate this poke! The following is my counter-poke.
“They mostly obfuscate them by disallowing comparison and avoid failure by never choosing between options.”
We are not interested in disallowing comparison and never choosing between options. Our move is to be aware of more options to choose from. To integrate x and y is not to never choose between them, but to have both at your disposal so you can apply either x or y depending on which works better in a given context.
One way we are distinct from EA is that we are explicitly striving to be metarational, which means recognizing that the best frame, idea, tool, or action is context-dependent: dependent on the individual altruist, the problem they are working on, and a plethora of other factors.
I claim that the current explicit tools & implicit approaches of EA are not best suited to all the contexts that changemakers today can find themselves in.
For example, take the ‘Decoupling & Recoupling’ principle. Taking the most zoomed-out frames (e.g. the metacrisis) benefits understanding but hurts tractability, and zoomed-in frames have the opposite tradeoff. We want to consciously choose this tradeoff based on a judgement of which is better given what we’re doing, rather than only being aware of the zoomed-in frame.
int/a as a superstructure is not committing to x or y, because there is no universal truth of whether x or y is correct; it depends on the person and the problem. But this does not prevent individuals or projects within int/a from committing to x or y. We want to empower individuals & projects with both x and y so they can discern for themselves whether x or y is right for their context.
And it does not preclude a synthesis of x and y leading to novel, concrete directions for solving the world’s most complex & pressing problems.
So yeah, I appreciate that “staying in the ideation phase” is a possible failure mode, but I don’t consider it intrinsic to the project.
(edit: looking at this again, this is not the only way to interpret ‘integrate’, and it would probably be good to get clearer about what we mean by the word. For example, another angle on integrating x and y could be to find a specific action z that is robust to both x and y being true)
I appreciate being counter-poked! That was my hope.
The concepts of metarationality, complexity science, and the like really appeal to me. When I have tried to enter into their domain and learn what they advise, I’ve been disappointed, mainly for the reasons in my critique above. It means a lot to get an inside answer, thank you.
I’m going to switch gears and now give my own best version of what integral altruism and associated nodes have to offer:
Pre-mortem—Also known as prospective hindsight: you start with the premise that everything went horribly wrong, and then identify what led to that outcome so you can avoid it. It comes from a psychologist studying field intuition around 2007, and it has now been adopted enthusiastically by EA. (See also backcasting, whose lineage traces back to sustainability and environmentalism.)
Red teaming—Yes, this is super EA. EA took it from military wargames around 2004. But this is exactly the kind of “holding two views” and explicit searching for alternative frames that integral altruism has pointed towards. Integral altruism would probably call it polarity management, a set of techniques which emerged from systems thinking research. Polarity management is mapping the upsides and downsides of two different goal frameworks and oscillating between them. You make a 2x2 matrix and then make decisions which keep you in the upper half of the matrix for both goal frameworks. Polarity management dates back to 1992.
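To make the 2x2 concrete, here’s a toy sketch (the two goal frameworks, the scores, and the -1..1 “how well does this decision honor the pole” convention are all my own illustration, not anything from the polarity management literature):

```python
# A polarity map: each pole's upsides and downsides, per the 2x2 above.
# The poles and their contents are invented for this example.
polarity_map = {
    "measurable impact": {
        "upsides": ["comparability", "accountability"],
        "downsides": ["streetlight effects", "narrowed options"],
    },
    "systemic understanding": {
        "upsides": ["root causes", "more frames"],
        "downsides": ["analysis paralysis", "no clear action"],
    },
}

def in_upper_half(decision_scores: dict[str, float]) -> bool:
    """A decision 'stays in the upper half' of the map when it is
    net-positive on every pole, i.e. it captures each pole's upsides
    more than its downsides (scores are on an invented -1..1 scale)."""
    return all(score > 0 for score in decision_scores.values())

print(in_upper_half({"measurable impact": 0.4, "systemic understanding": 0.2}))   # True
print(in_upper_half({"measurable impact": 0.6, "systemic understanding": -0.3}))  # False
```

The “oscillating” part is what this check enforces over time: the moment a decision goes net-negative on one pole, you swing back toward that pole’s upsides.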
Adaptive Management—The gist is: when you’re managing a system you don’t fully understand, every management action should be treated as an experiment to generate information, not just to achieve outcomes. Passive adaptive management is the standard good practice of enacting what seems best, monitoring results, and adjusting. Active adaptive management is deliberately designing multiple competing interventions to discriminate between theories of how the system works, even if that means some of the interventions are suboptimal by your current theory. Developed for ecology by C.S. Holling in 1978 from a systems thinking background. I think this is a pretty important tool and I vaguely feel it should be discussed more. The int/a terms would be probe-sense-respond, moving at the speed of wisdom, or double-loop learning. The EA version would be explore/exploit tradeoffs or maybe value-of-information calculations.
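Since the EA translation is explore/exploit, here’s a toy bandit sketch of the “every action is also an experiment” idea. This is my analogy, not Holling’s actual methodology, and the interventions and success rates are invented:

```python
import random

interventions = ["A", "B", "C"]
true_success_rate = {"A": 0.30, "B": 0.50, "C": 0.45}  # hidden from the manager
beliefs = {name: [1, 1] for name in interventions}     # Beta(1, 1) priors

random.seed(0)
for _ in range(500):
    # Thompson sampling: draw a plausible success rate from each posterior
    # and act on the best draw, so uncertain interventions still get tried.
    sampled = {n: random.betavariate(a, b) for n, (a, b) in beliefs.items()}
    choice = max(sampled, key=sampled.get)
    succeeded = random.random() < true_success_rate[choice]
    # Every action doubles as an experiment: update the posterior.
    beliefs[choice][0] += succeeded
    beliefs[choice][1] += not succeeded

for name, (a, b) in beliefs.items():
    print(f"{name}: tried {a + b - 2} times, posterior mean {a / (a + b):.2f}")
```

Active adaptive management goes a step further than this sketch: it would sometimes pick an intervention specifically because the competing theories disagree about it, not just because it might pay off.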
Integrative Complexity—A simplistic version of this is making pro/con lists and then keeping them loaded in active memory when making decisions. Pro/con lists are not as rigorous as most EA-endorsed methods, but they are extremely practical, and in some sense enable more thorough judgements than rigorous calculations. Apparently the formal version was developed by Philip Tetlock in the 1980s from psychometrics research. The int/a terms might be decoupling and recoupling. EA might call it scout mindset.
Collective intelligence—Superforecaster research has shown when and how crowd intelligence outperforms experts (and vice versa). Prediction markets are an example of trying to implement this at scale to inform decision-making. I think prediction markets are a really exciting new technology that will improve decision-making across humanity. I suspect this can be traced back to a variety of sources, but in particular it came from systems thinking research. Int/a might say it reflects how full-spectrum knowing trumps expertise/formal analysis (in certain conditions).
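For the aggregation side of this, here’s a minimal sketch of two standard pooling rules from the forecasting literature: a simple linear pool, and an extremized log-odds pool (extremizing is a real finding from this research, but the exponent of 2.5 here is illustrative, not a recommendation from any particular paper):

```python
import math

def linear_pool(probs: list[float]) -> float:
    """Average the crowd's probabilities directly."""
    return sum(probs) / len(probs)

def extremized_pool(probs: list[float], a: float = 2.5) -> float:
    """Average in log-odds space, then push the result away from 0.5,
    reflecting the finding that pooled forecasts tend to be underconfident."""
    mean_logodds = sum(math.log(p / (1 - p)) for p in probs) / len(probs)
    return 1 / (1 + math.exp(-a * mean_logodds))

crowd = [0.55, 0.60, 0.70, 0.52, 0.65]
print(f"linear pool:     {linear_pool(crowd):.2f}")      # 0.60
print(f"extremized pool: {extremized_pool(crowd):.2f}")  # about 0.75
```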
Focusing—I personally have found this to be incredibly useful across many scales of decision-making. I think it makes me wiser and more able to expose and tinker with my underlying reasons. Developed by Gendlin in 1978, adopted by CFAR, and generally “EA”-accepted now in my experience. Completely in the spirit of integral altruism.
Elizabeth Anderson is a philosopher who worked on the idea of a world composed of incomparable values. There is no reason we would necessarily live in a world composed of values that are comparable, and this might be a more accurate (though inconvenient) reflection of reality. I’m going to butcher this, but here is my summary: Anderson describes shifting between frameworks and appropriate actions according to the values being optimized for. For example, we don’t optimize for “grieving”, but it is meaningful to us. Examining the shift between optimization and how we actually practice meaningful activities could help us better pinpoint why we should shift out of EA optimization.
I hope these descriptions are a useful translation bridging these two approaches.
I think integral altruism and friends are important because I want to make better, more holistic, more informed choices. I want to take everything into account. I want the ability to be context-dependent and switch to the exact best approach according to the changing circumstances. It might be harder to do and harder to describe, but it is what we should strive for. I think we all want this.
edit: I think this part captures it best: “we want to empower individuals & projects with x and y so they can discern for themselves whether x or y is right for their context.”

Yes! Exactly! It’s really great when advice declares “who this is for.” I think int/a and adjacent groups could work towards bringing more clarity when holding such expanded levels of context. Make the context recognizable: what probably matters, what has been missed, what might not matter, how we can identify the options. When do “the individual altruist, the problem they are working on, and a plethora of other factors” matter, and when don’t they make a difference? Clarify, reveal, discern. Everything depends. We are in the state of trying to know what we can best do, under our individual circumstances.
To add to this, my sense is that int/a seems very thoughtful, but also quite slow-moving.
I liked the people I met at int/a meetups: very warm, friendly, considerate, and thoughtful. Great vibes!
My main critique would be that there could be more output: more meetups, forum posts like this one shared sooner (less polished is totally fine), more action.
Part of this is probably also due to int/a currently being entirely (?) volunteer-run. If int/a gets funding, I’d be excited to see more output.