I think the marginal vs. total distinction is confused. Maximizing personal impact, while taking into account externalities (as EAs do), will be equivalent to maximizing collective impact.
An Effective Altruist, by focusing on impact at the margin, may ask questions such as: What impact will my next $100 donation make in this charity vs that charity?
It seems you’re trying to set up a distinction between EA focusing on small issues, and systems change focusing on big issues. But this is a strawman. Even if an individual makes a $100 donation, the cause they’re donating to can still target a systemic issue. In any case, there are now EAs making enormous donations: “What if you were in a position to give away billions of dollars to improve the world? What would you do with it?”
This approach invites sustained collective tolerance of deep uncertainty, in order to make space for new cultural norms to emerge. Linear, black-and-white thinking risks compromising this creative process before desirable novel realities have fully formed in a self-sustaining way.
while taking into account externalities (as EAs do)
I think that the current EA methodology to take into account impact externalities is incomplete. I am not aware of any way to reliably quantify flow-through effects, or to quantify how a particular cause area indirectly affects the impact of other cause areas.
The concept of total impact, if somehow integrated into our cause prioritisation methodology, may help us to account for impact externalities more accurately. I concede that total impact may be too simplistic a concept...
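To make the point concrete, here is a deliberately simplified toy model of my own construction (not an established EA method, and the interaction term is an invented assumption): when two cause areas have flow-through effects on each other, the marginal impact of a dollar in one cause depends on funding levels in the other, so marginal estimates made in isolation can mis-state total impact.

```python
# Toy model: two cause areas A and B whose impacts interact.
# The cross-term stands in for flow-through effects between causes;
# its form and size are illustrative assumptions, not empirical claims.

def total_impact(a: float, b: float, interaction: float = 0.5) -> float:
    """Total impact from funding levels a and b, including a
    flow-through cross-term between the two causes."""
    return a + b + interaction * a * b

def marginal_a(a: float, b: float, eps: float = 1e-6) -> float:
    """Marginal impact of the next unit of funding in cause A,
    evaluated with cause B's funding held fixed."""
    return (total_impact(a + eps, b) - total_impact(a, b)) / eps

# With no funding in B, a marginal dollar in A is worth 1.0;
# with funding in B, the same marginal dollar in A is worth more,
# so evaluating A in isolation understates its impact.
print(round(marginal_a(10, 0), 3))   # ≈ 1.0
print(round(marginal_a(10, 10), 3))  # ≈ 6.0
```

The sketch only shows that marginal and total perspectives diverge once interactions exist; it says nothing about how to estimate the interaction terms, which is exactly the missing piece.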
For what it’s worth, I currently think the solution requires modelling the Earth as a complex system, clarifying top-level metrics to optimise the system for, and a probability weighted theory of change for the system as a whole.
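As a minimal sketch of what "probability weighted theory of change" could mean in practice: each pathway toward a top-level metric gets a success probability and an estimated impact, and pathways are compared by expected impact. All pathway names and numbers below are invented for illustration.

```python
# Hypothetical probability-weighted comparison of change pathways.
# Probabilities and impact figures are placeholders, not estimates.

pathways = [
    {"name": "policy reform",   "p_success": 0.2, "impact": 100.0},
    {"name": "direct delivery", "p_success": 0.9, "impact": 10.0},
    {"name": "field building",  "p_success": 0.5, "impact": 30.0},
]

def expected_impact(pathway: dict) -> float:
    """Probability-weighted impact on the chosen top-level metric."""
    return pathway["p_success"] * pathway["impact"]

# Rank pathways by expected impact on the top-level metric.
ranked = sorted(pathways, key=expected_impact, reverse=True)
for p in ranked:
    print(p["name"], expected_impact(p))
```

A real version would need the complex-systems model to supply the impact estimates, including the interaction effects discussed above; the point here is only the shape of the calculation.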
It seems you’re trying to set up a distinction between EA focusing on small issues, and systems change focusing on big issues.
I do not mean to say that EA focuses on small issues and systems change focuses on big issues. Rather, I see EA as having a robust (but incomplete) cause prioritisation methodology, and systems change as having a methodology that accounts well for complexity (but neglects cause prioritisation at the level of the Earth system as a whole).
This is pretty mystical.
On reflection, I think that conducting systems change projects in appropriate phases, with clear expectations for each phase, is a viable way to synthesise EA and systems change approaches and cultures. Specifically, a substantial research phase would typically be required to understand the system before one can know what interventions to prioritise.
For what it’s worth, I currently think the solution requires modelling the Earth as a complex system, clarifying top-level metrics to optimise the system for, and a probability weighted theory of change for the system as a whole.
I’d be interested in seeing this. Do you have anything written up?
Parts 3 and 5 of the article linked below explain this approach in more detail, although my thinking has moved on a bit since writing this.
There’s a good chance that these ideas will be refined and written up collaboratively in an applied context as part of GeM Labs’ Understanding and Optimising Policy project over the next year. If they are out of scope of this project, I intend to develop them independently and share my progress.
https://docs.google.com/document/d/1DFZ9OAb0g5dtQuZHbAfngwACQkgSpjqrpWWOeMrsq7o/edit?usp=sharing