The key simplifying assumption is one in which decision quality is orthogonal to value alignment. I don't believe this is literally true, but it's a good starting point. MichaelA et al.'s BIP (Benevolence, Intelligence, Power) ontology* is also helpful here.
If we think of Lizka's B in the first diagram (“a well-run government”) as only weakly positive or neutral on the value alignment axis from an LT perspective, and most other dots as negative, we get the simplified result that what Lizka calls “un-targeted, value-neutral IIDM” (that is, improving the decision quality of unaligned actors, which is roughly what much of EA work/grantmaking in IIDM looks like in practice, e.g. in forecasting or alternative voting) broadly has the same effect as improving technological progress or economic growth.
I'm more optimistic about IIDM that's either more targeted (e.g. specialized in improving the decision quality of EA institutions, or perhaps via picking a side in great power stuff) or value-aligned (e.g. having predictive setups where we predict that certain types of IIDM work differentially benefit the LT future over other goals an institution may have; I think your(?) work on “institutions for future generations” plausibly falls here).
One way to salvage these efforts' LT impact is to claim that work which apparently looks like “un-targeted, value-neutral IIDM” (e.g. funding academic work in forecasting or campaigning for approval voting) is in practice pretty targeted or value-gnostic, e.g. because EAs are the only ones who care about forecasting.
A secondary reason I'm leery (one not covered in Lizka's post) is that influence goes both ways: I worry that LT people who get stuck on IIDM may (eventually) be corrupted by the epistemics or values of the institutions they're trying to influence, or those of other allies. I don't think this is a dominant consideration, however, and ultimately I'd reluctantly lean towards EA being too small to save the world by ourselves without at least risking this form of corruption**.
*MichaelA wrote this while he was at Convergence Analysis; he now works at RP. As an aside, I do think there's a salient bias I have where I'm more likely to read/seriously consider work by coworkers than other work of equivalent merit; unfortunately, I do not currently have active plans to fix this bias.
**Aside 2: I’m worried that my word choice in this post is too strong, with phrases like “corruption” etc. I’d be interested in more neutral phrasing that conveys the same concepts.
Super thanks for the lengthy answer. I think we are mostly on the same page.

"Decision quality is orthogonal to value alignment. … I'm more optimistic about IIDM that's either more targeted or value-aligned."
Agree. And yes, to date I have focused on targeted interventions (e.g. improving government risk management functions) and on value-aligning orgs (e.g. institutions for Future Generations).
"One way to salvage these efforts' LT impact is to claim that work which apparently looks like “un-targeted, value-neutral IIDM” (e.g. funding academic work in forecasting or campaigning for approval voting) is in practice pretty targeted or value-gnostic."
Agree. FWIW I think I would make this case for approval voting, as I believe aligning powerful actors' (elected officials') incentives with the population's incentives is a form of value-aligning. I'm not sure I would make this case for forecasting, but I'd be open to hearing others make it.
So where, if anywhere, do we disagree?
"A secondary reason I'm leery … is that influence goes both ways: I worry that LT people who get stuck on IIDM may (eventually) be corrupted by the epistemics or values of the institutions they're trying to influence, or those of other allies."
Disagree. I don't see that as a worry. I have not seen any evidence of any cases of this, and there are 100s of EA-aligned folk in the UK policy space. Where are you from? I have heard this worry so far only from people in the USA; maybe there are cultural differences, or maybe this has been happening there. Insofar as it is a risk, I would assume it might be less bad for actors working outside of institutions (campaigners, lobbyists), so I do think more EA-aligned institutions in this domain could be useful.
"If we think of Lizka's B in the first diagram (“a well-run government”) as only weakly positive or neutral on the value alignment axis from an LT perspective …"
I think a well-run government is pretty positive. Maybe it depends on the government (as you say, maybe there is a case for picking sides), and my experience is UK-based. But, for example, my understanding is that there is some evidence that improved diplomacy practice is good for avoiding conflicts, and that mismanagement of central government functions can lead to periods of great instability (e.g. financial crises). Also, a government is a collection of many smaller institutions, and when you get into the weeds of it, it becomes easier to pick and choose the sub-institutions that matter more.
I appreciate the (politer than me) engagement!