Why are you sceptical of IIDM, meta-science, etc.? Would love to hear arguments against.
The short argument for them is that insofar as making the future go well means dealing with uncertainty and things that are hard to predict, these seem like exactly the kinds of interventions to work on (as set out here).
A common criticism of economic growth and scientific progress is that they speed up technological development, which could mean greater x-risk. This is why many EAs prefer differential growth/progress and focusing on specific risks.
On the other hand, there are arguments that economic growth and technological development could reduce x-risk and help us achieve existential security (e.g. here, and Will MacAskill alludes to a similar argument in his recent EA Global fireside chat at around the 7 minute mark).
Overall there seems to be disagreement amongst prominent EAs, and the question remains quite unclear.
With regards to IIDM I don’t see why that wouldn’t be net positive.
Yeah, since almost all x-risk is anthropogenic, our prior for economic growth and scientific progress is very close to 50-50, and I have specific empirical (though still not very detailed) reasons to update in the negative direction (at least on the margin, as of 2022).
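To make the shape of that reasoning concrete, here is a minimal sketch of the update in odds form. The 50-50 prior comes from the comment above; the likelihood ratio is a made-up number purely for illustration, not a figure anyone in this thread has defended.

```python
# Toy Bayes update on "marginal economic/scientific growth is net positive
# for the long-term future". Prior is ~50-50 because almost all x-risk is
# anthropogenic; the likelihood ratio below is hypothetical.

def update_odds(prior_p: float, likelihood_ratio: float) -> float:
    """Posterior probability after an odds-form Bayesian update."""
    prior_odds = prior_p / (1 - prior_p)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.5  # near 50-50 prior, as argued above
lr = 0.7     # hypothetical: evidence mildly favours "net negative"

print(f"P(net positive) ~ {update_odds(prior, lr):.2f}")  # ~ 0.41
```

Under these made-up numbers the update is small, which matches the hedged "at least on the margin" framing above.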
With regards to IIDM I don’t see why that wouldn’t be net positive.
I think this disentanglement by Lizka might be helpful*, especially if (like me) your empirical views about external institutions are a bit more negative than Lizka’s.
*Disclaimer: I supervised her when she was writing this
Hi. Thank you so much for the link; somehow I had missed that post by Lizka. It was great reading :-)
To flag, however, I am still a bit confused. Lizka’s post says “Personally, I think IIDM-style work is a very promising area for effective altruism”, so I don’t understand how you get from that to IIDM being net-negative. I also don’t understand what the phrase “especially if (like me) your empirical views about external institutions are a bit more negative than Lizka’s” means (if you think institutions are generally not doing good, then IIDM might be more useful, not less).
I am not trying to be critical here; I am genuinely very keen to understand the case against. I work in this space, so it would be really great to find people who think this is not useful and to understand their point of view.
Not to speak for Linch, but my understanding of Lizka’s overall point is that IIDM-style work that is not sufficiently well-targeted could be net-negative. A lot of people think of IIDM work primarily from a tools- and techniques-based lens (think e.g. forecasting), which means that more advanced tools could be used by any institution to further its aims, no matter whether those aims are good/productive or not. (They could also be put to use to further good aims but still not result in better decisions because of other institutional dysfunctions.) This lens is in contrast to the approach that Effective Institutions Project is taking to the issue, which considers institutions on a case-by-case basis and tries to understand what interventions would cause those specific institutions to contribute more to the net good of humanity.
This lens is in contrast to the approach that Effective Institutions Project is taking to the issue, which considers institutions on a case-by-case basis and tries to understand what interventions would cause those specific institutions to contribute more to the net good of humanity.
I’m excited about this! Do people at the Effective Institutions Project consider these institutions through an LT lens? If so, do they mostly take a “broad tent” approach to LT impacts, or more of a “targeted/narrow theory of change” approach?
Yes, we have an institutional prioritization analysis in progress that uses both neartermist and longtermist lenses explicitly and also tries to triangulate between them (in the spirit of Sam’s advice that “Doing Both Is Best”). We’ll be sending out a draft for review towards the end of this month and I’d be happy to include you in the distribution list if interested.
With respect to LT impact/issues, it is a broad-tent approach, although the theory of change for making change in an institution could be more targeted depending on the specific circumstances of that institution.
I appreciate the (politer than me) engagement!

These are the key diagrams from Lizka’s post:

[Diagrams: institutions plotted by decision quality against value alignment]

The key simplifying assumption is one in which decision quality is orthogonal to value alignment. I don’t believe this is literally true, but it is a good start. MichaelA et al.’s BIP (Benevolence, Intelligence, Power) ontology* is also helpful here.
If we think Lizka’s B in the first diagram (“a well-run government”) is only weakly positive or neutral on the value alignment axis from an LT perspective, and most other dots are negative, we get the simplified result that what Lizka calls “un-targeted, value-neutral IIDM”, that is, improving the decision quality of unaligned actors (which is roughly what much of EA work/grantmaking in IIDM looks like in practice, e.g. in forecasting or alternative voting), has broadly the same effect as speeding up technological progress or economic growth.
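To see why the conclusion follows from the simplifying assumption, here is a minimal sketch (my framing, not Lizka’s) in which an institution’s LT impact is its value alignment times its decision quality. All alignments, decision-quality scores, and the size of the boost are hypothetical numbers chosen only to mirror the scenario above.

```python
# Toy model of un-targeted, value-neutral IIDM: impact = alignment * decision
# quality. One weakly positive "well-run government" (B); most other actors
# negative from an LT perspective. All numbers are hypothetical.

institutions = {
    # name: (LT value alignment in [-1, 1], decision quality in [0, 1])
    "B: well-run government": (0.1, 0.8),
    "actor C": (-0.3, 0.5),
    "actor D": (-0.2, 0.6),
}

def total_impact(insts, dq_boost=0.0):
    # Un-targeted IIDM raises every institution's decision quality equally.
    return sum(align * (dq + dq_boost) for align, dq in insts.values())

print(f"{total_impact(institutions):.2f}")       # baseline: -0.19
print(f"{total_impact(institutions, 0.2):.2f}")  # after a uniform boost: -0.27
```

With mostly negative alignments, a uniform boost to decision quality makes the total worse, which is the sense in which un-targeted IIDM behaves like generic technological or economic acceleration.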
I’m more optimistic about IIDM that’s either more targeted (e.g. specialized in improving the decision quality of EA institutions, or perhaps via picking a side in great power stuff) or value-aligned (e.g. having predictive setups where we predict that certain types of IIDM work differentially benefit the LT future over other goals an institution can have; I think your(?) work on “institutions for future generations” plausibly falls here).
One way to salvage these efforts’ LT impact is to claim that work which apparently looks like “un-targeted, value-neutral IIDM” (e.g. funding academic work in forecasting or campaigning for approval voting) is in practice pretty targeted or value-gnostic, e.g. because EAs are the only ones who care about forecasting.
A secondary reason I’m leery (not covered by Lizka’s post) is that influence goes both ways: I worry that LT people who get stuck on IIDM may (eventually) get corrupted by the epistemics or values of the institutions they’re trying to influence, or those of other allies. I don’t think this is a dominant consideration, however, and ultimately I’d reluctantly lean towards EA being too small by ourselves to save the world without at least risking this form of corruption**.
*MichaelA wrote this while he was at Convergence Analysis; he now works at RP. As an aside, I do think there’s a salient bias I have where I’m more likely to read/seriously consider work by coworkers than other work of equivalent merit; unfortunately, I do not currently have active plans to fix this bias.
**Aside 2: I’m worried that my word choice in this post is too strong, with phrases like “corruption” etc. I’d be interested in more neutral phrasing that conveys the same concepts.
Super thanks for the lengthy answer. I think we are mostly on the same page.

Decision quality is orthogonal to value alignment. … I’m more optimistic about IIDM that’s either more targeted or value-aligned.
Agree. And yes, to date I have focused on targeted interventions (e.g. improving government risk management functions) and on value-aligning orgs (e.g. institutions for Future Generations).
One way to salvage these efforts’ LT impact is to claim that work which apparently looks like “un-targeted, value-neutral IIDM” (e.g. funding academic work in forecasting or campaigning for approval voting) is in practice pretty targeted or value-gnostic.
Agree. FWIW I think I would make this case about approval voting, as I believe aligning powerful actors’ (elected officials’) incentives with the population’s incentives is a form of value-aligning. Not sure I would make this case for forecasting, but I’m open to hearing others make the case.
So where, if anywhere, do we disagree?
Influence goes both ways: I worry that LT people who get stuck on IIDM may (eventually) get corrupted by the epistemics or values of the institutions they’re trying to influence, or those of other allies.
Disagree. I don’t see that as a worry: I have not seen any evidence of this happening, and there are 100s of EA-aligned folk in the UK policy space. Where are you from? I have heard this worry so far only from people in the USA; maybe there are cultural differences, or this has been happening there. Insofar as it is a risk, I would assume it might be less bad for actors working outside of institutions (campaigners, lobbyists), so I do think more EA-aligned institutions in this domain could be useful.
If we think Lizka’s B in the first diagram (“a well-run government”) is only weakly positive or neutral on the value alignment axis from an LT perspective
I think a well-run government is pretty positive. Maybe it depends on the government (as you say, maybe there is a case for picking sides), and my experience is UK-based. But, for example, my understanding is that there is some evidence that improved diplomatic practice is good for avoiding conflicts, and that mismanagement of central government functions can lead to periods of great instability (e.g. financial crises). Also, a government is a collection of many smaller institutions; when you get into the weeds of it, it becomes easier to pick and choose the sub-institutions that matter more.