This surprises me. Re-reading your writeup, I think my impression was based on the section "What is my perspective on 'improving institutions'?" I'd be interested to hear your take on how I might be misinterpreting that section or misinterpreting this new comment of yours. I'll first quote the section in full, for the sake of other readers:
I am concerned that "improving institutions" is a hard area to navigate, and that the methodological and strategic foundations for how to do this well have not yet been well developed. I think that in many cases, literally no one in the world has a good, considered answer to the question of whether improving the decision quality of a specific institution along a specific dimension would have net good or net bad effects on the world. For instance, how to weigh the effects of making the US Department of Defense more "rational" at forecasting progress in artificial intelligence? Would reducing bureaucracy at the UN Development Programme have a positive or negative net effect on global health over the next decade? I believe that we are often deeply uncertain about such questions, and that any tentative answer is liable to be overturned upon the discovery of an additional crucial consideration.
At the very least, I think that an attempt at answering such questions would require extensive familiarity with relevant research (e.g., in macrostrategy); I would also expect that it often hinges on a deep understanding of the relevant domains (e.g., specific policy contexts, specific academic disciplines, specific technologies, etc.). I am therefore tentatively skeptical about the value of a relatively broad-strokes strategy aimed at improving institutions in general.
I am particularly concerned about this because some prominent interventions for improving institutions would primarily result in making a given institution better at achieving its stated goals. For instance, I think this would often be the case when promoting domain-agnostic decision tools or policy components such as forecasting or nudging. Until alternative interventions are uncovered, I would therefore expect that some people interested in improving institutions would default to pursuing these "known" interventions.
To illustrate some of these concerns, consider the history of AI policy & governance. I know of several EA researchers who share the impression that early efforts in this area were "bottlenecked by entangled and under-defined research questions that are extremely difficult to resolve", as Carrick Flynn noted in an influential post. My impression is that this is still somewhat true, but that there has been significant progress in reducing this bottleneck since 2017. However, crucially, my loose impression (often from the outside) is that this progress was to a large extent achieved by highly focused efforts: that is, people who focused full-time on AI policy & governance, and who made large upfront investments into acquiring knowledge and networks highly specific to the AI domain or particular policy contexts such as the US federal government (or could draw on past experience in these areas). I am thinking of, for example, background work at 80,000 Hours by Niel Bowerman and others (e.g. here) and the AI governance work at FHI by Allan Dafoe and his team.
Personally, when I think of what work in the area of "improving institutions" I'm most excited about, my (relatively uninformed and tentative) answer is: Adopt a similar approach for other important cause areas; i.e., find people who are excited to do the groundwork on, e.g., the institutions and policy areas most relevant to official development assistance, animal welfare (e.g. agricultural policy), nuclear security, biosecurity, extreme climate change, etc. I think that doing this well would often be a full-time job, and would require a rare combination of skills and good networks with both "EA researchers" and "non-EA" domain experts as well as policymakers.
It seems to me like things like "Fundamental, macrostrategic, basic, or crucial-considerations-like work" would be relevant to things like this (not just this specific grant application) in multiple ways:
I think the basic idea of differential technological development or differential progress is relevant here, and the development and dissemination of that idea was essentially macrostrategy research (though of course other versions of the idea predated EA-aligned macrostrategy research)
Further work to develop or disseminate this idea could presumably help evaluate future grant applications like this
This also seems like evidence that other versions of this kind of work could be useful for grantmaking decisions
Same goes for the basic concepts of crucial considerations, disentanglement research, and cluelessness (I personally don't think the latter is useful, but you seem to), which you link to in that report
In these cases, I think there's less useful work to be done further elaborating the concepts (maybe there would be for cluelessness, but currently I think we should instead replace that concept), but the basic terms and concepts seem to have improved our thinking, and macrostrategy-like work may find more such things
It seems it would also be useful and tractable to at least somewhat improve our understanding of which "intermediate goals" would be net positive (and which would be especially positive) and which institutions are more likely to advance or hinder those goals given various changes to their stated goals, decision-making procedures, etc.
Mapping the space of relevant actors and working out what sort of goals, incentives, decision-making procedures, capabilities, etc. they already have also seems relevant
E.g., http://gcrinstitute.org/2020-survey-of-artificial-general-intelligence-projects-for-ethics-risk-and-policy/
And I'd call that "basic" research
I essentially mean "not aimed at just prioritising or implementing specific interventions, but rather at better understanding features of the world that are relevant to many different interventions"
See also https://forum.effectivealtruism.org/posts/ham9GEsgvuFcHtSBB/should-marginal-longtermist-donations-support-fundamental-or
I think the expected value of the IIDM group's future activities, and thus the expected impact of a grant to them, is sensitive to how much relevant fundamental, macrostrategic, etc., kind of work they will have access to in the future.
Given the nature of the activities proposed by the IIDM group, I don't think it would have helped me with the grant decision if I had known more about macrostrategy. It would have been different if they had proposed a more specific or "object-level" strategy, e.g., lobbying for a certain policy.
I mean it would have helped me somewhat, but I think it pales in importance compared to things like "having more first-hand experience in/with the kind of institutions the group hopes to improve", "more relevant knowledge about institutions, including theoretical frameworks for how to think about them", and "having seen more work by the group's leaders, or otherwise being better able to assess their abilities and potential".
[ETA: Maybe it's also useful to add that, on my inside view, free-floating macrostrategy research isn't that useful, certainly not for concrete decisions the IIDM group might face. This also applies to most of the things you suggest, which strike me as "too high-level" and "too shallow" to be that helpful, though I think some "grunt work" like "mapping out actors" would help a bit, even if it's not what I typically think of when saying macrostrategy.
Neither is "object-level" work that ignores macrostrategic uncertainty useful.
I think often the only thing that helps is to have people doing the object-level work who are both excellent at that work and have the kind of opaque "good judgment" that allows them to be appropriately responsive to macrostrategic considerations and reach reflective equilibrium between incentives suggested by proxies and high-level considerations around "how valuable is that proxy anyway?". Unfortunately, such people seem extremely rare. I also think (and here my view probably differs from that of others who would endorse most of the other things I'm saying here) that we're not nearly as good as we could be at identifying people who may already be in the EA community and have the potential to become great at this, and at identifying and "teaching" some of the practice-able skills relevant for this. (I think there are also some more "innate" components.)
This is all slightly exaggerated to gesture at the view I have, and I'm not sure how much weight I'd want to give that inside view when making, e.g., funding decisions.]
Thanks, these are interesting perspectives.
---
I think to some extent there's just a miscommunication here, rather than a difference in views. I intended to put a lot of things in the "Fundamental, macrostrategic, basic, or crucial-considerations-like work" bucket; I mainly wanted to draw a distinction between (a) all research "upstream" of grantmaking, and (b) things like Available funding, Good applicants with good proposals for implementing good project ideas, Grantmaker capacity to evaluate applications, and Grantmaker capacity to solicit or generate new project ideas.
E.g., I'd include "more relevant knowledge about institutions, including theoretical frameworks for how to think about them" in the bucket I was trying to gesture to.
So not just e.g. Bostrom-style macrostrategy work.
On reflection, I probably should've also put "intervention research" in there, and added as a sub-question "And do you think one of these types of research would be more useful for your grantmaking than the others?"
---
But then your "ETA" part is less focused on macrostrategy specifically, and there I think my current view does differ from yours (making yours interesting + thought-provoking).