[I'm going to adapt some questions that I or other people asked in the recent Long-Term Future Fund and Animal Welfare Fund AMAs.]
How much do you think you would've granted in this recent round if the total funding available to the EAIF had been ~$5M? ~$10M? ~$20M?
What do you think is your main bottleneck to giving more? Some possibilities that come to mind:
Available funding
Good applicants with good proposals for implementing good project ideas
And to the extent that this is your bottleneck, do yo
Grantmaker capacity to evaluate applications
Maybe this should capture both whether they have time and whether they have techniques or abilities to evaluate project ideas whose expected value seems particularly hard to assess
Grantmaker capacity to solicit or generate new project ideas
Fundamental, macrostrategic, basic, or crucial-considerations-like work that could aid in generating project ideas and evaluating applications
E.g., it sounds like this would've been relevant to Max Daniel's views on the IIDM working group in the recent round
To the extent that you're bottlenecked by the number of good applications, or would be bottlenecked by that if you were funded more, is that because (or do you expect it'd be because) there are too few applications in general, or too low a proportion that are high-quality?
When an application isn't sufficiently high-quality, is that usually due to the quality of the idea, the quality of the applicant, or a mismatch between the idea and the applicant's skillset (e.g., the applicant does seem highly generally competent, but lacks a specific, relevant skill)?
If there are too few applicants, or too few with relevant skills, is this because there are too few such people interested in EA infrastructure stuff, or because there probably are such people who are interested in that stuff but they're applying less often than would be ideal?
(It seems like answers to those questions could inform whether EAIF should focus on generating more ideas, finding more people from within EA who could execute ideas, finding more people from outside of EA who could execute ideas, or improving the match between ideas and people.)
Answering these thoroughly would be really tricky, but here are a few off-the-cuff thoughts:
1. Tough to tell. My intuition is "the same amount as I did", because I was happy with the amount I could grant to each of the recipients I granted to, and I didn't have time to look at more applications than I did. On the other hand, I could imagine that if the fund had significantly more funding, that would seem to provide a stronger mandate for trying things out and taking risks; so maybe that would have inclined me to spend less time evaluating each grant and use some money to do active grantmaking, or maybe it would have inclined me to fund one or two of the grants that I turned down. I also expect to be less time-constrained in future because we won't be doing an entire quarter's grants in one round, and because there will be less "getting up to speed".
2. Probably most of these are some bottleneck, and they also interact:
- I had pretty limited capacity this round, and hope to have more in future. Some of that was also to do with not knowing much about some particular space and the plausible interventions in that space, so it was a knowledge constraint. Some was to do with finding the most efficient way to come to an answer.
- It felt to me like there was some bottleneck of great applicants with great proposals. Some proposals stood out fairly quickly as being worth funding to me, so I expect I would have been able to fund more grants had there been more of these. It's possible some grants we didn't fund would have seemed worth funding had the proposal been clearer / more specific.
- There were macrostrategic questions the grantmakers disagreed over, for example the extent to which people working in academia should focus on doing good research of their own versus encouraging others to do relevant research. There are also such questions that I think didn't affect any of our grants this time but that I expect will in future, such as how to prioritise spreading ideas like "you can donate extremely cost-effectively to these global health charities" versus spreading more generalised EA principles.
3. The proportion of good applications was fairly high compared to my expectation (though of course the fewer applications we reject, the faster we can give out grants, so until we're granting to everyone who applies, there's always a sense in which the proportion of good applications is bottlenecking us). The proportion of applications that seemed pretty clearly great, were well thought through and ready to go as initially proposed, and that the committee agreed on, seemed maybe lower than I might have expected.
4. I think I noticed some of each of these, and it's a little tough to say, because the better the applicant, the more likely they are to come up with good ideas and also to be well calibrated on their fit with the idea. If I could dial up just one of these, it would probably be the quality of the ideas.
5. One worry I have is that many people who do well early in life are encouraged to do fairly traditional things: for example, they get offered good jobs and scholarships to go down set career tracks. By comparison, people who come into their own later on (e.g. late in university) are more in a position to be thinking independently about what to work on. Therefore my sense is that community building in general is systematically missing out on some of the people who would be best at it, because it's a kind of weird, non-standard thing to work on. So I guess I lean towards the "too few people interested in EA infrastructure stuff" side.
(Just wanted to say that I agree with Michelle.)
Re 1: I don't think I would have granted more.
Re 2: Mostly "good applicants with good proposals for implementing good project ideas" and "grantmaker capacity to solicit or generate new project ideas", where the main bottleneck on the second of those isn't really generating the basic idea but coming up with a more detailed proposal, figuring out who to pitch on it, etc.
Re 3: I think I would be happy to evaluate more grant applications and have a correspondingly higher bar. I don't think that low-quality applications make my life as a grantmaker much worse; if you're reading this, please submit your EAIF application rather than worrying that it is not worth our time to evaluate.
Re 4: It varies. Mostly it isn't that the applicant lacks a specific skill.
Re 5: There are a bunch of things that have to align in order for someone to make a good proposal. There has to be a good project idea; there has to be someone who would be able to make that work; they have to know about the idea and apply for funding for it; and they need access to whatever other resources they need. Many of these steps can fail. E.g., probably there are people who I'd love to fund to do a particular project, but no one has had the idea for the project, or someone has had the idea but that person hasn't heard about it, hasn't decided that it's promising, or doesn't want to try it because they don't have access to some other resource. My current guess is that good project ideas exist, and so do people who'd be good at doing them, and that if we can connect the people to the projects and the required resources, we could make some great grants. I hope to spend more of my time doing this in future.
How much do you think you would've granted in this recent round if the total funding available to the EAIF had been ~$5M? ~$10M? ~$20M?
I can't think of any specific grant decision this round for which I think this would have made a difference. Maybe I would have spent more time thinking about how successful grantees might be able to utilize more money than they applied for, and on discussing this with grantees.
Overall, I think there might be a "paradoxical" effect: with much more total funding, I might have spent less time evaluating grant applications, and therefore made fewer grants this round. This is because, under that assumption, I would feel more strongly that we should frontload building the capacity to make more, larger, and higher-value grants in the future, as opposed to optimizing the decisions on the grant applications we happened to get now. E.g., I might have spent more time on:
Generating leads for, and otherwise helping with, recruiting additional fund managers
Active grantmaking
"Structural" improvements to the fund, e.g., improving our discussions and voting methods
On 2, I agree with Buck that the two key bottlenecks (especially if we weight grants by their expected impact) were "Good applicants with good proposals for implementing good project ideas" and "Grantmaker capacity to solicit or generate new project ideas".
I think I've had a stronger sense than at least some other fund managers that "Grantmaker capacity to evaluate applications" was also a significant bottleneck, though I would rank it somewhat below the above two, and I think it tends to be a larger bottleneck for grants that are more "marginal" anyway, which diminishes its impact-weighted importance. I'm still somewhat worried that our lack of capacity (in both time and some abilities) could in some cases lead to a "false negative" on a highly impactful grant, especially given our current way of aggregating opinions between fund managers.
5. If there are too few applicants, or too few with relevant skills, is this because there are too few such people interested in EA infrastructure stuff, or because there probably are such people who are interested in that stuff but they're applying less often than would be ideal?
I think both of these are significant effects. I suspect I might be more worried than others about "good people applying less often than would be ideal", but I'm not sure.
4. When an application isn't sufficiently high-quality, is that usually due to the quality of the idea, the quality of the applicant, or a mismatch between the idea and the applicant's skillset (e.g., the applicant does seem highly generally competent, but lacks a specific, relevant skill)?
All of these have happened. I agree with Buck that "applicant lacks a highly specific skill" seems uncommon; I think the cases of "mismatch between the idea and the applicant" are broader/fuzzier.
I don't have an immediate sense that any of them is particularly common.
Re 3, I'm not sure I understand the question and feel a bit confused about how to answer it directly, but I agree with Buck:
I think I would be happy to evaluate more grant applications and have a correspondingly higher bar. I don't think that low-quality applications make my life as a grantmaker much worse; if you're reading this, please submit your EAIF application rather than worrying that it is not worth our time to evaluate.
Fundamental, macrostrategic, basic, or crucial-considerations-like work that could aid in generating project ideas and evaluating applications
E.g., it sounds like this would've been relevant to Max Daniel's views on the IIDM working group in the recent round
Hmm, I'm not sure I agree with this. Yes, if I had access to a working crystal ball, that would have helped; but for realistic versions of "knowing more about macrostrategy", I can't immediately think of anything that would have helped with evaluating the IIDM grant in particular. (There are other things that would have helped, but I don't think they have to do with macrostrategy, crucial considerations, etc.)
This surprises me. Re-reading your writeup, I think my impression was based on the section "What is my perspective on 'improving institutions'?" I'd be interested to hear your take on how I might be misinterpreting that section or misinterpreting this new comment of yours. I'll first quote the section in full, for the sake of other readers:
I am concerned that "improving institutions" is a hard area to navigate, and that the methodological and strategic foundations for how to do this well have not yet been well developed. I think that in many cases, literally no one in the world has a good, considered answer to the question of whether improving the decision quality of a specific institution along a specific dimension would have net good or net bad effects on the world. For instance, how to weigh the effects from making the US Department of Defense more "rational" at forecasting progress in artificial intelligence? Would reducing bureaucracy at the UN Development Programme have a positive or negative net effect on global health over the next decade? I believe that we are often deeply uncertain about such questions, and that any tentative answer is liable to be overturned upon the discovery of an additional crucial consideration.
At the very least, I think that an attempt at answering such questions would require extensive familiarity with relevant research (e.g., in macrostrategy); I would also expect that it often hinges on a deep understanding of the relevant domains (e.g., specific policy contexts, specific academic disciplines, specific technologies, etc.). I am therefore tentatively skeptical about the value of a relatively broad-strokes strategy aimed at improving institutions in general.
I am particularly concerned about this because some prominent interventions for improving institutions would primarily result in making a given institution better at achieving its stated goals. For instance, I think this would often be the case when promoting domain-agnostic decision tools or policy components such as forecasting or nudging. Until alternative interventions are uncovered, I would therefore expect that some people interested in improving institutions would default to pursuing these "known" interventions.
To illustrate some of these concerns, consider the history of AI policy & governance. I know of several EA researchers who share the impression that early efforts in this area were "bottlenecked by entangled and under-defined research questions that are extremely difficult to resolve", as Carrick Flynn noted in an influential post. My impression is that this is still somewhat true, but that there has been significant progress in reducing this bottleneck since 2017. However, crucially, my loose impression (often from the outside) is that this progress was to a large extent achieved by highly focused efforts: that is, people who focused full-time on AI policy & governance, and who made large upfront investments into acquiring knowledge and networks highly specific to the AI domain or particular policy contexts such as the US federal government (or could draw on past experience in these areas). I am thinking of, for example, background work at 80,000 Hours by Niel Bowerman and others (e.g. here) and the AI governance work at FHI by Allan Dafoe and his team.
Personally, when I think of what work in the area of "improving institutions" I'm most excited about, my (relatively uninformed and tentative) answer is: adopt a similar approach for other important cause areas; i.e., find people who are excited to do the groundwork on, e.g., the institutions and policy areas most relevant to official development assistance, animal welfare (e.g. agricultural policy), nuclear security, biosecurity, extreme climate change, etc. I think that doing this well would often be a full-time job, and would require a rare combination of skills and good networks with both "EA researchers" and "non-EA" domain experts as well as policymakers.
It seems to me that "Fundamental, macrostrategic, basic, or crucial-considerations-like work" would be relevant to cases like this (not just this specific grant application) in multiple ways:
I think the basic idea of differential technological development or differential progress is relevant here, and the development and dissemination of that idea was essentially macrostrategy research (though of course other versions of the idea predated EA-aligned macrostrategy research)
Further work to develop or disseminate this idea could presumably help evaluate future grant applications like this
This also seems like evidence that other versions of this kind of work could be useful for grantmaking decisions
The same goes for the basic concepts of crucial considerations, disentanglement research, and cluelessness (I personally don't think the latter is useful, but you seem to), which you link to in that report
In these cases, I think there's less useful work to be done further elaborating the concepts (maybe there would be for cluelessness, but currently I think we should instead replace that concept), but the basic terms and concepts seem to have improved our thinking, and macrostrategy-like work may find more such things
It seems it would also be useful and tractable to at least somewhat improve our understanding of which "intermediate goals" would be net positive (and which would be especially positive), and which institutions are more likely to advance or hinder those goals given various changes to their stated goals, decision-making procedures, etc.
Mapping the space of relevant actors and working out what sort of goals, incentives, decision-making procedures, capabilities, etc. they already have also seems relevant
E.g., http://gcrinstitute.org/2020-survey-of-artificial-general-intelligence-projects-for-ethics-risk-and-policy/
And I'd call that "basic" research
I essentially mean "not aimed at just prioritising or implementing specific interventions, but rather at better understanding features of the world that are relevant to many different interventions"
See also https://forum.effectivealtruism.org/posts/ham9GEsgvuFcHtSBB/should-marginal-longtermist-donations-support-fundamental-or
I think the expected value of the IIDM group's future activities, and thus the expected impact of a grant to them, is sensitive to how much relevant fundamental, macrostrategic, etc., kind of work they will have access to in the future.
Given the nature of the activities proposed by the IIDM group, I don't think it would have helped me with the grant decision if I had known more about macrostrategy. It would have been different if they had proposed a more specific or "object-level" strategy, e.g., lobbying for a certain policy.
I mean, it would have helped me somewhat, but I think it pales in importance compared to things like "having more first-hand experience in/with the kind of institutions the group hopes to improve", "more relevant knowledge about institutions, including theoretical frameworks for how to think about them", and "having seen more work by the group's leaders, or otherwise being better able to assess their abilities and potential".
[ETA: Maybe it's also useful to add that, on my inside view, free-floating macrostrategy research isn't that useful, certainly not for concrete decisions the IIDM group might face. This also applies to most of the things you suggest, which strike me as "too high-level" and "too shallow" to be that helpful, though I think some "grunt work" like "mapping out actors" would help a bit, albeit it's not what I typically think of when saying "macrostrategy".
Nor is "object-level" work that ignores macrostrategic uncertainty useful.
I think often the only thing that helps is to have people do the object-level work who are both excellent at that object-level work and have the kind of opaque "good judgment" that allows them to be appropriately responsive to macrostrategic considerations and to reach reflective equilibrium between the incentives suggested by proxies and high-level considerations around "how valuable is that proxy anyway?". Unfortunately, such people seem extremely rare. I also think (and here my view probably differs from that of others who would endorse most of the other things I'm saying here) that we're not nearly as good as we could be at identifying people who may already be in the EA community and have the potential to become great at this, and at identifying and "teaching" some of the practice-able skills relevant for this. (I think there are also some more "innate" components.)
This is all slightly exaggerated to gesture at the view I have, and I'm not sure how much weight I'd want to give that inside view when making, e.g., funding decisions.]
Thanks, these are interesting perspectives.
---
I think to some extent there's just a miscommunication here, rather than a difference in views. I intended "Fundamental, macrostrategic, basic, or crucial-considerations-like work" to cover a lot of things; I mainly wanted to draw a distinction between (a) all research "upstream" of grantmaking, and (b) things like "Available funding", "Good applicants with good proposals for implementing good project ideas", "Grantmaker capacity to evaluate applications", and "Grantmaker capacity to solicit or generate new project ideas".
E.g., I'd include "more relevant knowledge about institutions, including theoretical frameworks for how to think about them" in the bucket I was trying to gesture to.
So not just e.g. Bostrom-style macrostrategy work.
On reflection, I probably should've also put "intervention research" in there, and added the sub-question "And do you think one of these types of research would be more useful for your grantmaking than the others?"
---
But then your "ETA" part is less focused on macrostrategy specifically, and there I think my current view does differ from yours (which makes yours interesting and thought-provoking).