[I’m going to adapt some questions from myself or other people from the recent Long-Term Future Fund and Animal Welfare Fund AMAs.]
How much do you think you would’ve granted in this recent round if the total funding available to the EAIF had been ~$5M? ~$10M? ~$20M?
What do you think is your main bottleneck to giving more? Some possibilities that come to mind:
Available funding
Good applicants with good proposals for implementing good project ideas
Grantmaker capacity to evaluate applications
Maybe this should capture both whether they have time and whether they have techniques or abilities to evaluate project ideas whose expected value seems particularly hard to assess
Grantmaker capacity to solicit or generate new project ideas
Fundamental, macrostrategic, basic, or crucial-considerations-like work that could aid in generating project ideas and evaluating applications
E.g., it sounds like this would’ve been relevant to Max Daniel’s views on the IIDM working group in the recent round
To the extent that you’re bottlenecked by the number of good applications or would be bottlenecked by that if funded more, is that because (or do you expect it’d be because) there are too few applications in general, or too low a proportion that are high-quality?
When an application isn’t sufficiently high-quality, is that usually due to the quality of the idea, the quality of the applicant, or a mismatch between the idea and the applicant’s skillset (e.g., the applicant does seem highly generally competent, but lacks a specific, relevant skill)?
If there are too few applicants, or too few with relevant skills, is this because there are too few of such people interested in EA infrastructure stuff, or because there probably are such people who are interested in that stuff but they’re applying less often than would be ideal?
(It seems like answers to those questions could inform whether EAIF should focus on generating more ideas, finding more people from within EA who could execute ideas, finding more people from outside of EA who could execute ideas, or improving the match between ideas and people.)
Answering these thoroughly would be really tricky, but here are a few off-the-cuff thoughts:
1. Tough to tell. My intuition is ‘the same amount as I did’ because I was happy with the amount I could grant to each of the recipients I granted to, and I didn’t have time to look at more applications than I did. On the other hand, I could imagine that if the fund had significantly more funding, that would seem to provide a stronger mandate for trying things out and taking risks, so maybe that would have inclined me to spend less time evaluating each grant and use some money to do active grantmaking, or maybe would have inclined me to fund one or two of the grants that I turned down. I also expect to be less time constrained in future because we won’t be doing an entire quarter’s grants in one round, and because there will be less ‘getting up to speed’.
2. Probably most of these are some bottleneck, and also they interact:
- I had pretty limited capacity this round, and hope to have more in future. Some of that was also to do with not knowing much about some particular space and the plausible interventions in that space, so was a knowledge constraint. Some was to do with finding the most efficient way to come to an answer.
- It felt to me like there was some bottleneck of great applicants with great proposals. Some proposals stood out fairly quickly as being worth funding to me, so I expect I would have been able to fund more grants had there been more of these. It’s possible some grants we didn’t fund would have seemed worth funding had the proposal been clearer / more specific.
- There were macrostrategic questions the grantmakers disagreed over—for example, the extent to which people working in academia should focus on doing good research of their own versus encouraging others to do relevant research. There are also such questions that I think didn’t affect any of our grants this time but expect will in future, such as how to prioritise spreading ideas like ‘you can donate extremely cost-effectively to these global health charities’ versus more generalised EA principles.
3. The proportion of good applications was fairly high compared to my expectation (though of course the fewer applications we reject, the faster we can give out grants, so until we’re granting to everyone who applies, there’s always a sense in which the proportion of good applications is bottlenecking us). The proportion of applications that seemed pretty clearly great, well thought through and ready to go as initially proposed, and which the committee agreed on, seemed maybe lower than I might have expected.
4. I think I noticed some of each of these, and it’s a little tough to say because the better the applicant, the more likely they are to come up with good ideas and also to be well calibrated on their fit with the idea. If I could dial up just one of these, probably it would be quality of idea.
5. One worry I have is that many people who do well early in life are encouraged to do fairly traditional things—for example, they get offered good jobs and scholarships to go down set career tracks. By comparison, people who come into their own later on (e.g., late in university) are more in a position to be thinking independently about what to work on. Therefore my sense is that community building in general is systematically missing out on some of the people who would be best at it, because it’s a kind of weird, non-standard thing to work on. So I guess I lean on the side of too few people interested in EA infrastructure stuff.
(Just wanted to say that I agree with Michelle.)
Re 1: I don’t think I would have granted more.
Re 2: Mostly “good applicants with good proposals for implementing good project ideas” and “grantmaker capacity to solicit or generate new project ideas”, where the main bottleneck on the second of those isn’t really generating the basic idea but coming up with a more detailed proposal, figuring out who to pitch on it, etc.
Re 3: I think I would be happy to evaluate more grant applications and have a correspondingly higher bar. I don’t think that low quality applications make my life as a grantmaker much worse; if you’re reading this, please submit your EAIF application rather than worry that it is not worth our time to evaluate.
Re 4: It varies. Mostly it isn’t that the applicant lacks a specific skill.
Re 5: There are a bunch of things that have to align in order for someone to make a good proposal. There has to be a good project idea, and there has to be someone who would be able to make that work, and they have to know about the idea and apply for funding for it, and they need access to whatever other resources they need. Many of these steps can fail. E.g., probably there are people who I’d love to fund to do a particular project, but no-one has had the idea for the project, or someone has had the idea but the right person hasn’t heard about it, or hasn’t decided that it’s promising, or doesn’t want to try it because they don’t have access to some other resource. My current guess is that there are good project ideas that exist, and people who’d be good at doing them, and if we can connect the people to the projects and the required resources we could make some great grants; I hope to spend more of my time doing this in future.
How much do you think you would’ve granted in this recent round if the total funding available to the EAIF had been ~$5M? ~$10M? ~$20M?
I can’t think of any specific grant decision this round for which I think this would have made a difference. Maybe I would have spent more time thinking about how successful grantees might be able to utilize more money than they applied for, and on discussing this with grantees.
Overall, I think there might be a “paradoxical” effect: with much more total funding, I might have spent less time evaluating grant applications, and therefore made fewer grants this round. This is because under this assumption, I would more strongly feel that we should frontload building the capacity to make more, and larger, higher-value grants in the future, as opposed to optimizing the decisions on the grant applications we happened to get now. E.g., I might have spent more time on:
Generating leads for, and otherwise helping with, recruiting additional fund managers
Active grantmaking
‘Structural’ improvements to the fund—e.g., improving our discussions and voting methods
On 2, I agree with Buck that the two key bottlenecks—especially if we weight grants by their expected impact—were “Good applicants with good proposals for implementing good project ideas” and “Grantmaker capacity to solicit or generate new project ideas”.
I think I’ve had a stronger sense than at least some other fund managers that “Grantmaker capacity to evaluate applications” was also a significant bottleneck, though I would rank it somewhat below the above two, and I think it tends to be a larger bottleneck for grants that are more ‘marginal’ anyway, which diminishes its impact-weighted importance. I’m still somewhat worried that our lack of capacity (both time and lack of some abilities) could in some cases lead to a “false negative” on a highly impactful grant, especially due to our current way of aggregating opinions between fund managers.
5. If there are too few applicants, or too few with relevant skills, is this because there are too few of such people interested in EA infrastructure stuff, or because there probably are such people who are interested in that stuff but they’re applying less often than would be ideal?
I think both of these are significant effects. I suspect I might be more worried than others about “good people applying less often than would be ideal”, but not sure.
4. When an application isn’t sufficiently high-quality, is that usually due to the quality of the idea, the quality of the applicant, or a mismatch between the idea and the applicant’s skillset (e.g., the applicant does seem highly generally competent, but lacks a specific, relevant skill)?
All of these have happened. I agree with Buck that “applicant lacks a highly specific skill” seems uncommon; I think the cases of “mismatch between the idea and the applicant” are broader/fuzzier.
I don’t have an immediate sense that any of them is particularly common.
Re 3, I’m not sure I understand the question and feel a bit confused about how to answer it directly, but I agree with Buck:
I think I would be happy to evaluate more grant applications and have a correspondingly higher bar. I don’t think that low quality applications make my life as a grantmaker much worse; if you’re reading this, please submit your EAIF application rather than worry that it is not worth our time to evaluate.
Fundamental, macrostrategic, basic, or crucial-considerations-like work that could aid in generating project ideas and evaluating applications
E.g., it sounds like this would’ve been relevant to Max Daniel’s views on the IIDM working group in the recent round
Hmm, I’m not sure I agree with this. Yes, if I had access to a working crystal ball that would have helped—but for realistic versions of ‘knowing more about macrostrategy’, I can’t immediately think of anything that would have helped with evaluating the IIDM grant in particular. (There are other things that would have helped, but I don’t think they have to do with macrostrategy, crucial considerations, etc.)
This surprises me. Re-reading your writeup, I think my impression was based on the section “What is my perspective on ‘improving institutions’?” I’d be interested to hear your take on how I might be misinterpreting that section or misinterpreting this new comment of yours. I’ll first quote the section in full, for the sake of other readers:
I am concerned that ‘improving institutions’ is a hard area to navigate, and that the methodological and strategic foundations for how to do this well have not yet been well developed. I think that in many cases, literally no one in the world has a good, considered answer to the question of whether improving the decision quality of a specific institution along a specific dimension would have net good or net bad effects on the world. For instance, how to weigh the effects from making the US Department of Defense more ‘rational’ at forecasting progress in artificial intelligence? Would reducing bureaucracy at the UN Development Programme have a positive or negative net effect on global health over the next decade? I believe that we are often deeply uncertain about such questions, and that any tentative answer is liable to be overturned upon the discovery of an additional crucial consideration.
At the very least, I think that an attempt at answering such questions would require extensive familiarity with relevant research (e.g., in macrostrategy); I would also expect that it often hinges on a deep understanding of the relevant domains (e.g., specific policy contexts, specific academic disciplines, specific technologies, etc.). I am therefore tentatively skeptical about the value of a relatively broad-strokes strategy aimed at improving institutions in general.
I am particularly concerned about this because some prominent interventions for improving institutions would primarily result in making a given institution better at achieving its stated goals. For instance, I think this would often be the case when promoting domain-agnostic decision tools or policy components such as forecasting or nudging. Until alternative interventions are uncovered, I would therefore expect that some people interested in improving institutions would default to pursuing these ‘known’ interventions.
To illustrate some of these concerns, consider the history of AI policy & governance. I know of several EA researchers who share the impression that early efforts in this area were “bottlenecked by entangled and under-defined research questions that are extremely difficult to resolve”, as Carrick Flynn noted in an influential post. My impression is that this is still somewhat true, but that there has been significant progress in reducing this bottleneck since 2017. However, crucially, my loose impression (often from the outside) is that this progress was to a large extent achieved by highly focused efforts: that is, people who focused full-time on AI policy & governance, and who made large upfront investments into acquiring knowledge and networks highly specific to the AI domain or particular policy contexts such as the US federal government (or could draw on past experience in these areas). I am thinking of, for example, background work at 80,000 Hours by Niel Bowerman and others (e.g. here) and the AI governance work at FHI by Allan Dafoe and his team.
Personally, when I think of what work in the area of ‘improving institutions’ I’m most excited about, my (relatively uninformed and tentative) answer is: Adopt a similar approach for other important cause areas; i.e., find people who are excited to do the groundwork on, e.g., the institutions and policy areas most relevant to official development assistance, animal welfare (e.g. agricultural policy), nuclear security, biosecurity, extreme climate change, etc. I think that doing this well would often be a full-time job, and would require a rare combination of skills and good networks with both ‘EA researchers’ and ‘non-EA’ domain experts as well as policymakers.
It seems to me like things like “Fundamental, macrostrategic, basic, or crucial-considerations-like work” would be relevant to things like this (not just this specific grant application) in multiple ways:
I think the basic idea of differential technological development or differential progress is relevant here, and the development and dissemination of that idea was essentially macrostrategy research (though of course other versions of the idea predated EA-aligned macrostrategy research)
Further work to develop or disseminate this idea could presumably help evaluate future grant applications like this
This also seems like evidence that other versions of this kind of work could be useful for grantmaking decisions
Same goes for the basic concepts of crucial considerations, disentanglement research, and cluelessness (I personally don’t think the latter is useful, but you seem to), which you link to in that report
In these cases, I think there’s less useful work to be done further elaborating the concepts (maybe there would be for cluelessness, but currently I think we should instead replace that concept), but the basic terms and concepts seem to have improved our thinking, and macrostrategy-like work may find more such things
It seems it would also be useful and tractable to at least somewhat improve our understanding of which “intermediate goals” would be net positive (and which would be especially positive) and which institutions are more likely to advance or hinder those goals given various changes to their stated goals, decision-making procedures, etc.
Mapping the space of relevant actors and working out what sort of goals, incentives, decision-making procedures, capabilities, etc. they already have also seems relevant
E.g., http://gcrinstitute.org/2020-survey-of-artificial-general-intelligence-projects-for-ethics-risk-and-policy/
And I’d call that “basic” research
I essentially mean “not aimed at just prioritising or implementing specific interventions, but rather at better understanding features of the world that are relevant to many different interventions”
See also https://forum.effectivealtruism.org/posts/ham9GEsgvuFcHtSBB/should-marginal-longtermist-donations-support-fundamental-or
I think the expected value of the IIDM group’s future activities, and thus the expected impact of a grant to them, is sensitive to how much relevant fundamental, macrostrategic, etc., kind of work they will have access to in the future.
Given the nature of the activities proposed by the IIDM group, I don’t think it would have helped me for the grant decision if I had known more about macrostrategy. It would have been different if they had proposed a more specific or “object-level” strategy, e.g., lobbying for a certain policy.
I mean it would have helped me somewhat, but I think it pales in importance compared to things like “having more first-hand experience in/with the kind of institutions the group hopes to improve”, “more relevant knowledge about institutions, including theoretical frameworks for how to think about them”, and “having seen more work by the group’s leaders, or otherwise being better able to assess their abilities and potential”.
[ETA: Maybe it’s also useful to add that, on my inside view, free-floating macrostrategy research isn’t that useful, certainly not for concrete decisions the IIDM group might face. This also applies to most of the things you suggest, which strike me as ‘too high-level’ and ‘too shallow’ to be that helpful, though I think some ‘grunt work’ like ‘mapping out actors’ would help a bit, even if it’s not what I typically think of when saying macrostrategy.
Nor is ‘object-level’ work that ignores macrostrategic uncertainty useful.
I think often the only thing that helps is to have people do the object-level work who are both excellent at doing the object-level work and have the kind of opaque “good judgment” that allows them to be appropriately responsive to macrostrategic considerations and reach reflective equilibrium between incentives suggested by proxies and high-level considerations around “how valuable is that proxy anyway?”. Unfortunately, such people seem extremely rare. I also think (and here my view probably differs from that of others who would endorse most of the other things I’m saying here) that we’re not nearly as good as we could be at identifying people who may already be in the EA community and have the potential to become great at this, and at identifying and ‘teaching’ some of the practice-able skills relevant for this. (I think there are also some more ‘innate’ components.)
This is all slightly exaggerated to gesture at the view I have, and I’m not sure how much weight I’d want to give that inside view when making, e.g., funding decisions.]
I think to some extent there’s just a miscommunication here, rather than a difference in views. I intended to put a lot of things in the “Fundamental, macrostrategic, basic, or crucial-considerations-like work” bucket—I mainly wanted to draw a distinction between (a) all research “upstream” of grantmaking, and (b) things like available funding, good applicants with good proposals for implementing good project ideas, grantmaker capacity to evaluate applications, and grantmaker capacity to solicit or generate new project ideas.
E.g., I’d include “more relevant knowledge about institutions, including theoretical frameworks for how to think about them” in the bucket I was trying to gesture to.
So not just e.g. Bostrom-style macrostrategy work.
On reflection, I probably should’ve also put “intervention research” in there, and added as a sub-question “And do you think one of these types of research would be more useful for your grantmaking than the others?”
---
But then your “ETA” part is less focused on macrostrategy specifically, and there I think my current view does differ from yours (making yours interesting + thought-provoking).
I mean it would have helped me somewhat, but I think it pales in importance compared to things like “having more first-hand experience in/with the kind of institutions the group hopes to improve”, “more relevant knowledge about institutions, including theoretical frameworks for how to think about them”, and “having seen more work by the group’s leaders, or otherwise being better able to assess their abilities and potential”.
[ETA: Maybe it’s also useful to add that, on my inside view, free-floating macrostrategy research isn’t that useful, certainly not for concrete decisions the IIDM group might face. This also applies to most of the things you suggest, which strike me as ‘too high-level’ and ‘too shallow’ to be that helpful, though I think some ‘grunt work’ like ‘mapping out actors’ would help a bit, even if it’s not what I typically think of when saying macrostrategy.
Nor is ‘object-level’ work useful if it ignores macrostrategic uncertainty.
I think often the only thing that helps is to have people do the object-level work who are both excellent at that object-level work and have the kind of opaque “good judgment” that allows them to be appropriately responsive to macrostrategic considerations and to reach reflective equilibrium between the incentives suggested by proxies and high-level considerations around “how valuable is that proxy anyway?”. Unfortunately, such people seem extremely rare. I also think (and here my view probably differs from that of others who would endorse most of the other things I’m saying here) that we’re not nearly as good as we could be at identifying people who may already be in the EA community and have the potential to become great at this, and at identifying and ‘teaching’ some of the practice-able skills relevant for this. (I think there are also some more ‘innate’ components.)
This is all slightly exaggerated to gesture at the view I have, and I’m not sure how much weight I’d want to give that inside view when making, e.g., funding decisions.]
Thanks, these are interesting perspectives.
---
I think to some extent there’s just a miscommunication here, rather than a difference in views. I intended to put a lot of things into the “Fundamental, macrostrategic, basic, or crucial-considerations-like work” bucket—I mainly wanted to draw a distinction between (a) all research “upstream” of grantmaking, and (b) things like Available funding, Good applicants with good proposals for implementing good project ideas, Grantmaker capacity to evaluate applications, and Grantmaker capacity to solicit or generate new project ideas.
E.g., I’d include “more relevant knowledge about institutions, including theoretical frameworks for how to think about them” in the bucket I was trying to gesture to.
So not just e.g. Bostrom-style macrostrategy work.
On reflection, I probably should’ve also put “intervention research” in there, and added as a sub-question “And do you think one of these types of research would be more useful for your grantmaking than the others?”
---
But then your “ETA” part is less focused on macrostrategy specifically, and there I think my current view does differ from yours (making yours interesting + thought-provoking).