What crucial considerations and/or key uncertainties do you think the EA LTF fund operates under?
Some related questions with slightly different framings:
What types/lines of research do you expect would be particularly useful for informing the LTFF’s funding decisions?
Do you have thoughts on what types/lines of research would be particularly useful for informing other funders’ funding decisions in the longtermism space?
Do you have thoughts on how the answers to those two questions might differ?
What types/lines of research do you expect would be particularly useful for informing the LTFF’s funding decisions?

I’d be interested in better understanding the trade-off between funding independent researchers versus researchers at established organisations. Relative to other donors, we fund a lot of independent research. My hunch is that most independent researchers are less productive than they would be working at an organisation, though of course for many of them that isn’t an option (geographical constraints, limited organisational capacity, etc.). This leads me to set a somewhat higher bar for funding independent research. Some other fund managers disagree and think independent researchers tend to be more productive, e.g. due to bad incentives in academic and industry labs.
I expect distillation-style work to be particularly useful. I expect there’s already relevant research here: e.g. case studies of the most impressive breakthroughs, studies looking at different incentives in academic funding, etc. There probably won’t be a definitive answer, so it’d also be important that I trust the judgement of the people involved, or that a variety of people with different priors converge on similar conclusions.
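To make the ‘higher bar’ point above concrete, here is a minimal sketch in Python. The function name (required_quality) and all the numbers are invented for illustration and do not reflect any actual LTFF threshold; the sketch just shows how an assumed productivity discount mechanically translates into a higher quality bar, and how the conclusion flips if independent researchers are in fact more productive:

```python
# Toy model of the funding bar for independent vs. organisation-based research.
# All numbers are illustrative assumptions, not LTFF figures.

def required_quality(org_bar: float, relative_productivity: float) -> float:
    """Project quality an independent researcher needs in order to match the
    expected impact of an organisation-based project that just clears `org_bar`,
    if independent work is `relative_productivity` times as productive."""
    return org_bar / relative_productivity

if __name__ == "__main__":
    # If independent researchers are assumed to be ~70% as productive, their
    # proposals need to look ~1.4x as promising to be an equally good use of
    # marginal funds...
    print(required_quality(org_bar=1.0, relative_productivity=0.7))  # ~1.43
    # ...whereas if (as some other fund managers suspect) bad incentives make
    # organisational work less productive, the bar should actually be lower.
    print(required_quality(org_bar=1.0, relative_productivity=1.2))  # ~0.83
```

On this framing, the empirical question is the value of relative_productivity, which is exactly what the case studies and incentive research mentioned above could help pin down.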
Do you have thoughts on what types/lines of research would be particularly useful for informing other funders’ funding decisions in the longtermism space?

While larger donors can suffer from diminishing returns, there are sometimes also increasing returns to scale. One important thing larger donors can do that isn’t really possible at the LTFF’s scale is to found new academic fields. More clarity on how to achieve this, and on how to steer a new field in a useful direction, would be great.
It’s still mysterious to me how academic fields actually come into being. Equally importantly, what predicts whether a field will have good epistemics, whether it will have influence, and so on? Clearly part of this is down to the domain of study (it’s easier to get rigorous results in category theory than in economics; it’s easier to get policymakers to care about economics than category theory). But I suspect it’s also heavily dependent on the culture created by the early founders and on the impressions outsiders form of the field. Some evidence for this is that very closely related fields can end up going in very different directions: e.g. machine learning and statistics.
Do you have thoughts on how the answers to those two questions might differ?

A key difference between the LTFF and some other funders is that we receive donations on a rolling basis, and I expect these donations to continue to increase over time. By contrast, many major donors have an endowment to spend down. For them, the timing of their giving is a really important question: how much should they give now versus later? For us, the case for simply granting out every dollar we receive seems pretty strong (aside from keeping enough of a buffer to even out short-term fluctuations in application quality and donation revenue).
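To make the give-now-versus-later trade-off concrete, here is a minimal sketch in Python. The function impact_of_waiting and the two rates (investment_return, effectiveness_decay) are assumptions invented for this illustration, not estimates; the point is only the shape of the comparison an endowed donor has to make:

```python
# Toy give-now-vs-later comparison for an endowed donor.
# All parameter values are made-up assumptions, not estimates.

def impact_of_waiting(dollars: float,
                      years: float,
                      investment_return: float = 0.05,
                      effectiveness_decay: float = 0.08) -> float:
    """Impact (in arbitrary units) of investing `dollars` for `years` and then
    granting, normalised so that granting today has impact equal to `dollars`.
    Assumes returns compound at `investment_return` while the best available
    opportunities become `effectiveness_decay` less cost-effective each year."""
    grown = dollars * (1 + investment_return) ** years
    effectiveness = (1 - effectiveness_decay) ** years
    return grown * effectiveness

if __name__ == "__main__":
    for years in (0, 5, 10, 20):
        print(f"grant in {years:>2} years -> impact {impact_of_waiting(1.0, years):.2f}")
    # With these particular numbers waiting loses value; flip the two rates and
    # it wins. An endowed donor has to take a stand on that comparison, whereas
    # a fund regranting rolling donations (minus a small buffer) largely doesn't.
```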
Edit: I really like Adam’s answer.
There are a lot of things I’m uncertain about, but I should say that I expect most research aimed at resolving these uncertainties not to provide strong enough evidence to change my funding decisions (though some research definitely could!). I do think weaker evidence could change my decisions if we had a larger number of high-quality applications to choose from. On the current margin, I’d be more excited about research aimed at identifying new interventions that could be promising.
Here’s a small sample of the things that feel particularly relevant to grants I’ve considered recently. I’m not sure if I would say these are the most crucial:
What sources of existential risk are plausible?
If I thought that AI systems’ capabilities were perfectly entangled with their ability to learn human preferences, I would be unlikely to fund AI alignment work.
If I thought institutional incentives were such that people wouldn’t create AI systems that could be existentially threatening without taking maximal precautions, I would be unlikely to fund AI risk work at all.
If I thought our lightcone was overwhelmingly likely to be settled by another intelligent species similar to us, I would be unlikely to fund existential risk mitigation outside of AI.
What kind of movement-building work is effective?
Adam writes above that he thinks movement-building work that sacrifices quality for quantity is unlikely to be good. I agree with him, but I could be wrong about that. If I changed my mind here, I’d be more likely to fund a larger number of movement-building projects.
It seems possible to me that work that’s explicitly labeled as ‘movement-building’ is generally not as effective for movement-building as high-quality direct work, and could even be net-negative. If I decided this was true, I’d be less likely to fund movement-building projects at all.
What strands of AI safety work are likely to be useful?
I currently take a fairly unopinionated approach to funding AI safety work: I’m willing to fund anything that I think a sufficiently large subset of smart researchers would consider promising. I can imagine becoming more opinionated here and being less likely to fund certain kinds of work.
If I were certain that very advanced AI systems were coming soon and would look like large neural networks, I would be unlikely to fund speculative work focused on alternative paths to AGI.
If I believed that AI systems were overwhelmingly unlikely to look like large neural networks, this would have some effect on my funding decisions, but I’d have to think more about the value of near-term work from an AI safety field-building perspective.