What are you excited to fund?
A related question: are there categories of things you’d be excited to fund, but haven’t received any applications for so far?
I think the long-termist and EA communities seem too narrow on several important dimensions:
Methodologically, there are several relevant approaches that seem poorly represented in the community. A concrete example would be having more people with a history background, which seems critical for understanding long-term trends. In general, I think we could do a better job of interfacing with the social sciences and other intellectual movements.
I do think there are challenges here. Most fields are not designed to answer long-term questions. For example, history is often taught by focusing on particular periods, whereas we are more interested in trends that persist across many periods. So the first people joining from a particular field are going to need to figure out how to adapt their methodology to the unique demands of long-termism.
There are also risks from spreading ourselves too thin. It’s important that we maintain a coherent community whose members are able to communicate with each other, and having too many different methodologies and epistemic norms could make this hard. Eventually I think we’re going to need to specialize: I expect different fields will benefit from different norms and heuristics. But right now I don’t think we know the right way to split up long-termism, so I’d be hesitant to specialize too early.
I also think we are currently too centered in Europe and North America, and see a lot of value in having a more active community in other countries. Many long-term problems require some form of global coordination, which will benefit significantly from having people in a variety of countries.
I do think we need to take care here. First impressions count a lot, so poorly targeted initial outreach could hinder long-term growth in a country. Even seemingly simple things like book translations can be quite difficult to get right. For example, the distinction in English between “safety” and “security” is absent in many languages, which can make translating AI safety texts quite challenging!
More fundamentally, EA ideas arose out of quite a specific intellectual tradition around questions of how to lead a good life, what meaning looks like, and so on, so figuring out how our ideas do or don’t resonate with people in places with very different intellectual traditions is a serious challenge.
Of course, our current demographic breakdown is not ideal for a community that wants to exist for many decades to come, and I think we’re missing out on some talented people because of this. It doesn’t help that many of the fields and backgrounds we are drawing from tend to be unrepresentative, especially in terms of gender balance. So improving this seems like it would dovetail well with drawing people from a broader range of academic backgrounds.
I also suspect that the set of motivations we’re currently tapping into is quite narrow. The current community is mostly utilitarian. But the long-termist case stands up well under a wide range of moral theories, so I’d like to see us reaching people with a wider range of moral views.
Related to this, I think we currently appeal only to a narrow range of personality types. This is inevitable to a degree: I’d expect individuals higher in conscientiousness or neuroticism to be more likely to want to work to protect the long-term future, for example. But I also think we have so far disproportionately attracted introverts, which seems more like an accident of the communities we’ve drawn upon and how we message things. Notably, extraversion vs. introversion does not seem to correlate with pro-environmental behaviours, for example, whereas agreeableness and openness do (Walden, 2015; Hirsh, 2010).
I would be excited about projects that work towards these goals.
(As mentioned in the original post, I’m not a Fund manager, but I sometimes advise the LTFF as part of my role as Head of EA Funds.)
I agree with Adam and Asya. Some quick further ideas off the top of my head:
More academic teaching buy-outs. I think there are likely many longtermist academics who could get a teaching buy-out but aren’t even considering it.
Research into the long-term risks (and potential benefits) of genetic engineering.
Research aimed at improving cause prioritization methodology. (This might be a better fit for the EA Infrastructure Fund, but it’s also relevant to the LTFF.)
Open access fees for research publications relevant to longtermism, such that this work is available to anyone on the internet without any obstacles, plausibly increasing readership and citations.
Research assistants for academic researchers (and for independent researchers if they have a track record and there’s no good organization for them).
Books about longtermism-relevant topics.
How important is this in the context of e.g. Sci-Hub existing?
Not everyone uses Sci-Hub, and even for those who do, open access still removes trivial inconveniences. But yeah, Sci-Hub and the fact that PDFs (often preprints) are usually easy to find even when a paper isn’t open access make me a bit less excited.
That’s really interesting to read, thanks very much! (Both for this answer and for the whole AMA exercise)
In another answer I’ve already covered areas where we don’t currently make many grants but where I’d be excited for us to make more. So in this answer I’ll focus on areas where we already commonly make grants, but would still like to scale up further.
I’m generally excited to fund researchers when they have a good track record, are focusing on important problems, and are working on research that is likely to slip through the cracks of other funders or research groups: for example, distillation-style research, or work that is speculative or doesn’t neatly fit into an existing discipline.
Another category, which is a bit harder to define, is grants that we have a comparative advantage in evaluating. This could be because one of the fund managers happens to already be an expert in the area and has a lot of context, or because the application is time-sensitive and we’re just about to start evaluating a grant round. In these cases the counterfactual impact is higher: these grants are less likely to be made by other donors.