What are you not excited to fund?
Of course, there are lots of things we would not want to (or cannot) fund, so I’ll focus on things I would not want to fund, but which someone reading this might have been interested in supporting or applying for.
Organisations or individuals seeking influence, unless they have a clear plan for how to use that influence to improve the long-term future, or I have an exceptionally high level of trust in them.
This comes up surprisingly often. A lot of think-tanks and academic centers fall into this trap by default. A major way in which non-profits sustain themselves is by dealing in prestige: universities selling naming rights being a canonical example. It’s also pretty easy to justify to oneself: of course you have to make this one sacrifice of your principles, so you can do more good later, etc.
I’m torn on this, because gaining leverage can be a good strategy, and indeed it’s hard to see how we’ll solve some major problems without individuals or organisations pursuing it. So I wouldn’t necessarily discourage people from taking this path, though you might want to think hard about whether you’ll be able to avoid value drift. But as a donor you face a big information asymmetry: if someone is seeking support for something that isn’t directly useful now, with the promise of doing something useful later, it’s hard to know whether they’ll follow through.
Movement building that increases quantity but reduces quality or diversity. The initial composition of a community has a big effect on its long-term composition: people tend to recruit people like themselves. The longtermist community is still relatively small, so we can have a substantial effect on its current (and therefore long-term) composition now.
So when I evaluate whether to fund a movement-building intervention, I don’t just ask whether it’ll attract enough good people to be worth the cost, but also whether the intervention is sufficiently targeted. This is a bit counterintuitive; in the past (e.g. when I was running student groups) I tended to assume that bigger was always better.
That said, the details really matter here. For example, AI risk is already in the public consciousness, but most people have only been exposed to terrible, low-quality articles about it. So I like Robert Miles’s YouTube channel, since it’s a vastly better explanation of AI risk than most people will have come across. I still think most of the value will come from the small percentage of people who seriously engage with it, but I expect it to be positive, or at least neutral, for the vast majority of viewers.
I agree that both of these are among the top five things I’ve encountered that make me unexcited about a grant.
Like Adam, I’ll focus on things that someone reading this might be interested in supporting or applying for. I want to emphasize that this is my personal take, not representing the whole fund, and I would be sad if this response stopped anyone from applying: there’s a lot of healthy disagreement within the fund, and we fund plenty of things where at least one person thinks they’re below our bar. I also think a well-justified application could definitely change my mind.
Improving science or technology, unless there’s a strong case that the improvement would differentially benefit existential risk mitigation (or some other aspect of our long-term trajectory). As Ben Todd explains here, I think this is unlikely to be as highly leveraged for improving the long-term future as trajectory-changing efforts. I don’t think there’s a strong case that generally speeding up economic growth is an effective existential risk intervention.
Climate change mitigation. From the evidence I’ve seen, I think climate change is unlikely to be either directly existentially threatening or a particularly highly leveraged existential risk factor. (It’s also not very neglected.) But I could be excited about funding research that changed my mind about this.
Most self-improvement / community-member-improvement type work, e.g. “I want to create materials to help longtermists think better about their personal problems.” I’m not universally unexcited about funding this, and there are people who I think do good work like this, but my overall prior is that proposals here won’t be very good.
I am also unexcited about the things Adam wrote.
(I drafted this comment earlier and feel like it’s largely redundant by now, but I thought I might as well post it.)
I agree with what Adam and Asya said. I think many of those points can be summarized as ‘there isn’t a compelling theory of change for this project to result in improvements in the long-term future.’
Many applicants have great credentials, impressive connections, and a track record of getting things done, but their ideas and plans seem optimized for some goal other than improving the long-term future, and it would be a suspicious convergence if they were excellent for the long-term future as well. (If grantseekers don’t try to make the case for this in their application, I try to find out myself if this is the case, and the answer is usually ‘no.’)
We’ve received applications from policy projects, experienced professionals, and professors (including one with tens of thousands of citations), but ended up declining largely for this reason. It’s worth noting that these applications aren’t bad – often, they’re excellent – but they’re only tangentially related to what the LTFF is trying to achieve.