Are there any areas covered by the fund’s scope where you’d like to receive more applications?
Overall, I’d like to see more work that has a solid longtermist justification but is less closely tied to existing longtermist work. It seems like the LTFF might be well-placed to encourage this, since we provide funding outside of established orgs. This round, we received many applications from people who weren’t very engaged with the existing longtermist community. While these didn’t end up meeting our bar, some of the projects were fairly novel and good enough to make me excited about funding people like this in general.
There are also lots of particular less-established directions where I’d personally be interested in seeing more work, e.g.:
Work on structured transparency tools for detecting risks from rogue actors
Work on information security’s effect on AI development
Work on the offense-defense balance in a world with many advanced AI systems
Work on the likelihood and moral value of extraterrestrial life
Work on increasing institutional competence, particularly around existential risk mitigation
Work on effectively spreading longtermist values outside of traditional movement-building
These are largely a reflection of what I happen to have been thinking about recently, and definitely not my fully endorsed answer to this question. I’d like to spend time talking to others and coming to more stable conclusions about the specific work the LTFF should encourage more of. This is very much a personal take; I’m not sure whether others on the fund would agree.
Buying extra time for people already doing great work. A lot of high-impact careers pay pretty badly: many academic roles (especially outside the US), some non-profit and think-tank work, etc. There are certainly diminishing returns to money, and I don’t want the longtermist community to engage in zero-sum consumption of Veblen goods. But there are also plenty of things that are solid investments in your productivity: a comfortable home office, a modern computer, ordering takeaway or hiring cleaners, enough runway to avoid financial insecurity, etc.
Financial needs also vary a fair bit from person to person. I know some people who are productive and happy living off Soylent and working on a laptop on their bed, whereas I’d quickly burn out doing that. Others might have higher needs than me, e.g. if they have financial dependents.
As a general rule, if I’d be happy to fund someone at $Y/year to do this work independently, and their employer is paying them $X/year to do it, then I should be happy to pay the difference of $(Y−X)/year, provided the applicant has a good plan for what to do with the money. If you think you might benefit from more money, I’d encourage you to apply, even if you don’t think you’ll get it: a lot of people underestimate how much their time is worth.
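To make that concrete with purely hypothetical numbers: if I’d value someone’s independent work on a project at $70,000/year and their employer pays them $50,000/year to do it, the relevant ask would be the $20,000/year difference, put toward things like equipment, cleaning help, or other solid investments in productivity.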
Biosecurity. At the margin, I’m about as excited about biosecurity as I am about mitigating AI risks, largely because biosecurity currently seems much more neglected from a longtermist perspective. Yet the fund makes many more grants in the AI risk space.
We have received a reasonable number of biosecurity applications in recent rounds (though still substantially fewer than for AI), but our acceptance rate has been relatively low. I’d be particularly excited to see applications with a relatively clear path to impact. Many of the applications we receive aim at raising awareness in a general way, and I think getting the details right is crucial here: targeting the right community, and having enough context and experience to understand what that community would benefit from hearing.