Overall, I’d like to see more work that has a solid longtermist justification but is less closely tied to existing longtermist work. The LTFF seems well-placed to encourage this, since we provide funding outside of established orgs. This round, we received many applications from people who weren’t very engaged with the existing longtermist community. While these didn’t end up meeting our bar, some of the projects were fairly novel and good enough to make me excited about funding people like this in general.
There are also many specific, less-established directions where I’d personally be interested in seeing more work, e.g.:
Work on structured transparency tools for detecting risks from rogue actors
Work on information security’s effect on AI development
Work on the offense-defense balance in a world with many advanced AI systems
Work on the likelihood and moral value of extraterrestrial life
Work on increasing institutional competence, particularly around existential risk mitigation
Work on effectively spreading longtermist values outside of traditional movement-building
These largely reflect what I happen to have been thinking about recently, and they are definitely not my fully endorsed answer to this question. I’d like to spend time talking to others and coming to more stable conclusions about the specific work the LTFF should encourage more of.