I work as a grantmaker and have spent some time trying to improve the LTFF form. I am really only speaking for myself here and not other LTFF grantmakers.
I think this post made a bunch of interesting points, but I'm just responding with my quick impressions, mostly where I disagree, as I think that will generate more useful discussion.
Pushes away the most motivated people
I think this is one of the points raised that I find most worrying (if true). I think it would be great to make grants useful for x-risk reduction to people who aren't motivated by x-risk but are likely to do useful instrumental work anyway. I feel a bit pessimistic about being able to identify such people in the current LTFF set-up (though it totally does happen) and feel more optimistic about well-scoped “requests for proposals” and “active grantmaking” (where the funder has a pretty narrow vision for the projects they want to fund and is often approaching grantees proactively or directly involved in the projects themselves). My best guess is that passive and broad grantmaking (which is the main product of the LTFF) is not the best way of engaging with these people, and that we shouldn't optimise this kind of application form for them and should instead invest in ‘active’ programs.
(I also find it a little surprising that you used community building as an example here. My personal experience is that the majority of productive community building I am aware of has been led by people who were pretty cause-motivated (though I may be less familiar with the less cause-motivated CB efforts that the OP is excited about).)
The grand narrative claim
My sense is that most applicants (particularly ones in EA and adjacent communities) do not consider “what impact will my project have on the world?” to create an expectation of some kind of grand narrative. It's plausible that we are strongly selecting against people who are put off by this question, but I think this is pretty unlikely (e.g. afaik this hasn't been given as feedback before, and the answers I see people give don't generally have a ‘grand narrative vibe’). My best guess is that the question is interpreted as something closer to “what are the expected consequences of your project?”. Fwiw, I do think that people find applying to funders intimidating, but I don't think this question is unusually intimidating relative to other ‘explain your project’-type questions in the form (or your suggestions).
Confusion around the corrupting epistemics point
I didn't quite understand this point. Is the concern that people will believe they won't be funded without making large claims and are therefore put off applying, or that the question indicates that the funders are much more receptive to overinflated claims, which results in more projects being run by people with poor epistemics (or something else)?
I interpreted Elizabeth as saying that the form (and other forms like it) will make people believe that they won't be funded without making large claims. They consequently adopt incorrect beliefs to justify large claims about the value of their projects. In short, a treatment effect, not a selection effect.
I clarified some of my epistemic concerns.
I think my model might be [good models + high-impact vision grounded in that model] > [good models alone + modest goals] > [mediocre model + grand vision], where a good model means one that is both reasonably accurate and continually improving based on feedback loops inherent in the project, with the latter probably being more important. And I think that if you reward grand vision too much, you both select for and cause worse models with less self-correction.
Thanks Caleb. I'm heads down on a short-term project so I can't give a long reply, but I have a few short things.
Raemon offered to do the heavy lifting on why the epistemic point is so important: https://www.lesswrong.com/posts/FNPXbwKGFvXWZxHGE/grant-applications-and-grand-narratives.
What do you think of applicants giving percentile outcomes, stating terminal goals without justification (e.g. “improve EA productivity”), and citing other people to justify why their project is high impact?
Is that link correct?