What kinds of grants tend to be most controversial among fund managers?
What disagreements do the LTFF fund managers tend to have with each other about what’s worth funding?
I’m answering both this question and Neel Nanda’s question (the one above) in the same comment. As usual, other fund managers are welcome to disagree. :P
A few cases that come to mind:
When a grant appears to have both high upside and high downside risk (eg, red/yellow flags about the applicant, or an applicant who wants to work in a naturally sensitive space).
Fund managers often have disagreements with each other on how to weigh upside and downside risks.
Sometimes research projects that are exciting according to one (or a few) fund managers are object-level useless for saving the world according to other fund managers.
Sometimes a particular fund manager champions a project and inside-view believes it has world-saving potential while other fund managers disagree. Other times nobody inside-view believes the project has world-saving potential, but it has outside-view indications of being valuable (eg the grantee has done useful work in the past, or has endorsements from people who have), and different fund managers weigh that outside-view evidence more or less heavily.
Grants with unusually high stipend asks.
Sometimes a better-than-average grant application will ask for a stipend (or other expense) that’s high by our usual standards.
We have internal disagreements both on the object-level question of whether approving such grants is a good idea and on which set of policies or values we ought to use to set salaries (“naive EV” vs “fairness among grantees” vs “having an ‘equal sacrifice’ perspective between us and the grantee” vs “not wanting to worsen power dynamics” vs “wanting to respect local norms in other fields”, etc).
For example, academic stipends for graduate students in technical fields are often much lower than salaries in the corporate world. A deliberation process might look like:
A. Our normal policy of “paying 70% of counterfactual” would suggest a very high stipend (see the sketch after this list).
B. But this would be very “out of line” with prevailing academic norms; it’s also a lot more expensive than following those norms, under which grad students are often compensated partly in non-monetary value.
C. But our grantees often want the non-monetary gains from a graduate degree much less than the typical grad student does. Most of our grantees in academia don’t really want to be professors, and care less about academic prestige than is typical. They often sought out grad school primarily because it’s one of the few places with good technical mentorship in AI safety-adjacent fields.
D. But if we pay them at our normal level, then this is sort of “defecting” on prevailing academic norms. It’d look weird, draw attention, etc.
E. But if the stipends are as low as is normal in CS academia (especially in some locations/universities), this also creates a disparity between our academic grantees and other grantees, even if the former are actually higher impact...
F. etc...
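As a concrete illustration of the tension between steps A and B, here is a minimal sketch of the “70% of counterfactual” heuristic. All salary figures below are hypothetical placeholders chosen for the example, not actual LTFF or university numbers:

```python
# Minimal sketch of the "70% of counterfactual" stipend heuristic discussed
# above. The salary figures are hypothetical, not real LTFF numbers.

def policy_stipend(counterfactual_salary: float, fraction: float = 0.70) -> float:
    """Stipend implied by paying a fraction of the applicant's counterfactual salary."""
    return fraction * counterfactual_salary

# Hypothetical applicant: an ML engineer who could earn $200k/year in industry,
# versus a typical CS PhD stipend of roughly $40k/year.
counterfactual_salary = 200_000
academic_norm = 40_000

suggested = policy_stipend(counterfactual_salary)
print(f"Policy suggests:  ${suggested:,.0f}/year")            # $140,000/year
print(f"Academic norm:    ${academic_norm:,.0f}/year")        # $40,000/year
print(f"Gap versus norms: {suggested / academic_norm:.1f}x")  # 3.5x
```

Even with placeholder numbers, the size of the gap makes it clear why steps B through E pull in different directions.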
To add to what Linch said: anecdotally, it seems like there’s more disagreement when a grant’s path to impact is less direct (as opposed to, say, AI technical research), such as with certain types of governance work, outreach, or forecasting.