Some guesses about the Long-Term Future Fund (LTFF). I did not check with anybody else from the LTFF before posting.
We’re in the process of writing a more detailed post, but I’m sharing notes here for now since Lizka wanted someone to break the ice on this question.
It’s pretty hard for us to know the exact activities or value of marginal donations to the Long-Term Future Fund, as this depends greatly on a) the distribution of grant applications we receive and b) our other donations, which are unfortunately fairly correlated with each other. That said, I think a reasonable guess is that marginal donations would fund grants similar to ones we’ve narrowly rejected in the last few months (with small details changed and grants merged for anonymity):
($45k) ~nine months of independent research on LLM epistemology and building lie detectors for LLMs. The applicant had two workshop papers, and one of their papers placed in the top three in a competition at a top-tier ML conference. The applicant had previously received a smaller research grant from us, which was fairly successful but not stellar.
($50k) ~six months of independent research applying some theoretical notions of agency to existing ML models. The applicant has a fairly impressive theoretical track record and came with a very strong recommendation from an expert we respect in this area, but we narrowly voted to reject.
($5k) ~three months for a master’s graduate in mathematical physics to continue independent research on interpretable value learning and apply for CS PhDs. We thought their track record was very strong (high grades, strong reference), and of course the price tag is quite low, but we decided to narrowly reject, due in part to concerns about tractability.
($25k) ~four months for a former academic to tackle some unusually tractable research problems in disaster resilience after large-scale GCRs. We thought their track record was quite strong and that their past experience and networks made them well-positioned for further work in this area, but we ultimately decided to triage our resources.[1]
I think that under many worldviews these grants are quite promising, and it’s a loss that our community wasn’t able to fund them. But of course, rationally distributing limited resources is always hard, and I don’t have a great view of which projects the more established organizations are narrowly cutting due to funding constraints.
See also an earlier post analyzing our marginal grants in more detail here (which I believe Lizka has also linked above).
Another potential crux for donors is the Open Phil matching. Donors who value us having money much more than Open Phil having money should in theory be more excited to give to us. Right now, $1.28M of the $1.75M match is filled. I think it’s more likely than not (~60%?) that we’ll be able to fill the match, but donors may wish to account for the worlds where we don’t.
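To make that last point concrete, here’s a minimal back-of-envelope sketch of how a donor might weigh the match. This is my own illustration rather than an official LTFF calculation: the 1:1 match ratio is a placeholder assumption (check the actual matching terms), and the model assumes a marginal dollar given now gets matched while the cap is unfilled, but that if the match would have filled anyway, your matched dollar mostly displaces someone else’s.

```python
# Back-of-envelope model of donating under the Open Phil match.
# All parameters below are illustrative assumptions, not LTFF's actual terms.

match_ratio = 1.0      # assumed 1:1 match; check the real terms before relying on this
p_fill_anyway = 0.60   # the post's ~60% guess, read here as "match fills even without you"

# Naive view: every dollar under the cap is matched.
naive_total = 1 + match_ratio

# Counterfactual view: if the match would have filled anyway, your matched
# dollar displaces someone else's, so the extra match only materializes in
# worlds where the cap would otherwise have gone unmet.
counterfactual_total = 1 + (1 - p_fill_anyway) * match_ratio

print(f"Naive dollars moved per dollar donated:          {naive_total:.2f}")
print(f"Counterfactual dollars moved per dollar donated: {counterfactual_total:.2f}")
```

Under these assumptions, a marginal dollar moves roughly $1.40 in expectation rather than the naive $2.00; the gap is exactly the “worlds where we don’t get the matching” consideration above.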
You can donate to us either via Giving What We Can or every.org.
I’ll write (and link) a more detailed post on this subject soon.
[1] We internally disagreed enough about cause prioritization that it was relevant to this grant, but our votes were ultimately very close.
We just published a longer post with some more grants here.