How should applicants think about grant proposals that are rejected? I find that newer members of the community especially can be heavily discouraged by rejections; is there anything you would want to communicate to them?
I don’t know how many points I can really cleanly communicate to such a heterogeneous group, and I’m really worried about anything I say in this context being misunderstood or reified in unhelpful ways. But here goes nothing:
First of all, I don’t know man, should you really listen to my opinion? I’m just one guy, who happened to have some resources/power/attention vested in me; I worry that people (especially the younger EAs) vastly overestimate how much my judgment is worth, relative to their own opinions and local context.
Thank you for applying, and for wanting to do the right thing. I genuinely appreciate everybody who applies, whether for a small project or a large one, in the hopes that their work can make the world a better place. It’s emotionally hard and risky, and I have a lot of appreciation for the very small number of people who tried to take a step toward making the world better.
These decisions are really hard, and we’re likely to screw up. Morality is hard, and longtermism by its very nature means worse feedback loops than normal. I’m sure you’re familiar with how selections/rejections can often be extremely noisy in other domains (colleges, jobs, etc.). There aren’t many reasons to think we’ll do better, and some key reasons to think we’d do worse. We tried our best to make the best funding decisions we could, given limited resources, limited grantmaker time, and limited attention and cognitive capabilities. It’s very likely that we have fucked up, and will continue to do so.
This probably means that if you continue to be excited about your project in the absence of LTFF funding, it makes sense to continue to pursue it, either on your own time or while seeking other funding.
Funding is a constraint again, at least for now. So earning-to-give might make sense. The wonderful thing about earning-to-give is that money is fungible; anybody can contribute, and probabilistically our grantees and would-be grantees are likely to be people with among the highest earning potentials in the world. So if you haven’t found a good match for direct work (whether due to personal preferences or external factors like not receiving funding), earning-to-give can be a great option for both impact and other desiderata.
Please don’t work on capabilities in a scaling lab, or otherwise contribute to ending humanity. I don’t know how much you care about my opinion, or even if you should. But while some people find it surprisingly comforting to work on projects that are destructive for the world when they suffer a temporary setback in attempting to save it, I suspect this will end up being the type of thing they’d regret, in addition to being straightforwardly[1] altruistically bad.
[1] Assuming that you agree with the object-level assessment that working in scaling labs hastens the world ending. Obviously there are reasonable object-level disagreements to be had here! (And I’m far from certain about that claim myself.)