Infohazard policy/commitment? I’d like to make sure that the person who reads the grant takes AI safety seriously, and much more seriously than other X-risks; to me that’s the main and only limiting factor. I don’t worry about people taking credit for others’ ideas, profiting off of the knowledge, or sharing info with others (as long as the sharing is done in a way that takes AI safety seriously), only about the reader not being aligned with AI safety. I’m worried that my AI-related grant proposal will distract large numbers of people from AI safety, and I think that someone who also prioritizes AI safety would, like me, act to prevent that (consistently enough for the benefits of the research to outweigh the risks).
I think we (especially the permanent fund managers; some of the guest fund managers are very new) are reasonably good at discretion with infohazards. But ultimately we have neither processes nor software in place to prevent either social or technical breaches with reasonably high confidence; if you are very worried about infohazard risks of your proposals, I’m not entirely sure what to do, and I suspect we’d be a bad place to host such an evaluation.
Depending on the situation, it’s plausible one of us could advise you re: who else to reach out to, likely a funder at Open Philanthropy.
This link might also be helpful.
FWIW I fit that description in the sense that I think AI X-risk is higher probability. I imagine some / most others at LTFF would as well.
I would guess it's more likely than not that this belief is universal at the fund tbh (e.g. nobody objected to the recent decision to triage ~all of our currently limited funding to alignment grants).