I’d like to make sure that the person who reads the grant takes AI safety seriously, and much more seriously than other X-risks.
FWIW, I fit that description in the sense that I think AI X-risk is higher probability than other X-risks. I imagine some/most others at LTFF would as well.
I would guess it's more likely than not that this belief is universal at the fund, tbh (e.g., nobody objected to the recent decision to triage ~all of our currently limited funding to alignment grants).